Dataset schema (field: type, observed string-length range):
entry_id: stringlengths (33-33)
published: stringlengths (14-14)
title: stringlengths (17-188)
authors: sequence
primary_category: stringlengths (5-18)
categories: sequence
text: stringlengths (2-629k)
http://arxiv.org/abs/2307.05596v1
20230710193032
Compositional Generalization from First Principles
[ "Thaddäus Wiedemer", "Prasanna Mayilvahanan", "Matthias Bethge", "Wieland Brendel" ]
cs.LG
[ "cs.LG", "stat.ML" ]
[1]Equal contribution [2]Equal supervision [0]Code available at <github.com/brendel-group/compositional-ood-generalization>

Leveraging the compositional nature of our world to expedite learning and facilitate generalization is a hallmark of human perception. In machine learning, on the other hand, achieving compositional generalization has proven to be an elusive goal, even for models with explicit compositional priors. To get a better handle on compositional generalization, we here approach it from the bottom up: Inspired by identifiable representation learning, we investigate compositionality as a property of the data-generating process rather than the data itself. This reformulation enables us to derive mild conditions on only the support of the training distribution and the model architecture, which are sufficient for compositional generalization. We further demonstrate how our theoretical framework applies to real-world scenarios and validate our findings empirically. Our results set the stage for a principled theoretical study of compositional generalization.

§ INTRODUCTION

Systematic compositionality <cit.> is the remarkable ability to utilize a finite set of known components to understand and generate a vast array of novel combinations. This ability, referred to by Chomsky <cit.> as the “infinite use of finite means”, is a distinguishing feature of human cognition, enabling us to adapt to diverse situations and learn from varied experiences. Leveraging the compositional nature of the world for learning is a long-standing idea. In object-centric learning, models learn to isolate representations of individual objects as building blocks for complex scenes. In disentanglement, models aim to infer factors of variation that capture compositional and interpretable aspects of their inputs, for example hair color, skin color, and gender for facial data. So far, however, there is little evidence that these methods deliver substantially increased learning efficacy or generalization capabilities (<cit.>, <cit.>). Across domains and modalities, machine learning models still largely fail to capture and utilize the compositional nature of the training data (<cit.>). To exemplify this failure, consider a model trained on a dataset with images of two sprites with varying position, size, shape, and color overlaid on a black canvas. Given the latent factors, a simple multi-layer neural network can easily learn to reconstruct images containing compositions of these sprites that were covered by the training set (Figure <ref>, top rows). However, reconstruction fails for novel compositions, even if the individual components have been observed before (Figure <ref>, bottom row). Failure to generalize to unseen data in even this simplistic regression setting demonstrates that compositional generalization does not automatically emerge simply because the data is of a compositional nature. We therefore take a step back to formally study compositionality and understand what conditions need to be fulfilled for compositional generalization to occur. 
To this end, we take inspiration from identifiable representation learning and define a broad class of data generating processes that are compositional and for which we can provably show that inference models can generalize to novel compositions that have not been part of the training set. More precisely, our contributions are as follows: * We specify compositional data-generating processes both in terms of their function class and latent distributions (Sections <ref> and <ref>) such that they cover a wide range of assumptions made by existing compositional methods. * We prove a set of sufficient conditions under which models trained on the data are able to generalize compositionally (Section <ref>). * We validate our theory in a range of synthetic experiments and perform several ablation studies that relate our findings to empirical methods (Section <ref>). § RELATED WORK Representation learning Disentanglement and identifiable representation learning aim to learn succinct representations that both factorize the data space efficiently and are robust towards distributional changes <cit.>. However, the expectation that more compositional representations lead to better out-of-distribution (OOD) generalization has not been met, as demonstrated by <cit.> and <cit.>. Although our work does not directly address generalization issues in identifiable representation learning, our setup is directly inspired by it, and we examine data-generating processes similar to <cit.>. Empirical Approaches Many empirical methods use compositional priors and claim improved compositional generalization. The problem has been studied especially closely in language <cit.>, but it remains far from being solved <cit.>. Object-centric learning is another domain in which compositionality plays a major role, and many approaches explicitly model the composition of scenes from object-“slots” <cit.>. The slot approach is also common in vector-symbolic architectures like <cit.> and <cit.>. For most of these works, however, compositional generalization is not a focal point, and their actual generalization capability remains to be studied. There are also some architectures like transformers <cit.>, graph neural networks <cit.>, bilinear models <cit.>, or complex-valued autoencoders <cit.> that have been claimed to exhibit some degree of compositional generalization, but again, principled analysis of their generalization ability is lacking. Our framework can guide the systematic evaluation of these methods. While we use the visual domain as an example throughout this work, our contributions are not tied to any specific data domain or modality. Theoretical approaches to OOD generalization The OOD generalization problem for non-linear models where train and test distributions differ in their densities, but not their supports, has been studied extensively, most prominently by <cit.> and <cit.>. We refer the reader to <cit.> for a comprehensive overview. In contrast, compositional generalization requires generalizing to a distribution with different, possibly non-overlapping support. This problem is more challenging and remains unsolved. <cit.> were able to show that models can generalize between distributions with a very specific relation, but it is unclear what realistic distributions fit their constraints. <cit.> also study out-of-support problems theoretically but touch on compositional generalization only as a workaround for general extrapolation. 
Recently, <cit.> took a first step towards a more applicable theory of compositional generalization to unseen domains, but their results still rely on specific distributions, and they do not consider functions with arbitrary (nonlinear) compositions or multi-variate outputs. In contrast, our framework is independent of the exact distributions used for training and testing, and our assumptions on the compositional nature of the data allow us to prove generalization in a much broader setting.

§ A FRAMEWORK FOR COMPOSITIONAL GENERALIZATION

We use the following notation throughout. [N] denotes the set of natural numbers {1, 2, ..., N}. Id denotes the (vector-valued) identity function. We denote two functions f, g agreeing for all points in a set P as f ≡_P g. Finally, we write the total derivative of a vector-valued function f by all its inputs z as ∂ f/∂ z, corresponding to the Jacobian matrix with entries ∂ f_i/∂ z_j.

§.§ Compositionality

Colloquially, the term “compositional data” implies that the data can be broken down into discrete, identifiable components that collectively form the whole. For instance, in natural images, these components might be objects, while in music, they might be individual instruments. As a running illustrative example, we will refer to a simple dataset similar to multi-dSprites <cit.>, as shown in Figure <ref>. Each sample in this dataset is a composition of two basic sprites, each with a random position, shape, size, and color. Drawing inspiration from identifiable representation learning, we define compositionality mathematically as a property of the data-generating process. In our example, the samples are generated by a simple rendering engine that initially renders each sprite individually on separate canvases. These canvases are then overlaid to produce a single image featuring two sprites. More specifically, the rendering engine uses the (latent) properties of sprite one, z_1 = (z_1,x, z_1,y, z_1,shape, z_1,size, z_1,color), to produce an image x̃_1 of the first sprite. The same process is repeated with the properties of sprite two, z_2 = (z_2,x, z_2,y, z_2,shape, z_2,size, z_2,color), to create an image x̃_2 of the second sprite. Lastly, the engine combines x̃_1 and x̃_2 to create the final overlaid rendering x of both sprites. Figure <ref> demonstrates this process. In this scenario, the individual sprite renderers carry out the bulk of the work. In contrast, the composition of the two intermediate sprite images x̃_1, x̃_2 can be formulated as a simple pixel-wise operation (see Appendix <ref> for more details). The rendering processes for each sprite are independent: adjusting the properties of one sprite will not influence the intermediate image of the other, and vice versa. We posit that this two-step generative procedure, the (intricate) generation of individual components and their (simple) composition into a single output, is a key characteristic of a broad class of compositional problems. If we know the composition function, then understanding the basic elements (for example, the individual sprites) is enough to grasp all possible combinations of sprites in the dataset. We can thus represent any latent variable model f : 𝒵→𝒳, which maps a latent vector z∈𝒵 to a sample x in the observation space 𝒳, as a two-step generative process. { C, φ_1, …, φ_K, 𝒵_1, …, 𝒵_K, 𝒳̃_1, …, 𝒳̃_K} is a compositional representation of a function f if ∀z∈𝒵 f( z) = C ( φ_1( z_1), ..., φ_K( z_K) ) and 𝒵 = 𝒵_1×…×𝒵_K, where z_i denotes the canonical projection of z onto 𝒵_i. 
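To make the two-step structure concrete, the following minimal sketch mimics such a data-generating process in code. It is not the paper's actual multi-dSprites renderer: the component functions simply draw filled squares parameterized by a reduced latent (position, size, intensity), and the composition function lets component 1 occlude component 2.

```python
import numpy as np

def render_square(z, canvas=32):
    """Toy component function phi_k: map a latent (x, y, size, intensity)
    to a single-channel image containing one "sprite" (a filled square)."""
    x, y, size, intensity = z
    img = np.zeros((canvas, canvas))
    cx, cy = int(x * (canvas - 1)), int(y * (canvas - 1))
    r = max(1, int(size * canvas / 4))
    img[max(cy - r, 0):cy + r, max(cx - r, 0):cx + r] = intensity
    return img

def compose(x1, x2):
    """Toy composition function C: overlay the first component on the second,
    i.e. pixels of component 1 occlude component 2 wherever they are non-zero."""
    return np.where(x1 > 0, x1, x2)

def f(z1, z2):
    """Two-step generative process f(z) = C(phi_1(z_1), phi_2(z_2))."""
    return compose(render_square(z1), render_square(z2))

# Example: sample latents for two sprites and render one composed image.
rng = np.random.default_rng(0)
z1, z2 = rng.uniform(0.1, 0.9, size=4), rng.uniform(0.1, 0.9, size=4)
print(f(z1, z2).shape)
```

As in the definition above, all of the rendering complexity lives in the component functions, while the composition is a cheap pixel-wise operation.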
We refer to φ_k: 𝒵_k →𝒳̃_k as the component functions, to 𝒳̃_1, …, 𝒳̃_K as the (hidden) component spaces, and to C: 𝒳̃_1 ×…×𝒳̃_K →𝒳 as the composition function. Note that in its most general form, we do not require the component functions to be identical or to map to the same component space. The compositional representation of a function f is also not unique. For instance, any f possesses a trivial compositional representation given by {f, Id, …, Id} (for the sake of clarity, we will omit the explicit mention of the latent factorization and component spaces henceforth). We will later establish conditions that must be met by at least one compositional representation of f. Our definition of compositionality naturally aligns with various methods in the fields of identifiability, disentanglement, or object-centric learning. In the decoder of SlotAttention <cit.>, for example, each component function is a spatial broadcast decoder followed by a CNN, and the composition function is implemented as alpha compositing. <cit.> model the component functions as element-wise multiplication of high-dimensional latent codes, which are then composed through a straightforward sum. A similar approach is chosen by <cit.>, except that interactions between components are modeled using matrix multiplication.

§.§ Compositional Generalization

The model in Figure <ref> was trained in a supervised fashion: it was trained to reconstruct samples x given the ground-truth latent factors (z_1, z_2) for each sprite (see Section <ref> for more details). We denote this model as f̂, indicating that it is meant to replicate the ground-truth generating process f of the data. The model f̂ indeed learned to fit f almost perfectly on the training distribution P, but failed to do so on the test distribution Q. This failure is surprising because the test samples only contain sprites already encountered during training. The novelty lies solely in the combination of these sprites. We would expect any model that comprehends the compositional nature of the dataset to readily generalize to these test samples. This compositional aspect of the generalization problem manifests itself in the structure of the training and test distributions. In our running example, the model was trained on samples from a distribution P that contained all possible sprites in each slot, but only in combination with one base sprite in the other slot (illustrated in Figure <ref>A). More formally, the support of P can be written as supp P = { ( z_1, z_2) ∈𝒵_1 ×𝒵_2 | z_1 = z_1^0 ∨ z_2 = z_2^0 }. The test distribution Q is a uniform distribution over the full product space 𝒵_1×𝒵_2, i.e., it contains all possible sprite combinations. More generally, we say that a generalization problem is compositional if the test distribution contains only components that have been present in the training distribution, see Figure <ref>. This notion can be formalized as follows based on the support of the marginal distributions: Given two arbitrary distributions P, Q over latents z = ( z_1, ..., z_K) ∈𝒵 = 𝒵_1 ×⋯×𝒵_K, P has compositional support w.r.t. Q if supp P_ z_k = supp Q_ z_k⊆𝒵_k ∀ k ∈ [K]. Clearly, compositional generalization requires compositional support. If regions of the test latent space exist for which a component is not observed, as in Figure <ref>E, we can examine a model's generalization capability, but the problem is not compositional. 
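The compositional support condition can be checked empirically on the latents themselves. Below is a rough numerical sketch of such a check; it assumes latents normalized to [0, 1] and approximates the continuous marginal supports by histogram bins, which is a crude surrogate for the set equality in the definition above.

```python
import numpy as np

def has_compositional_support(P_latents, Q_latents, bins=10):
    """Check (approximately) whether every component value observed under Q is
    also covered by P. Latents have shape (n_samples, K, D); marginal supports
    are approximated per latent dimension by occupied histogram bins."""
    _, K, D = P_latents.shape
    edges = np.linspace(0.0, 1.0, bins + 1)
    for k in range(K):
        for d in range(D):
            covered_P = np.histogram(P_latents[:, k, d], bins=edges)[0] > 0
            covered_Q = np.histogram(Q_latents[:, k, d], bins=edges)[0] > 0
            if np.any(covered_Q & ~covered_P):  # some region of Q was never seen in P
                return False
    return True

rng = np.random.default_rng(0)
Q = rng.uniform(0, 1, size=(5000, 2, 4))                              # full product space
P_diag = np.repeat(rng.uniform(0, 1, size=(5000, 1, 4)), 2, axis=1)   # both slots nearly equal ...
P_diag = np.clip(P_diag + 0.05 * rng.uniform(-1, 1, size=P_diag.shape), 0, 1)  # ... a thin diagonal band
P_gap = Q[Q[:, 0, 0] < 0.5]                                           # first latent of slot 1 has a gap

print(has_compositional_support(P_diag, Q))  # True: every component value occurs somewhere in P
print(has_compositional_support(P_gap, Q))   # False: the gap violates compositional support
```

The diagonal support passes the check because every individual sprite configuration still occurs in the training set, only ever paired with a similar sprite; the gapped support fails it.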
Depending on whether the gap in the support is in the middle of a latent's domain or towards either end, the generalization problem becomes an interpolation or extrapolation problem instead, which are not the focus of this work.

§.§ Sufficient conditions for compositional generalization

With the above setup, we can now begin to examine under what conditions compositional generalization can be guaranteed to occur. To make this question precise, let us assume for the moment that the sprites do not occlude each other but that they are just summed up in pixel space. Then the compositional representation of the generative process is simply {+, φ_1, φ_2}, i.e., f(z) = φ_1( z_1) + φ_2( z_2). The question becomes: Given supervised samples (z_i, x_i) from P, can we learn a new model f̂ that is equivalent to f on Q, i.e., for which f̂≡_Q f? We assume that C is known, so in order to generalize, we must be able to reconstruct the individual component functions φ_i. For the simple case from equation <ref>, we can fully reconstruct the component functions as follows. First, we note that if the support of P is an open set, we can locally reconstruct the hidden Jacobian of φ_i from the observable Jacobian of f as ∂ f/∂ z_k( z) = ∂φ_k/∂ z_k( z_k). Since the training distribution contains all possible component configurations z_i, we can reconstruct the Jacobian of φ_i at every point z_i. Then we know everything about φ_i up to a global offset (which can be removed if there exists a known initial point for integration). Our goal is to extend this approach to a maximally large set of composition functions C. Our reasoning is straightforward if C is a simple sum, but what if we have occlusions or other nonlinear interactions between slots? What are general conditions on C and the support of the training distribution P such that we can still reconstruct the individual component functions and thus generalize compositionally? Let us now consider the sprites example with occlusions, and let us assume that the support of P is basically a thin region around the diagonal; see Figure <ref> (left). In this case, the two sprites are always relatively similar, leading to large overlaps. It is impossible to reconstruct the full Jacobian of the occluded sprite from a single sample. Instead, we need a set of samples for which the background sprite is the same while the foreground sprite is in different positions; see Figure <ref> (right). With sufficient samples of this kind, we can observe all pixels of the background sprite at least once. Then reconstruction of the Jacobian of φ_1 is possible again. This line of thought brings us to a more general condition on the data-generating process: The composition function C and the support of P must be chosen such that the full Jacobian can be reconstructed for each component function for all component latents. We formally define the concept of sufficient support below. Note that whether the support of P is sufficient or not strongly depends on the choice of composition function C. A distribution P over latents z = ( z_1, ..., z_K) ∈𝒵 has sufficient support w.r.t. a compositional representation of a function f if the support of P is an open set and for any latent value z_k^*, there exists a (finite) set of points P'_k( z_k^*) ⊆{ p ∈ P | p_k = z_k^* } for which the sum of total derivatives of C has full rank. That is, rank( ∑_ p ∈ P'_k( z_k^*)∂ C/∂φ_k(φ( p)) ) = M, where M is the dimension of the component space 𝒳̃_k ⊆ℝ^M. 
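The role of the rank condition can be illustrated with a deliberately small occlusion model. The sketch below is not the paper's setup: components are 1-D "images" of M pixels, component 1 is a blob that occludes component 2, and the Jacobians ∂C/∂φ_2 are written down analytically as 0/1 diagonal matrices rather than obtained from a trained model.

```python
import numpy as np

M = 8  # dimension of the component space (pixels of a 1-D "image")

def foreground(pos):
    """Toy component 1: a 3-pixel blob at position `pos` on an M-pixel canvas."""
    x = np.zeros(M)
    x[pos:pos + 3] = 1.0
    return x

def jac_C_wrt_phi2(x1):
    """For the occlusion composition C(x1, x2) = where(x1 > 0, x1, x2), the Jacobian of C
    w.r.t. the occluded component x2 is diagonal with a 1 exactly at the pixels that x1
    leaves visible."""
    return np.diag((x1 <= 0).astype(float))

# A single sample: some pixels of the background are occluded, so the rank is < M.
J_single = jac_C_wrt_phi2(foreground(2))

# Sufficient support: several samples that share the same background latent but place the
# foreground at different positions; their summed Jacobian has full rank M (Definition <ref>).
J_summed = jac_C_wrt_phi2(foreground(0)) + jac_C_wrt_phi2(foreground(3))

print(np.linalg.matrix_rank(J_single), np.linalg.matrix_rank(J_summed))  # 5 8
```

Every background pixel is visible in at least one of the two samples, which is exactly the requirement of observing all pixels of the background sprite at least once stated above.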
We are now ready to state our main theorem, namely that if f, f̂ share the same composition function and if P has compositional and sufficient support, then the model f̂ generalizes to Q if it matches the ground-truth data-generating process f on P. Let P, Q be arbitrary distributions over latents z = ( z_1, ..., z_K) ∈𝒵. Let f, f̂ be functions with compositional representations in the sense of Definition <ref> that share { C, 𝒵_1, ..., 𝒵_K }, but use arbitrary {φ_1, ..., φ_K, 𝒳̃_1, ..., 𝒳̃_K }, {φ̂_1, ..., φ̂_K, 𝒳̂_1, ..., 𝒳̂_K }. Assume the following:
* C, φ_k, φ̂_k are differentiable, C is Lipschitz in φ, and φ is continuous in z.
* P has compositional support w.r.t. Q in the sense of Definition <ref>.
* P has sufficient support w.r.t. f in the sense of Definition <ref>.
* There exists an initial point p^0∈ P such that φ ( p^0) = φ̂( p^0).
Then f̂ generalizes to Q, i.e., f ≡_P f̂ ⟹ f ≡_Q f̂.
The proof follows roughly the intuition we developed above in that we show that the Jacobians of the component functions can be reconstructed everywhere. Bear in mind that this is simply a construction for the proof: The theorem holds whenever f̂ fits the output of f on the training distribution P, which we can achieve with standard supervised training and without access to the ground-truth Jacobians. It should also be emphasized that since the compositional representation is not unique, the theorem holds if there exists at least one for which the assumptions are fulfilled. Note also that the initial point condition <ref> is needed in the proof, but in all practical experiments (see below), we can generalize compositionally without explicit knowledge of that point. We relegate further details to Appendix <ref>.

§ EXPERIMENTS

We validate our theoretical framework on the multi-sprite data. All models were trained for 2000 epochs on training sets of 100k samples using an NVIDIA RTX 2080 Ti; all test sets contain 10k samples. Table <ref> summarizes the reconstruction quality achieved on the in-domain (ID) test set (P) and the entire latent space (Q) for all experiments.

Motivating experiment We implement the setup from Figure <ref> to demonstrate that a compositional model does indeed generalize if the conditions from Theorem <ref> are met. We model the component functions as four fully-connected layers followed by four upsampling-convolution stages, mapping the 5d component latent to 64×64 RGB images. For training stability, the composition function is implemented as a soft pixel-wise addition using the sigmoid function σ(·) as x = σ(x̃_1) ·x̃_1 + σ(-x̃_1) ·x̃_2, which allows component 1 to occlude component 2. We contrast this to a non-compositional monolithic model, which has the same architecture as a single component function (with adjusted layer sizes to match the overall parameter count of the compositional model). We show that both models have the capacity to fit the data by training on random samples covering the entire latent space (Table <ref>, #1,2). We then train on a distribution with orthogonal support as in equation <ref>, albeit with two planes for the foreground component to satisfy the sufficient support condition (Definition <ref>) as explained in Figure <ref>. Both models can reconstruct ID samples, but only the compositional model generalizes to the entire latent space (Table <ref>, #3,4).

Flexible compositional support Next, we demonstrate the variety of settings that fulfil the compositional support assumption as illustrated in Figure <ref>B and C. 
To this end, we repeat the experiment on training sets P sampled from (i) a normal distribution with orthogonal support (Table <ref>, #5) and (ii) a uniform distribution over a diagonal support chosen broad enough to satisfy the sufficient support condition (Table <ref>, #6). The model generalizes to the entire latent space in both settings. Since the generalization performance is already close to ceiling, broadening the support of both distributions (Table <ref>, #7,8) does not further increase performance.

Violating Conditions Finally, we look at the effect of violating some conditions.
* Gaps in support (Table <ref>, #9) Leaving gaps in the support of the training set such that some component configurations are never observed (Figure <ref>E) violates the compositional support condition (Definition <ref>). While the overall reconstruction performance only drops slightly, visualizing the reconstruction error over a 2d-slice of the latent space in Figure <ref> illustrates clearly that generalization fails exactly where the condition is violated.
* Insufficient training variability (Table <ref>, #10) Reducing the width of the diagonal support violates the sufficient support condition (Definition <ref>) as soon as some parts of the background component are always occluded and cannot be observed in the output anymore. We can clearly see that reconstruction performance on the entire latent space drops significantly as a result.
* Collapsed Composition Function (Table <ref>, #11) Changing the output of each component function from RGB to RGBa and implementing the composition as alpha compositing yields a model that is still compositional, but for which no support can satisfy the sufficient support condition since the derivative of transparent pixels will always be zero and the Jacobian matrix can therefore never have full rank (more details in Appendix <ref>). However, we observe that the model still generalizes to the entire latent space and achieves even lower reconstruction error than the original model. This emphasizes that what we present are merely sufficient conditions, which might be loosened in future work.

§ DISCUSSION

We presented a first step and a framework to study compositional generalization in a more principled way. Clearly, there remain many open questions and limitations that we leave for future work.

Supervised setting We only studied a supervised regression setting in which the model has access to the ground-truth latents of each training sample. Ultimately, we are interested in the unsupervised setting akin to what is typically studied in identifiable representation learning. The unsupervised setting comes with inherent ambiguities that make generalization guarantees harder to derive. Still, the results in this paper build an important foundation for future studies because sufficient conditions in the supervised setting can be considered necessary conditions in the unsupervised setting.

Jacobian and initial point The proof of Theorem <ref> utilizes the Jacobian of the ground-truth model. We emphasize again that this construction is necessary only for the proof and does not mean that we require access to the data-generating process's full Jacobian for training. Similarly, the existence of an initial point p^0 is a technicality of the proof that is not reflected in the experiments. 
While it is not yet clear whether it is possible to complete the proof without the initial point condition, we believe there is a self-consistency condition that might alleviate the need for this condition. The experiments thus hint at the existence of alternative proof strategies with relaxed assumptions. Known composition function We also assume the composition function to be known which is approximately true in many interesting scenarios, such as object composition in scenes or the composition of instruments in music. In fact, many structured representation learning approaches like SlotAttention <cit.> incorporate structural components that are meant to mimic the compositional nature of the ground-truth-generating process. In other interesting cases like language, however, the composition function is unknown a priori and needs to be learned. This might be possible by observing how the gradients of C change with respect to a fixed slot, at least if certain regularity conditions are fulfilled. Inductive biases Some of the conditions we derived can be relaxed in the presence of certain inductive biases. For example, models with an inductive bias towards shift invariance might be able to cope with certain gaps in the training support (e.g., if sprites are not visible in every position). Similarly, assuming all component functions φ to be identical would substantially simplify the problem and allow for much smaller sufficient supports P. The conditions we derived do not assume any inductive bias but are meant to formally guarantee compositional generalization. We expect that our conditions generalize to more realistic conditions as long as the core aspects are fulfilled. Error bounds Our generalization results hold only if the learned model perfectly matches the ground-truth model on the training distribution. This is similar to identifiable representation learning, where a model must find the global minimum of a certain loss or reconstruction error for the theory to hold. Nonetheless, extending our results towards generalization errors that are bounded by the error on the training distribution is an important avenue for future work. Broader impact Compositional generalization, once achieved, has the potential to be beneficial in many downstream applications. By substantially increasing sample and training efficiency, it could help to democratize the development and research of large-scale models. Better generalization capabilities could also increase the reliability and robustness of models but may amplify existing biases and inequalities in the data by generalizing them and hinder our ability to interpret and certify a model's decisions. § CONCLUSION Machine learning, despite all recent breakthroughs, still struggles with generalization. Taking advantage of the basic building blocks that compose our visual world and our languages remains unique to human cognition. We believe that progress towards more generalizable machine learning is hampered by a lack of a formal understanding of how generalization can occur. This paper focuses on compositional generalization and provides a precise mathematical framework to study it. We derive a set of sufficient conditions under which compositional generalization can occur and which cover a wide range of existing approaches. We see this work as a stepping stone towards identifiable representation learning techniques that can provably infer and leverage the compositional structure of the data. 
It is certainly still a long road toward scalable empirical learning techniques that can fully leverage the compositional nature of our world. However, once achieved, there is an opportunity for drastically more sample-efficient, robust, and human-aligned machine learning models. § ACKNOWLEDGMENTS We would like to thank (in alphabetical order): Jack Brady, Simon Buchholz, Attila Juhos, and Roland Zimmermann for helpful discussions and feedback. This work was supported by the German Federal Ministry of Education and Research (BMBF): Tübingen AI Center, FKZ: 01IS18039A. WB acknowledges financial support via an Emmy Noether Grant funded by the German Research Foundation (DFG) under grant no. BR 6382/1-1 and via the Open Philantropy Foundation funded by the Good Ventures Foundation. WB is a member of the Machine Learning Cluster of Excellence, EXC number 2064/1 – Project number 390727645. This research utilized compute resources at the Tübingen Machine Learning Cloud, DFG FKZ INST 37/1057-1 FUGG. We thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting TW and PM. § AUTHOR CONTRIBUTIONS The project was led and coordinated by TW. TW and PM jointly developed the theory with insights from WB. TW implemented and conducted the experiments with input from PM and WB. TW led the writing of the manuscript with help from WB, PM, and MB. TW created all figures with comments from PM and WB. unsrtnat § PROOF OF THEOREM <REF> We reiterate the setup and notation introduced in the paper here for ease of reference. Notation [N] denotes the set of natural number {1, 2, ..., N}. Id denotes the (vector-valued) identity function. We write two functions f, g agreeing for all points in set P as f ≡_P g. Finally, we write the total derivative of a vector-valued function f by all its inputs z as ∂ f/∂ z, the Jacobian matrix with entries ∂ f_i/∂ z_j. Setup We are given two arbitrary distributions P, Q over latents z = ( z_1, ..., z_K) ∈𝒵. Each latent z_k describes one of the K components of the final data point x produced by the ground-truth data-generating process f. A model f̂ is trained to fit the data-generating process on samples of P; the aim is to derive conditions on P and f̂ that are sufficient for f̂ to then also fit f on Q. We assume that f, f̂ are chosen such that we can find at least one compositional representation (Definition <ref>) for either function that shares a common composition function C and factorization of the latent space 𝒵_1 ×⋯×𝒵_K = 𝒵. For f̂ to generalize to Q, we need to show fitting f on P implies also fitting it on Q, in other words f P≡f̂ f Q≡f̂ Since C is the same for both functions, we immediately get φQ≡φ̂ f Q≡f̂, it suffices to show that the component functions generalize. Note, however, that since C is not generally assumed to be invertible, we do not directly get that agreement of f, f̂ on P also implies agreement of their component functions φ, φ̂ on P. We require P to have compositional support Q (Definition <ref> and Assumption <ref>). The consequence of this assumption is that any point q = ( q_1, ..., q_K) ∈ Q can be constructed from components of the K support points p^k = ( p^k_1, ..., p^k_K) ∈ P subject to p^k_k = q_k as q = ( p^1_1, ..., p^K_K ). 
A trivial consequence, then, is that points x̃∈𝒳̃ in component space corresponding to points in Q in latent space can always be mapped back to latents in P φ( q) = (φ_1( q_1), ..., φ_K( q_K)) = (φ_1( p^(1)_1), ..., φ_K( p^(K)_K )) because each component function φ_k only depends on the latents z_k of a single component. This is also the case for the component functions φ̂ of f̂ so that we get φP≡φ̂φQ≡φ̂. We now only need to show that φP≡φ̂ follows from f P≡f̂. As noted above, this is not guaranteed to be the case, as C is not generally invertible (in the presence of occlusions). We, therefore, need to consider when a unique reconstruction of the component functions φ (and correspondingly φ̂) is possible, based on only the observations x = f( z) on Q. As explained in the main paper, we can reason about how a change in the latents z_k of some slot affects the final output, which we can express through the chain rule as ∂ f/∂ z_k( z) _N × D = ∂ C/∂φ_k(φ( z)) _N × M∂φ_k/∂ z_k( z_k) _M × D ∀ k ∈ [K]. Here, N is the dimension of the final output (64 × 64 × 3 for RGB images), M is the dimension of a component's representation x̃_k (also 64 × 64 × 3 for RGB images), and D is the dimension of a component's latent description z_k (5: x-position, y-position, shape, size, hue for sprites). Note that we can look at the derivative component-wise because each component function φ_k only depends on the latents z_k of its component. However, the combination function still depends on the (hidden) representation of all components, and therefore ∂ C/∂φ_k is a function of all φ and the entire z. In equation <ref>, the left-hand side (LHS) ∂ f/∂ z_k can be computed from the training, as long as P is an open set. On the right-hand side (RHS), the functional form of ∂ C/∂φ_k is known since C is given, but since φ( z) is still unknown, the exact entries of this Jacobian matrix are unknown. As such, equation <ref> defines a system of partial differential equations (PDEs) for the set of component functions φ with independent variables z. Before we can attempt to solve this system of PDEs, we simplify it by isolating ∂φ_k/∂ z_k. Since all terms are matrices, this is equivalent to solving a system of linear equations. For N = M, ∂ C/∂φ_k is square, and we can solve by taking its inverse as long as the determinant is not zero. In the general case of N ≥ M, however, we have to resort to the pseudoinverse to write ∂φ_k/∂ z_k^* = ( ∂ C/∂φ_k^⊤∂ C/∂φ_k)^-1∂ C/∂φ_k^⊤∂ f/∂ z_k ∀ k ∈ [K], which gives all solutions ∂φ_k/∂ z_k^* if any exist. This system is overdetermined, and a (unique) solution exists if ∂ C/∂φ_k has full (column) rank. In other words, to execute this simplification step on P, we require that for all z ∈ P the M column vectors of the form ( ∂ C_1/∂φ_km(φ( z)), ..., ∂ C_N/∂φ_km(φ( z)) )^⊤ ∀ m ∈ [M] are linearly independent. Each entry of a column vector describes how all entries C_n of the final output (the pixels of the output image) change with a single entry φ_km of the intermediate representation of component k (a single pixel of the component-wise image). It is easy to see that if even a part of the intermediate representation is not reflected in the final output (in the presence of occlusions, when a single pixel of one component is occluded), the entire corresponding column is zero, and the matrix does not have full rank. To circumvent this issue, we realize that the LHS of equation <ref> only depends on the latents z_k of a single component. 
Hence, for a given latent z and a slot index k, the correct component function will have the same solution for all points in any (finite) set P'( z, k) ⊆{ p ∈ P | p_k = z_k }. We can interpret these points as the intersection of P with a plane in latent space at z_k (all latent combinations in the training set in which one component is fixed in a specific configuration). We can then define a modified composition function C̃ that takes z and a slot index k as input and produces a “superposition” of images corresponding to the latents in the subset as C̃( z, k ) = ∑_ p ∈ P'( z, k) C(φ( p) ). Essentially, we are condensing the information from multiple points in the latent space into a single function. This enables us to write a modified version of equation <ref> as ∑_ p ∈ P'( z, k)∂ f/∂ z_k( p) = ∑_ p ∈ P'( z, k)∂ C/∂φ_k(φ( p)) ∂φ_k/∂ z_k( z_k) = ∂C̃/∂φ_k ( z, k) ∂φ_k/∂ z_k( z_k) ∀ k ∈ [K]. Now we can solve for ∂φ_k/∂ z_k as in equation <ref>, but this time require only that ∂C̃/∂φ_k has full (column) rank for a unique solution to exist, rank( ∂C̃/∂φ_k ( z, k) ) = rank( ∑_ p ∈ P'( z, k)∂ C/∂φ_k(φ( p)) ) = M ∀ z ∈ P, ∀ k ∈ [K]. In general, this condition is easier to fulfill since full rank is not required at any one point but over a set of points. For occlusions, for example, any pixel of one slot can be occluded in some points p ∈ P', as long as it is not occluded in all of them. We can interpret this procedure as “collecting sufficient information” such that an inversion of the generally non-invertible C becomes feasible locally. The requirement that the support of P has to be an open set, together with the full rank condition on the Jacobian of the composition function condensed over multiple points, C̃, is termed sufficient support in the main paper (Definition <ref> and Assumption <ref>). As explained here, this allows for the reconstruction of ∂φ_k/∂ z_k from the observations, f ≡_P f̂ ⟹ ∂φ/∂ z ≡_P ∂φ̂/∂ z. The above step only gives us agreement of the derivatives of the component functions, ∂φ_k/∂ z_k, not agreement of the functions themselves. As explained above, the solution to the linear system of equations <ref> constitutes a system of partial differential equations (PDEs) in the set of component functions φ with independent variables z. We can see that this system has the form ∂_i φ( z) = a_i( z, φ( z)), where i ∈ [L] = [K× D] is an index over the flattened dimensions K and D such that ∂_i φ denotes ∂φ/∂ z_i (which is essentially one column of ∂φ_k/∂ z_k aggregated over all k) and a_i is the combination of corresponding terms from the LHS. If this system allows for more than one solution, we cannot uniquely reconstruct the component functions from their derivatives. If we have access to some initial point, however, for which we know φ( 0) = φ^0, we can write φ(z_1, ..., z_L) - φ^0 = ( φ(z_1, ..., z_L) - φ(0, z_2, ..., z_L) ) + ( φ(0, z_2, ..., z_L) - φ(0, 0, z_3, ..., z_L) ) + ... + ( φ(0, ..., 0, z_L) - φ(0, ..., 0) ). In each line of this equation, only a single z_i =: t is changing; all other z_1, ..., z_L are fixed. Any solution of <ref>, therefore, also has to solve the L ordinary differential equations (ODEs) of the form ∂_t φ(z_1, ..., z_i-1, t, z_i+1, ..., z_L) = a_i(z_1, ..., z_i-1, t, z_i+1, ..., z_L, φ(z_1, ..., z_i-1, t, z_i+1, ..., z_L) ), which have a unique solution if a_i is Lipschitz in φ and continuous in z_i, as guaranteed by <ref>. Therefore, <ref> has at most one solution. 
This reference point does not have to be in z = 0, as a simple coordinate transform will yield the same result for any point in P. It is therefore sufficient that there exists some point p^0 ∈ P for which φ ( p^0) = φ̂( p^0) to obtain the same unique solution for φ and φ̂, which is exactly what <ref> states. Overall, this means that agreement of the derivatives of the component functions also implies agreement of the component functions themselves, ∂φ/∂ z ≡_P ∂φ̂/∂ z ⟹ φ ≡_P φ̂. Finally, we can conclude that the model f̂ fitting the ground-truth generating process f on the training distribution P implies, through <ref>, <ref>, <ref>, <ref>, that the model generalizes to Q as well. In other words, equation <ref> holds.

§ DETAILS ABOUT THE COMPOSITIONAL FUNCTIONS

As explained in equation <ref> in section <ref>, the composition function is implemented as a soft pixel-wise addition in most experiments. The use of the sigmoid function σ(·) in the composition x = σ(x̃_1) ·x̃_1 + σ(-x̃_1) ·x̃_2 was necessary for training stability. With this formulation, sprites can also overlap somewhat transparently, which is not desired and leads to small reconstruction artifacts for some specific samples. Implementing the composition with a step function as x = step(x̃_1) ·x̃_1 + step(-x̃_1) ·x̃_2 instead would be more faithful to the ground-truth data-generating process, but is hard to train with gradient descent. Note that both formulations could easily be extended to more than one sprite by simply repeating the composition operation with any additional sprite. In section <ref>, we also looked at a model that implements the composition through alpha compositing instead (see also Table <ref>, #11). Here, each component's intermediate representation is an RGBa image. The components are then overlaid on an opaque black background using the composition function x_α = x_1,α + ( 1 - x_1,α) · x_2,α and x_RGB = x_1,α· x_1,RGB + ( (1 - x_1,α) · x_2,α / x_α ) · x_2,RGB. While this yields a compositional function, the sufficient support condition (Definition <ref>) is generally not fulfilled on the sprites data. The reason is that in fully transparent pixels (α = 0), changing the RGB value is not reflected in the output. Conversely, if a pixel is black, changing its alpha value will not affect how it is blended over a black background. As a result, most columns in the Jacobian ∂ C/∂φ_k (see also equation <ref>) will be zero. Since the intermediate representations of each sprite will contain a lot of black or transparent pixels (the entire background), the rank of the Jacobian here will be low. In this case, the workaround from equation <ref> does not help since the low rank is not a result of another component in the foreground but of the specific parameterization of each component itself. As stated in the main paper, the fact that this parameterization still produces good results and generalizes well is an indicator that there might be another proof strategy or workaround that avoids this specific issue.
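For reference, the two composition functions discussed in this appendix can be written down in a few lines. The sketch below is a plain NumPy transcription of the formulas above (soft pixel-wise addition and the RGBa alpha compositing as stated), not the training code used for the experiments.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def compose_soft(x1, x2):
    """Soft pixel-wise addition used in most experiments:
    x = sigmoid(x1) * x1 + sigmoid(-x1) * x2, so component 1 can occlude component 2."""
    return sigmoid(x1) * x1 + sigmoid(-x1) * x2

def compose_alpha(rgb1, a1, rgb2, a2, eps=1e-8):
    """Alpha compositing over an opaque black background (the #11 ablation), following the
    appendix formulas. Fully transparent pixels never influence the output, which is why
    the Jacobian rank condition fails for this parameterization."""
    a = a1 + (1.0 - a1) * a2
    w2 = (1.0 - a1) * a2 / np.maximum(a, eps)
    rgb = a1[..., None] * rgb1 + w2[..., None] * rgb2
    return rgb, a

# Exercise both composition functions on random 4x4 component images.
rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=(4, 4, 3)), rng.normal(size=(4, 4, 3))
rgb1, rgb2 = rng.uniform(size=(4, 4, 3)), rng.uniform(size=(4, 4, 3))
a1, a2 = rng.uniform(size=(4, 4)), rng.uniform(size=(4, 4))
print(compose_soft(x1, x2).shape, compose_alpha(rgb1, a1, rgb2, a2)[0].shape)
```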
http://arxiv.org/abs/2307.05732v1
20230711185927
Semiparametric Shape-restricted Estimators for Nonparametric Regression
[ "Kenta Takatsu", "Tianyu Zhang", "Arun Kumar Kuchibhotla" ]
stat.ME
[ "stat.ME" ]
Estimating the conditional mean function that relates predictive covariates to a response variable of interest is a fundamental task in statistics. In this paper, we propose some general nonparametric regression approaches that are widely applicable under very mild conditions. The method decomposes a function with a Lipschitz continuous k-th derivative into a sum of a (k+1)-monotone function and a parametric component. We implement well-established shape-restricted estimation procedures (such as isotonic regression) to handle the “nonparametric” components of the true regression function and combine them with a simple sample-splitting procedure to estimate the parametric components. The resulting estimators inherit several favorable properties from the shape-restricted regression estimators. Notably, they are (practically) tuning parameter-free, converge at the minimax rate, and exhibit a locally adaptive rate when the true regression function is “simple”. Finally, a series of numerical studies is presented, confirming these theoretical properties.

§ INTRODUCTION

This article considers the nonparametric regression problem of estimating the conditional mean function based on observed covariates and response variables, without assuming that the truth takes any specific parametric form. Specifically, we consider n independent and identically distributed (IID) observations {(X_i, Y_i)}_i=1^n ∼ P_0(X,Y), where the predictive covariate vector X_i takes values in some measurable space Ω (often a subset of ℝ^d) and the response Y_i is a real number. We are interested in estimating the unknown conditional mean function f_0(X) := 𝔼[Y|X] that minimizes the mean-squared prediction error of the response among all the square-integrable functions of X. Defining the “error variables” as ξ_i := Y_i - f_0(X_i), we can write the relationship between the covariates and the response as Y_i = f_0(X_i) + ξ_i for i = 1, …, n. Many existing works assume ξ_i's to be independent of X_i's, which is more justified if ξ_i represents “external noise” contaminating the samples. We note that no such independence assumptions are made in this paper. The only property of the variables ξ_i's implied directly by the relationship (<ref>) is that 𝔼[ξ_i|X_i] = 0. Nonparametric regression methods usually assume the truth f_0 belongs to some function space. These function spaces often impose constraints on the smoothness properties of the contained functions. One common class of nonparametric functions considered in the literature is the Hölder class. Formally, let k=(k_1, k_2, …, k_d) be a d-dimensional index set where each k_i is a non-negative integer and |k|=∑_i=1^d k_i. For each f : Ω↦ℝ, where x = (x_1, x_2, …, x_d) ∈Ω⊆ℝ^d, that is differentiable up to order |k| ≥ 1, we define an operator D^k as D^k f = ∂^|k| f(x) / (∂ x_1^k_1…∂ x_d^k_d), and D^0 f = f. For β, L > 0, the Hölder class Σ(β,L) on Ω consists of functions that satisfy the following condition: Σ(β, L) := {f: Ω↦ℝ | ‖D^m f‖_∞≤ L for all m ∈ℤ_+^d with |m| = 0, 1, …, ⌊β⌋, and |D^k f(y)-D^k f(x)|≤ L‖x-y‖^β-|k| with |k| = ⌊β⌋ for all x,y ∈Ω}. When β=1, the Hölder class coincides with the class of L-Lipschitz functions. 
The performance of numerous classical methods have been analyzed for the Σ(β,L) class, including k-nearest neighbor, Nadaraya-Watson estimator, local polynomial estimator, smoothing splines, series estimator, tree-based methods, RKHS, random forest, and so on. These methods typically involve a tuning parameter, such as a bandwidth for local smoothing methods or the number of basis functions for the series estimator. The choice of the parameters is crucial for both establishing the convergence rates and achieving better practical performances. However, the optimal choice of such tuning parameters is often unavailable in practice since it almost always depends on the underlying distribution P_0. Therefore, data-adaptive model selection procedures, such as cross-validation or Lepski's method <cit.>, become necessary for estimating the optimal tuning parameters. While these procedures can automate the model selection and achieve provably optimal convergence rates, they are not entirely parameter free. For instance, both methods require the candidate sets where the optimal parameter is contained, which is often difficult to verify. Additionally, the performance of the resulting estimator can still be sensitive to the specific choices made due to the randomness of the selection process <cit.>. In this paper, we propose a class of simple estimators based on shape-restricted methods, which can be seen as tuning parameter free. The key concept underlying our methodology can be summarized through the following simple yet significant observations: For any L-Lipschitz function f, there exists a corresponding function g such that f(x) = g(x) - L x where g is a non-decreasing function. Similarly, for any function f ∈Σ(2, L), there exists a function g such that f(x) = g(x) - L x^2 where g is a convex function. We will discuss this decomposition with greater generality in Section <ref>. These results can also be found in the optimization literature <cit.>. Following (<ref>) and (<ref>), the original problem of nonparametric regression can be decomposed into two subproblems: (1) estimating the shape-restricted regression function g, and (2) estimating the parameter L. One may expect the first problem to be more challenging as it involves the estimation of a function under shape constraints, which is an infinite-dimensional object. Therefore, the properties of the proposed class of estimators are expected to resemble those of shape-restricted regression estimators. This is indeed the case as we demonstrate in this article. Given the decompositions listed in (<ref>) and (<ref>), the proposed estimator naturally builds on existing nonparametric methods for estimating shape-restricted regression functions. Shape-restricted methods differ significantly from other nonparametric methods in several ways. Firstly, unlike popular nonparametric estimators including kernel smoothing, local polynomials, or orthogonal series estimators, shape-restricted estimators are often tuning parameter free, eliminating the need for model selection procedures. Secondly, despite their nonparametric nature, shape-restricted estimators can often be computed efficiently. For instance, a least squares estimator over the monotone function class can be solved in a near linear time. In addition to their computational advantages, shape-restricted estimators often exhibit the property of adaptive risk bounds. 
This implies that the estimator converges at a faster rate than the worst-case, that is, the minimax rate over general monotone functions, when the true function associated is “simple”. In the case of estimating non-decreasing f_0, for instance, the least squares estimator does not only achieve the minimax rate of n^-2/3 in terms of the squared risk, but also achieves the parametric n^-1 rate (ignoring a logarithmic term) if f_0 is non-decreasing piecewise constant. Remarkably, the estimator achieves this faster convergence rate automatically without any knowledge of the underlying function (and hence it is called adaptive). Finally, shape-restricted regression can be extended to multivariate cases, for instance, by assuming that the underlying function is additive in each coordinate. When this assumption holds, the proposed estimator is no longer confined to univariate problems. By directly building upon shape-restricted regression estimators, our nonparametric estimators for the class Σ(β,L) also inherit the aforementioned properties. To summarize, our proposed nonparametric estimators possess the following properties: * Nonparametric consistency: The proposed estimator does not rely on parametric assumptions and consistently estimates a regression function that belongs to a nonparametric class of functions. * Optimal and adaptive convergence rate: The proposed estimator demonstrates the minimax optimal rate of convergence for the studied function classes. The estimator is also adaptive in the sense that its risk automatically converges at a faster rate if f_0 is “simple”. * No tuning parameters: The proposed procedures essentially have no tuning parameters, eliminating the need for the cumbersome model selection process. * Efficient computation time: The proposed estimators can often be constructed with an almost linear time expense. * Support for multivariate covariates: The general estimation procedure can be extended to multivariate covariates as long as a corresponding multivariate shape-restricted estimator exists. When the additivity of a regression function is assumed, the method can be extended with great generality. § LITERATURE REVIEW The proposed method in this article is closely related to the extensive body of literature on nonparametric shape-restricted regression estimation. The problem is particularly well-studied in the univariate covariate setting, where popular shape constraints include monotonicity and convexity. A common approach is calculating the least squares estimator (LSE) using the observed samples over the function class of interest <cit.>. When the monotonicity constraint is considered, the LSE is commonly known as the isotonic regression, which can be efficiently computed using the Pool Adjacent Violators Algorithm (PAVA) <cit.>. The univariate regression estimator under the convexity has also been studied in the literature <cit.>. Recently, there has been a surge in theoretical analysis associated with the convergence rate of shape-restricted LSEs. The literature often emphasizes the remarkable adaptive property of LSEs, where the estimator demonstrates a faster convergence rate depending on the local structure around the true regression function <cit.>. These results are typically derived under strong assumptions regarding the distribution of ξ, such as sub-Gaussianity; <cit.> is an exception as the errors here are only assumed to have a finite second moment. 
Recent works have focused on relaxing such assumptions and explored the behavior of LSEs in the presence of heavy-tailed errors <cit.>. Multivariate applications of shape-restricted methods have been investigated particularly in recent years although a comprehensive understanding of theoretical behaviors is still under development <cit.>. <cit.> studied the LSE estimator for the multivariate isotonic regression while <cit.> proved that an alternative method must be considered to achieve minimax-optimal adaptivity for all dimensions. The multivariate convex estimators have been studied in <cit.> and <cit.>. Finally, additivity is a common structural assumption in regression analysis. In this context, studying shape constraints in conjunction with additivity helps maintain the theoretical properties of shape-restricted methods. Some recent developments can be found in the works of <cit.> and <cit.>. As outlined earlier, the proposed estimator leverages a decomposition that separates a nonparametric function into shape-restricted and parametric components. The concept of decomposing nonparametric regression into parametric components has been previously explored in the field of semiparametric regression. In particular, the literature has investigated two-stage estimation procedures that aim to improve the initial parametric estimator through nonparametric methods. This approach has been adopted to the context of density estimation <cit.>, regression estimation <cit.>, conditional distribution functions <cit.>, and additive models <cit.>. These methods often exhibit a faster rate of convergence when the initial parametric estimator is correctly specified. However, this differs from the notion of local adaptivity in the shape-restricted LSE literature. In this case, adaptivity is not automatic and often relies on prior knowledge of the true data-generating distribution to attain a faster convergence rate. Notation. For any positive integer n ≥ 1, we denote by [n] the index set {1, 2, …, n}. For any j ∈ [d] with d ≥ 1, we denote by e_j the d-dimensional vector of zero's with one at the jth position. For a univariate function f, f^(k) with a positive integer k denotes the kth derivative of f. § DECOMPOSITIONS OF NONPARAMETRIC REGRESSION FUNCTIONS As briefly mentioned in the introduction, the key concept behind the proposed method is the general decomposition of a regression function within the Hölder class into its shape-restricted and parametric component. We first discuss this decomposition in detail. A real-valued univariate function g: ℝ↦ℝ is k-monotone if its (k-1)-th derivative is non-decreasing <cit.>. Common examples are monotone (k=1) and convex (k=2) functions. Note that this definition differs from the one common in the context of nonparametric density estimation <cit.>. We first provide the following decomposition for univariate functions: For any k-times differentiable function f: ℝ↦ℝ with an L-Lipschitz k-th derivative, and any α≥ L, there exists a (k+1)-monotone function g_α : ℝ↦ℝ such that f(x) = g_α(x) - {α/(k+1)!} x^k+1. Let f^(k) denote the k-th derivative of f. By the assumption that f is k-times differentiable, it follows that f(x) + {α/(k+1)!} x^k+1 is also k times differentiable with the k-th derivative f^(k)(x) + α x. It thus remains to show g_α^(k)(x) := f^(k)(x)+ α x is non-decreasing, which implies that g_α is (k+1)-monotone. 
This holds because for any y ≥ x, g^(k)_α(y) - g^(k)_α(x) = (f^(k)(y) + α y) - (f^(k)(x) + α x) ≥ -L |y-x| + α(y-x) ≥ 0, which follows by the Lipschitz continuity of f^(k) and α≥ L. This concludes the claim. While Proposition <ref> is presented for a univariate function, analogous results also hold for several multivariate functions, as demonstrated by the following examples. To begin, we define coordinate-wise k-monotone functions. Given the index set k := (k_1, …, k_d) of non-negative integers and Ω⊆ℝ^d, a function g : Ω↦ℝ is coordinate-wise k-monotone if for each j ∈ [d] and for all x ∈Ω, the (k_j-1)-th derivative of the univariate mapping t ↦ g(x + te_j) is non-decreasing in t ∈ℝ. We now extend Proposition <ref> to the context of multivariate covariates in the following result: Suppose we have a function f : Ω↦ℝ where Ω⊆ℝ^d. For each j ∈ [d], x=(x_1,…, x_d)∈Ω, and an index set k=(k_1, …, k_d) of non-negative integers, we define a univariate mapping t ↦ f_j,x(t) = ∂^k_j/∂ t^k_j f(x+t e_j), t∈ℝ. For each j ∈ [d], suppose f_j,x is Lipschitz continuous with the Lipschitz constant L_j such that |f_j,x(t) - f_j,x(0)| ≤ L_j|t| for all x∈Ω and t ∈ℝ. Then for any sequence (α_1, …, α_d) such that α_j ≥ L_j, there exists a coordinate-wise (k+1)-monotone function g_α such that f(x) = g_α(x) - ∑_i=1^d α_i/(k_i+1)! x_i^k_i+1. For each j ∈ [d], it follows that ∂^k_j/∂ t^k_j g_α (x+te_j) = ∂^k_j/∂ t^k_j ( f(x+te_j) + ∑_i=1^d α_i/(k_i+1)! (x+te_j)_i^k_i+1 ) = ∂^k_j/∂ t^k_j ( f(x+te_j) + ∑_i≠ jα_i/(k_i+1)! x_i^k_i+1 + α_j/(k_j+1)! (x_j+t)^k_j+1 ) = f_j,x(t) + α_j (x_j+t). It remains to show that the univariate function t ↦∂^k_j/∂ t^k_j g_α (x+te_j) is non-decreasing in t ∈ℝ. From the derivation above, ∂^k_j/∂ t^k_j g_α (x+te_j) equals f_j,x(0) + α_j x_j when t=0. Thus for t > 0, we have f_j,x(t) + α_j (x_j + t) - (f_j,x(0) + α_j x_j) ≥ -L_j |t| + α_j t ≥ 0. Hence, this shows that g_α is coordinate-wise (k+1)-monotone. As a concrete application of Proposition <ref>, we obtain the following result: Suppose f : Ω↦ℝ for Ω⊆ℝ^d is a coordinate-wise Lipschitz function with a Lipschitz constant L := (L_1, …, L_d) where |f(x)-f(x+e_j h)| ≤ |h| L_j for any h ∈ℝ. Then there exists a coordinate-wise monotone function g_α : Ω↦ℝ such that f(x) = g_α(x) - α^⊤ x for any α = (α_1, …, α_d) satisfying α_i ≥ L_i for all i = 1,…, d. The above result follows as a direct application of Proposition <ref> where k_1 = … = k_d = 0. We now introduce additional multivariate extensions of Proposition <ref>. However, readers who are primarily interested in understanding the proposed methods can skip to Section <ref> as the remaining part of this section may not be crucial for this purpose. First, a result similar to Proposition <ref> holds when the gradient of f : Ω↦ℝ is Lipschitz continuous in L_2-norm. Suppose f : Ω↦ℝ for Ω⊆ℝ^d has a Lipschitz continuous gradient in L_2-norm such that ‖D^1 f(y) - D^1 f(x)‖_2 ≤ L ‖x-y‖_2 for some L > 0. Then there exists a multivariate convex function g_α : Ω↦ℝ for Ω⊆ℝ^d such that f(x) = g_α(x) - α x^⊤ x for any α satisfying α≥ L. A multivariate function g : Ω↦ℝ is convex if and only if (y-x)^⊤(D^1 g(y) - D^1 g(x)) ≥ 0 for any x, y ∈Ω. By definition of g_α and f, it follows that (y-x)^⊤(D^1 g_α(y) - D^1 g_α(x)) = (y-x)^⊤ (D^1 f(y) - D^1 f(x)) + α‖x-y‖_2^2 ≥ - ‖y-x‖_2 ‖D^1 f(y) - D^1 f(x)‖_2 + α‖x-y‖_2^2 ≥ - L ‖y-x‖_2^2 + α‖x-y‖_2^2 ≥ 0, which follows by Hölder's inequality and the assumed Lipschitz continuity. Thus, the function g_α is convex. 
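The univariate decompositions above are easy to sanity-check numerically. The sketch below uses f(x) = sin(3x) as a stand-in (so |f'| ≤ 3 and |f''| ≤ 9) and verifies on a grid that f(x) + αx is non-decreasing once α ≥ 3 and that f(x) + (α/2)x² is convex once α ≥ 9, while too-small values of α break both properties. The test function, the grid, and the tolerances are arbitrary choices made for illustration.

```python
import numpy as np

def is_nondecreasing(vals, tol=1e-9):
    return bool(np.all(np.diff(vals) >= -tol))

def is_convex_on_grid(vals, tol=1e-9):
    # non-negative second differences on an equispaced grid as a surrogate for convexity
    return bool(np.all(np.diff(vals, 2) >= -tol))

x = np.linspace(-2.0, 2.0, 2001)
f = np.sin(3.0 * x)   # |f'(x)| <= 3 and |f''(x)| <= 9

# k = 0: f(x) = g_alpha(x) - alpha * x with g_alpha non-decreasing whenever alpha >= 3.
print(is_nondecreasing(f + 3.0 * x), is_nondecreasing(f + 1.0 * x))          # True False

# k = 1: f(x) = g_alpha(x) - (alpha / 2) * x**2 with g_alpha convex whenever alpha >= 9.
print(is_convex_on_grid(f + 4.5 * x**2), is_convex_on_grid(f + 1.0 * x**2))  # True False
```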
We note that the parameter L is a d-dimensional vector in Proposition <ref> while it is a scalar constant in Proposition <ref>. The final example involves the generalized additive index model, which was initially studied by <cit.>. In this model, the multivariate function admits the additive decomposition: f(x_1, x_2, …, x_d) = ∑_i=1^m f_i(β_i^⊤ x) for some β_i ∈ℝ^d and 1 ≤ i ≤ m. Since each f_i is a univariate function, we can extend Proposition <ref> to the context of the additive model in the following proposition: Suppose f : Ω↦ℝ where Ω⊆ℝ^d and suppose there exists β_i ∈ℝ^d (1≤ i≤ m) such that f(x_1, x_2, …, x_d) = ∑_i=1^m f_i(β_i^⊤ x) for all x∈Ω. If f_i is k_i-times differentiable function with an L_i-Lipschitz k_i-th derivative for all 1≤ i≤ m, then for any α_i ≥ L_i, there exists (k_i+1)-monotone funct f(x_1, …, x_d) = ∑_i=1^m g_i, α_i(β_i^⊤x) - ∑_i=1^m {α_i/(k_i+1)!} (β_i^⊤x)^k_i+1 where g_i, α_i is a (k_i+1)-monotone function. In particular, if k_i = 0 for all 1≤ i≤ d, then, for any α_i ≥ L_i, there exists non-decreasing functions g_i,α_i(·) such that f(x_1, …, x_d) = ∑_i=1^d g_i,α_i(β_i^⊤x) - ℓ^⊤x, where ℓ := ∑_i=1^d α_iβ_i. The “standard" additive model is a special case of Proposition <ref> as follows: Taking β_i = e_i for 1≤ i≤ m = d in Proposition <ref> implies the decomposition of the following additive model: ∑_i=1^d f_i(x_i) = ∑_i=1^d g_i, α_i(x_i) - ∑_i=1^d {α_i/(k_i+1)!} x_i^k_i+1 where g_i, α_i is a (k_i+1)-monotone function. We have demonstrated that functions in the nonparametric class can be decomposed into the shape-restricted and parametric components. This decomposition holds true for a range of applications, including k-times differentiable functions for any integer k or certain multivariate cases. Based on this observation, we introduce a class of nonparametric regression estimators that leverages shape-restricted regression estimators, as detailed in the subsequent section. Although all the results in this section are presented in terms of Lipschitz continuous k-th derivatives, they can be extended to the Sobolev space using the Sobolev embedding theorem <cit.>. Assume Ω is an open subset of ℝ^d. For a multi-index k, the Sobolev space W^s,p(Ω) consists of functions f in L^p(Ω) such that for every multi-index k with |k| ≤ s the weak derivative D^k f belongs to L^p(Ω). We thus have W^s, p(Ω) := {f: Ω↦ℝ | f ∈ L^p(Ω) and D^k f ∈ L^p(Ω) for all |k| ≤ s}. The Sobolev space does not require pointwise differentiability of the functions it contains, but instead characterizes them based on the integrability of their weak derivatives. The Sobolev embedding theorem establishes that a function in the Sobolev space is a subset of a suitably chosen Hölder space, which allows us to extend the results of Proposition <ref> with the corresponding k. The theorem states W^s,p(Ω) ⊂Σ(s-d/p, L) when ps > d. As a limiting case, we simply have W^1, ∞(ℝ) = Σ(1, L), equivalently the space of L-Lipschitz functions. § PROPOSED ESTIMATORS In this section, we present a general estimation procedure for nonparametric regression functions that admit the decomposition described in Section <ref>. While we focus on the estimation of L-Lipschitz functions for exposition, the following method can be extended to the general case with minimal modifications. To establish the underlying intuition, we begin by considering the estimation of the univariate function. 
By applying Proposition <ref> with k=0, any L-Lipschitz function can be decomposed as f(x) = g_α(x) - α x for any α≥ L and g_α is a non-decreasing function. If the true Lipschitz constant were known, we could easily estimate g_L by regressing {y_i + L x_i}_i=1^n on x_1, …, x_n using an isotonic regression. However the exact value of L is rarely available in practice, and thus it needs to be estimated. Although a simple LSE over the class of functions g_α(x) - α x may seem plausible, it is not immediately effective. This is due to the fact that any finite data can be interpolated using a Lipschitz function by selecting an adequately large constant α. As a result, this class of functions can always achieve zero squared error for any observed data, which is reminiscent of overfitting. Hence, it is crucial to incorporate an estimation procedure for L beyond relying solely on simple least squares. Algorithm <ref> outlines a simple procedure based on sample-splitting. The procedure can be summarized as follows: First, we split the data into two independent sets. For each value of α in a pre-specified candidate set, we construct the estimator g_α(x) based on the first set. This can be any isotonic estimator including the LSE <cit.>, the monotone spline estimator <cit.>, or the smoothed monotone estimator <cit.>, in the univariate case. In the multivariate case, one can consider the block min-max estimator of <cit.> or the Bayesian monotone estimator <cit.>. The value of α and thereby the final estimator g_α(x) - α^⊤x is selected if it minimizes the estimated risk on the second set. While the proposed algorithm can be interpreted as performing cross-validation over the parameter α, it also possesses unique and desirable properties. Firstly, the core decomposition based on Proposition <ref> holds for any α≥ L without requiring α = L. This means that a “good" estimate of the regression function f_0 in terms of mean squared error can be obtained without precisely identifying the true value of L. As we demonstrate later, the estimator exhibits robust performance in terms of the mean squared error when a sufficiently large α is chosen (see Figure <ref>). This stands in contrast to standard model-selection procedures based on cross-validation. Second, the third step in Algorithm <ref> can be implemented using a numerical optimization program such as the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm <cit.>. Consequently, in practice, the only restriction is that the candidate set ℒ must contain large enough values of α such that α≥ L holds for some α∈ℒ. We will also demonstrate that the proposed procedure is robust against the randomness introduced by the data split, holding for a fixed observation sequence (see Figure <ref>), which is a desirable yet uncommon property for methods based on cross-validation. The extension to functions with Lipschitz k-th derivatives is straightforward by replacing α x with the appropriate polynomials as provided by Proposition <ref>. In this case, g_α can be obtained using a suitable estimator for (k+1)-monotone functions. For example, when k=1, this corresponds to an estimator for convex functions. The same approach can be applied to the multivariate examples presented in Propositions <ref> and <ref> by adopting the coordinate-wise isotonic regression or multivariate convex function and appropriate polynomial terms accordingly. Next, in view of Proposition <ref>, we outline the procedure when the additivity assumption is plausible. 
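Before turning to the additive case, a minimal sketch of the univariate procedure (Algorithm <ref> with k = 0) may be helpful. It uses scikit-learn's isotonic LSE for Step 2 and a simple grid search over the candidate set in place of a numerical optimizer such as BFGS; the toy regression function, noise level, candidate grid, and √n validation size are illustrative assumptions rather than recommended settings.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def fit_lipschitz(x, y, alphas, seed=0):
    """Sample-splitting estimator f_hat(x) = g_alpha(x) - alpha * x (k = 0)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    n_val = int(np.sqrt(len(x)))                  # second split of size ~sqrt(n)
    val, tr = idx[:n_val], idx[n_val:]

    best = (np.inf, None, None)
    for a in alphas:
        # Step 2: isotonic regression of y + a*x on the first split
        iso = IsotonicRegression(out_of_bounds="clip").fit(x[tr], y[tr] + a * x[tr])
        # Step 3: empirical risk of g_a(x) - a*x on the second split
        risk = np.mean((y[val] - (iso.predict(x[val]) - a * x[val])) ** 2)
        if risk < best[0]:
            best = (risk, a, iso)
    _, a_hat, g_hat = best
    return (lambda t: g_hat.predict(t) - a_hat * t), a_hat

# Toy example: f_0(x) = |2x - 1| is 2-Lipschitz, so any candidate alpha >= 2 is admissible
rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 1000)
y = np.abs(2 * x - 1) + rng.normal(0, 0.1, size=x.size)
f_hat, a_hat = fit_lipschitz(x, y, alphas=np.linspace(0.5, 8.0, 16))
grid = np.linspace(0, 1, 201)
print("selected alpha:", round(a_hat, 2),
      "| grid MSE:", round(float(np.mean((f_hat(grid) - np.abs(2 * grid - 1)) ** 2)), 5))
```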
We focus on the estimator for the additive Lipschitz functions. However, the same algorithm can be straightforwardly extended to additive functions where each component has Lipschitz k-th derivatives. The detailed procedure is outlined in Algorithm <ref>. While the basic concept behind the estimation procedure remains the same, there are notable differences in steps 2 and 3 compared to Algorithm <ref>. In this case, we perform a coordinate-descent procedure to update g^(i) for each i=1,… d component until the empirical risk converges. § CONVERGENCE RATES The convergence rate of (shape-restricted) LSEs has been well-studied in the literature. In this section, we present the convergence rates of the proposed estimator when α is fixed (i.e., not random) and the shape-restricted LSE g_α is used in Step 2 of Algorithm <ref>. Specifically, we let Z_i, α := Y_i +α X_i, and the shape-restricted least squares estimator <cit.> is given by g_α = _g∈𝒢∑_i=1^n (Z_i, α-g(X_i))^2 where 𝒢 is a collection of functions that satisfy constraints such as monotonicity. We now introduce the necessary notation and conditions. For the marginal distribution P_X of X and P_X-square integrable function g, we denote the squared L_2(P_X)-norm as g^2_L_2(P_X) := ∫ g ^2(x) dP_X. Using this notation, the mean squared error of the estimator integrated under the marginal distribution of X is given by f_n-f_0^2_L_2(P_X) = ∫ (f_n-f_0)^2 dP_X, which is the primary object of interest in this section. Furthermore, we assume the univariate regression model (<ref>) where X_1, … X_n ∈𝒳 := [0,1]. We then introduce the following assumptions: * There exists a constant C_1 > 0 such that f_0 _∞ < C_1 and f_n_∞ = O_P(1). * There exists a constant C_2 > 0 such that [Y^2 | X=x] ≤ C_2. * There exists a constant C_3 > 0 such that [ξ^2 | X] ≤ C_3 P_X-almost everywhere and [|ξ|^3 | X] is uniformly bounded. * The Lebesgue density of X satisfies c_4 < dP_X < C_4 for some c_4, C_4 > 0. * P_X is the uniform distribution. Broadly speaking, these assumptions can be grouped into three categories: <ref>, <ref>–<ref>, and <ref>–<ref>. We note that <ref><ref> and <ref> <ref>. <ref> is a common requirement that assumes the uniform boundedness of f_0 and the boundedness in probability of f_n. Assumptions <ref> and <ref> impose the moments of Y (and ξ). Traditionally, more stringent structures are assumed for ξ_i's, such as sub-Gaussianity, but recent works have focused on analyzing scenarios involving heavy-tailed ξ_i's <cit.>. Assumptions <ref> and <ref> pertain to the structure of the marginal distribution of P_X. Notably, the independence between ξ_i's and X_i's is not required for the following result we state. For any L-Lipschitz function f_0 and α≥ L, assuming <ref>, <ref> and <ref>, f_n - f_0 _L_2(P_X) = O_P(n^-1/3). Similarly for f_0 ∈Σ(2, L) and α≥ L, assuming <ref>, <ref> and <ref>, f_n - f_0 _L_2(P_X) = O_P(n^-2/5log n). Under the regression model  (<ref>) with L-Lipschitz function f_0, Proposition <ref> with k=0 implies that Y_i = f_0(X_i) + ξ_i = g_α(X_i) - α X_i + ξ_i for any α≥ L where g_α is a non-decreasing function. It then follows that f_n - f_0 _L_2(P_X) =(g_α - α x) - (g_α - α x) _L_2(P_X) =g_α - g_α_L_2(P_X). The last quantity has been investigated extensively in the literature on LSEs, and similar results can be obtained under different assumptions. For instance, Theorem 4.1 of <cit.> (with d=1 and α_0=α=1) states that under <ref>, <ref> and <ref>, g_α - g_α_L_2(P_X) = O_P(n^-1/3) which concludes the first claim. 
For f_0 ∈Σ(2, L), Proposition <ref> with k=1 implies that Y_i = f_0(X_i) + ξ_i = g_α(X_i) - α X_i^2 + ξ_i where g_α is a convex function. Theorem 3.1 of <cit.> (with α=1/2, s=2/3, ν=1/3, Φ=C√(log n) and A=(log n)^1/4) as well as its discussion in Section 3.1 shows that assuming <ref>, <ref> and <ref> g_α - g_α_L_2(P_X) = O_P(n^-2/5log n). This proves the second claim. The logarithmic factor in the upper-bound (<ref>) can be relaxed even for a heavy-tailed ξ_i's under the independence between ξ_i and X_i. See, for instance, Theorem 3 of <cit.> for the detailed conditions. Next, we present the convergence rate of the estimator when it is expected to be locally adaptive. The results are based on <cit.>, which provides the adaptive rate of shape-restricted LSEs. The following assumptions are introduced to the regression model (<ref>): * ξ_1, …, ξ_n are IID and independent of X_1, …, X_n. * ξ has finite L_2,1-norm, defined as ξ_2,1 :=∫_0^∞ P(|ξ| > t )^1/2 dt < ∞. We conjecture that <ref> can be relaxed in view of the recent work by <cit.>. <ref> assumes finite L_2,1-moment of ξ, which is only slightly stronger than assuming finite second moment of ξ but not stronger than assuming 2+δ moment for any δ > 0. We define ℳ_m as a collection of non-decreasing m-piecewise constant functions and 𝒞_m as a collection of convex m-piecewise linear functions. We now have the following result: Suppose f_0(x) = f_m(x) + Lx for all x, for some f_m ∈ℳ_m. Then, under <ref>, <ref>–<ref> and 0 < α - L = O(√(m/nlog^2n)), f_n - f_0 _L_2(P_X) = O_P(√(m/nlog^2 n)). Similarly, if f_0(x) = f_m(x) + Lx^2 for some f_m ∈𝒞_m, then under <ref>, <ref>–<ref> and 0 < α - L = O(√(m/nlog^2n)), f_n - f_0 _L_2(P_X) = O_P(√(m/nlog^2 n)). By the assumption, there exists L and f_m ∈ℳ_m such that f_0(x) = f_m(x) - Lx. It now follows by Proposition <ref> that P(f_n - f_0 _L_2(P_X)≥δ_n ) = P((g - α x) - (f_m - Lx + α x - α x) _L_2(P_X)≥δ_n ) ≤ P(g - f_m _L_2(P_X)+(L- α) x _L_2(P_X)≥δ_n ) ≤ P(g - f_m _L_2(P_X)≥δ_n/2 )+P(|L-α|x _L_2(P_X)≥δ_n/2 ). Theorem 3 by <cit.> states that under <ref> and <ref>–<ref>, P(g - f_m _L_2(P_X)≥ c√(m/nlog^2 n)) = ε for any ε∈ (0,1) where the constant c depends only on ε, ξ_2,1, f_m_∞ and 𝒢. Since α - L = O(√(m/nlog^2n)) and x_L_2(P_X)=1/9, we can select δ_n = √(m/nlog^2 n) to make the right-hand side of the above inequality arbitrarily small. This concludes the proof. When f_0 has representation f_m(x) - Lx^2 for some f_m ∈𝒞_m, it follows analogously that P(f_n - f_0 _L_2(P_X)≥δ_n ) ≤ P(g - f_m _L_2(P_X)≥δ_n/2 )+P(|L-α|x^2 _L_2(P_X)≥δ_n/2 ). The rest of the proof follows as an analogous application of Theorem 3 by <cit.>. Contrary to Theorem <ref> where the result holds for all α≥ L, the adaptive rate of Theorem <ref> requires α -L = O(√(m log^2 n/n)). This indicates that the proposed method requires slightly more accurate estimate of L in order to obtain the adaptive rate. In a nonparametric regression model, it is also common to consider a scenario where the design for X is fixed, meaning that X_1, …, X_n are deterministic and are not random or IID. An illustrative example is the equi-distant design, where X_i is defined as i/n for i=1, …, n. The upper bound on the squared risk and the local adaptivity of the LSEs under the fixed design have been shown by <cit.> for sub-exponential or sub-Gaussian ξ's. The preliminary results in this section focus on the univariate (and convex set) 𝒳⊂ℝ. 
For the multivariate function class corresponding to Proposition <ref>, it is known that the LSEs fail to adapt to non-decreasing piecewise constant functions at the minimax rate in L_2 for d ≥ 3 <cit.>. Hence, an alternative approach such as the block max-min estimator must be considered in order to recover minimax optimality <cit.>. We conjecture that the minimax rate of the block max-min estimator proved by <cit.> can be extended to our setting. Finally, we discuss the convergence rate of shape-restricted additive models. Remark <ref> implies that an additive function can be decomposed as follows: ∑_i=1^d f_i(x_i) = ∑_i=1^d g_i, α_i(x_i) - ∑_i=1^d {α_i/(k_i+1)!} x_i^k_i+1 = μ^* + ∑_i=1^d g^*_i, α_i(x_i) - ∑_i=1^d {α_i/(k_i+1)!} x_i^k_i+1. The constant offset μ^* is introduced to ensure that ∫_0^1 g^*_i, α_i(x) dx=0 for all i = 1, …, d. By applying the analogous arguments to the proofs of Theorems <ref> and <ref>, we can characterize the risk of the proposed estimator for a fixed α=(α_1, …, α_d) using the LSEs of μ^* + ∑_i=1^d g^*_i, α_i(x_i). Formally, we define the LSE of the additive function for given {(X_i, Y_i)}_i=1^n with X_i = (X_i1,…, X_id) as (μ, g_1, …g_d) := ∑_i=1^n{Y_i-μ-∑_j =1^d g_j(X_ij)}^2 where is taken over μ∈ℝ and g_j∈𝒢_j for j = 1, …, d. We define an oracle estimator of the ℓth component of the additive function as follows: g_ℓ := _g∈𝒢∑_i=1^n{Y_i-μ^*-∑_j ≠ℓ g^*_j, α_j(X_ij)- g(X_iℓ)}^2. This estimator is not practically feasible as it assumes the knowledge of μ^* and g^*_j, α_j for j ≠ℓ. Under the assumption that the covariate space Ω⊆ℝ^d forms a Cartesian product set 𝒳_1 ×…×𝒳_d and 𝒳_i ⊆ℝ for all i=1,…,d, Lemma 3.1 of <cit.> states that g_i = g_i for i=1,…, d. With this general result, the risk behavior of the LSE g_i of g^*_i, α_i is characterized by the behavior of the oracle estimator g_i. Thus, the convergence rates provided by Theorems <ref> and <ref> are applicable to the context of the additive model. In the subsequent section, we demonstrate that our estimator for additive models, as outlined by Algorithm <ref>, achieves the convergence rate prescribed by Theorems <ref> and <ref> when each component of additive functions satisfies the necessary shape constraint. § NUMERICAL STUDIES §.§ Univariate Nonparametric Regression We conduct numerical studies to assess the finite-sample properties of the proposed procedures. We first consider cases with univariate covariates where X follows a uniform distribution [0,1]. The response variables are generated according to Y_i = f(X_i) + ξ_i for i = 1, 2,…, n with several different regression functions f. The noise ξ_i's are IID. N(0, 0.1^2) variables across all scenarios. The samples sizes vary from 10^2 to 10^4. For each scenarios, we repeat the experiment 300 times. For the proposed method, we split the index set {1, 2, …, n} into two disjoint subsets ℐ_1 and ℐ_2 such that {(X_i, Y_i) : i ∈ℐ_2} contains √(n) observations. We also compare our method with other existing nonparametric estimators: * Kernel ridge regression (): The kernel function is given by K(x,z) := 1+min(x,z), which corresponds to the first-order Sobolev space (i.e., Example 12.16 of <cit.>). We use 10 grids for the penalization tuning parameter of ridge regression. Due to the computationally intensive nature of KRR, we do not explore finer choices of the tuning parameters. * Gradient boosting machines (): The shrinkage parameter is set to 0.01, and we choose the maximum depth of each tree from the set {2, 5}. 
The total number of trees is selected from {100, 1000, 2000, 4000, 8000}. The remaining parameters are set to their default values according to the library in . * Random forest regression (): The number of trees is selected from {50, 100, 500, 1000, 5000}. We use the estimator implemented by R package <cit.>. * Penalized sieve estimator with cosine basis: We employ 50 basis functions and select the penalization tuning parameter from (approximately) 100 default grids. The estimators are realized using R package <cit.>. For all procedures, the model selection is performed through cross-validation based on the validation set {(X_i, Y_i) : i∈ℐ_2} with a sample size of √(n). For sample sizes exceeding 2000, we omit the results obtained from and due to computational limitations. We consider the following four “true” regression functions: Scenario 1 We examine a Lipschitz function defined as: f_1(x) := (1-3x)I_[0,1/3](x) + (-1+3x)I_[1/3,2/3](x) + (3-3x)I_[2/3,1](x). The proposed estimator, along with other nonparametric regression estimators, is expected to converge at a rate of n^-2/3 in terms of their (integrated) squared risk. Scenario 2 We consider a case where our proposed estimators are expected to be locally adaptive, converging at a parametric rate of n^-1 (up to a logarithmic term). Define M_m(x) := ∑_i=1^m iI_[(i-1)/m,i/m](x). The proposed estimator is anticipated to be adaptive to the following truth function: f_2(x; m, β) := M_m(x) + β x. For this specific scenario, we use m=3 and β=1. Scenario 3 The next two scenarios focus on the application of the convex regression. The following example corresponds to a smooth function, in the Hölder sense, defined as: f_3(x; γ) := sin(γ(2x-1)). For this particular scenario, we select γ = 4. The proposed estimator is anticipated to converge at a rate of n^-4/5 ignoring a logarithmic term. Scenario 4 The final example illustrates another scenario where our proposed estimator is expected to be locally adaptive. Specifically, the estimator is designed to adapt to any function that can be decomposed as a sum of a convex m-piecewise linear function and a quadratic term. We define C_m(x) as a convex m-piecewise linear function with 1/m equally sized segments over X, and the slopes are determined as (-1, 0, 1, …, m-2). Additionally, we enforce the condition C_m(0)=0, thereby defining a unique convex m-piecewise linear function. We generate observations from one such regression defined as: f_4(x; m, β) :=C_m(x) + β x^2. In this particular scenario, we consider the case with m=3 and β=1. Figure <ref> displays a single sample path of the estimated regression function using a sample size of n=500. The red line represents the estimated regression function, while the black line represents the true regression function. The estimator appears to be consistent with the true curves, including the regression functions that contain non-differentiable points (Scenario 1) as well as discontinuities (Scenario 2). Next, we study the convergence rate of mean squared errors (MSE) for various methods as the sample sizes vary. To estimate the MSEs, we generate 10^5 new data points and calculate the prediction errors. Figure <ref> displays the average MSEs for various sample sizes, while Figure <ref> displays box plots representing the MSE distributions specifically for sample sizes of n=100, 1000, and 10000. 
The solid black lines in Figure <ref> represent the advertised rates of convergence for the proposed method, namely n^-2/3 for Scenario 1, n^-4/5 for Scenario 3, and n^-1 (ignoring a logarithmic rate) for Scenarios 2 and 4. Additionally, Table <ref> presents the estimated slope based on linear regression using observations with sample sizes n ≥ 2000. We observe that in the small-sample regimes, the proposed method deviates from the theoretical rate of convergence. However, as the sample size increases, the method aligns more closely with the expected rate of convergence. In Scenario 1, most of the nonparametric regression methods demonstrate comparable performance, with the proposed method performing particularly well for larger sample sizes. Similar conclusions can be drawn for Scenario 3, with the performance of the proposed method being especially pronounced. In the two scenarios where the proposed method is expected to be adaptive, exhibiting a parametric convergence rate, it outperforms all other methods for sample sizes larger than 1000. These findings highlight the adaptability and effectiveness of the proposed method as the sample size increases. §.§ Multivariate Regression with Additive Structure In this section, we consider the multivariate extensions assuming an additive structure for the underlying regression functions, as discussed in Remark <ref> from Section <ref>. For this set of numerical studies, we also investigate the Generalized Additive Model () with the dimension of basis set to 30. We first consider two examples with 2-dimensional covariates x=(x_1, x_2): * Scenario 1 (2d): f(x) := f_1(x_1) - f_1(x_2), * Scenario 2 (2d): f(x) := f_2(x_1; 3, 1) - f_2(x_2; 3, 1), where the component functions f_1,f_2 are defined in (<ref>) and (<ref>). Similar to the univariate cases, we anticipate that our proposed method will converge essentially at a rate of n^-2/3 for Scenario 1 and n^-1 for Scenario 2. We also consider two 5-dimensional examples: * Scenario 3 (5d): f(x) := f_1(x_1) - f_1(x_2) + x_3- x_4 + 1, * Scenario 4 (5d): f(x) := f_2(x_1; 1, 0) + f_2(1-x_2; 3, 3) +f_2(x_3; 3, 3) + f_2(1-x_4; 1, 3) + f_2(x_5; 1, 3). We also expect our proposed method to converge at a rate of n^-2/3 for Scenario 3 and n^-1 for Scenario 4. In the case of additive functions, Figure <ref> displays the average MSEs while Figure <ref> displays box plots representing the distribution of observed MSEs. Additionally, Table <ref> presents the estimated slope based on linear regression using observations with sample sizes n ≥ 2000. Similar to the univariate case, the theoretical rate of convergence aligns with the empirical behavior for larger sample sizes. We observe that the generalized additive model performs well for regression functions even with non-differentiable points (Scenarios 1 and 3). However, it struggles to accurately estimate functions with discontinuities (Scenarios 2 and 4). In contrast, our proposed method performs well when each additive component is a non-decreasing piecewise constant function plus a linear term. §.§ Robustness to the Tuning Parameters Next, we examine the robustness of the proposed method regarding two aspects: (1) the specific values of the parameter α and (2) the randomness from cross-validation. First, we investigate the MSEs of the proposed method when the parameter α is pre-specified. Hence, the resulting estimator g_α - α x depends on the data only through the estimated function g_α. 
Figure <ref> displays the MSEs (on a logarithmic scale) as the value of α changes. The results are presented for sample sizes of n=500, 1000 and 5000. To recall, Proposition <ref> holds for any α≥ L where L is the true Lipschitz constant, implying that we expect the MSEs to be robust once the value of α surpasses L. This property is demonstrated in Figure <ref>. We also investigate the robustness of the proposed method under the random split of cross-validation. To study this, we generate the observations once and then examine the MSEs of the resulting estimators for different cross-validation splits. We repeat this process 300 times, each time using a different split, and then examine the MSEs. The top plot of Figure <ref> displays the box plots of the MSEs while the bottom plot shows the box plots of the values of α obtained from the cross-validation splits. We observe that the MSEs across different splits are concentrated even for small sample sizes. For example, in Scenario 1 with n=500, the majority of MSEs fall within the range of 10^-3.25 to 10^-3, indicating very small variability between cross-validation splits. Similar conclusions can be drawn for the other scenarios. The bottom plot of Figure <ref> shows that the variability of α between random splits is also small. This indicates the robustness of the proposed procedure in the selection of α. §.§ Additional Results on Adaptive Rates Lastly, we investigate the behavior of the proposed estimator in scenarios where the method is expected to converge at a faster rate. In particular, Scenario 2 represents non-decreasing m-piecewise constant functions with linear terms and Scenario 4 represents convex m-piecewise linear functions with quadratic terms. Theorem <ref> implies that the mean squared error of the proposed estimator in these scenarios is expected to grow linearly in m. To verify this property, we generated 5000 observations from Scenarios 2 and 4 for values of m in {1,2,3,4,5}. The results are presented in Figure <ref>. As expected, the average MSEs over 300 repetitions increase linearly as the value of m increases. § CONCLUSION This article introduces a new class of nonparametric regression estimators that leverages existing shape-restricted regression methods. The proposed method takes advantage of the decomposition of nonparametric functions within the Hölder class into shape-restricted and parametric components. We then propose an estimation procedure based on sample-splitting, which practically eliminates the tuning parameter. Our proposed method inherits favorable properties from shape-restricted regression estimators, including efficient computation and optimal convergence rates. Moreover, our estimator adapts to specific regression functions, automatically converging at the parametric rate (up to a logarithmic term). In a subsequent investigation, we will establish more rigorous theoretical properties, specifically the rate of convergence under dependent, heteroscedastic, and heavy-tailed noise <cit.>. Moreover, we will provide these theoretical results without assuming a fixed value for α, thereby formally proving the properties of the proposed procedure while accounting for the randomness introduced by the sample-splitting steps. In practice, uncertainty quantification is crucial in regression analysis, and the construction of valid confidence intervals is desirable.
While precisely characterizing the limiting distribution in shape-restricted regression is challenging due to irregularity, we anticipate that the recently developed HulC procedure <cit.> will be applicable in this context. § ACKNOWLEDGEMENTS The authors gratefully acknowledge support from NSF DMS-2210662.
http://arxiv.org/abs/2307.04508v1
20230710120620
Laplace-Transform GW
[ "Johannes Tölle", "Niklas Niemeyer", "Johannes Neugebauer" ]
physics.chem-ph
[ "physics.chem-ph", "physics.comp-ph" ]
Laplace-Transform GW Johannes Tölle^1,[email: [email protected]], Niklas Niemeyer^2,, and Johannes Neugebauer^2[email: [email protected]] ^1Division of Chemistry and Chemical Engineering, California Institute of Technology, Pasadena, California 91125, USA ^2Theoretische Organische Chemie, Organisch-Chemisches Institut and Center for Multiscale Theory and Computation, Westfälische Wilhelms-Universität Münster, Corrensstraße 36, 48149 Münster, Germany ^Both authors contributed equally. Date: July 9, 2023 empty Abstract We present a simple and accurate GW implementation based on a combination of a Laplace transformation (LT) and other acceleration techniques used in post-SCF quantum chemistry, namely, natural auxiliary functions and the frozen-core approximation. The LT-GW approach combines three major benefits: (a) a small prefactor for the computational scaling, (b) easy integration into existing molecular GW implementations, and (c) significant performance improvements for a wide range of possible applications. Illustrating these advantages for systems consisting of up to 352 atoms and 7412 basis functions, we further demonstrate the benefits of this approach combined with an efficient implementation of the Bethe–Salpeter equation. INTRODUCTION – After its introduction in 1965 <cit.>, the GW (G: time ordered one-body Green’s function, W: screened Coulomb interaction) method has now become the standard approach for the accurate ab-initio determination of ionization potentials (IPs), electron affinities (EAs) (or more generally quasi-particle energies), and in combination with the Bethe–Salpeter equation (BSE), for excitation energies in condensed matter physics <cit.>. The adoption within the realm of quantum chemistry has been established in recent years <cit.> with the availability of implementations in a wide range of molecular quantum chemistry codes, see e.g., Refs. <cit.>. The success of the GW method is owed to the fact that it offers good accuracy while being computationally feasible for a wide range of systems, c.f. Ref. <cit.>. However, the GW method generally relies on error cancellation, and G_0W_0, in particular, depends on the starting point chosen, the approach used for determining the dielectric function, and the self-consistency scheme chosen for the GW calculation. An excellent overview of the different aspects related to the GW approximation can be found in Ref. <cit.>. Especially the computational cost for determining the screened Coulomb interaction and therefore the G_0W_0 self-energy Σ_0 varies significantly for different practical realizations of the GW method in molecular orbital bases. The “fully-analytic” approach <cit.>, for example, scales as 𝒪(N^6). The scaling can be reduced significantly by numerical integration of the self-energy Σ_0, Σ_0(,,ω) = i/2π∫ dω' e^iω'η G_0(,,ω+ω') W_0(,,ω'), where the non-interacting one-particle Green's function is denoted as G_0 and the screened Coulomb interaction as W_0. To avoid divergences along the real frequency axis <cit.>, the integration in Eq. (<ref>) is commonly performed along the imaginary frequency axis in combination with analytic continuation (AC) to the real frequency axis leading to a formal scaling of 𝒪(N^4) <cit.>. Alternatively, one can employ the so-called contour-deformation approach (CD) <cit.> by dividing the integration in Eq. (<ref>) into an integration along the imaginary frequency axis and the real-frequency axis. 
The scaling, however, is 𝒪(N^4-5) and depends on the quasi-particles to be determined (see Ref. <cit.>). Σ_0 can also be determined within the space-time formulation of the GW method <cit.>. In this approach, the construction of W_0 is performed in imaginary-time rather than frequency space in combination with additional techniques, among others, real-space grid representation of the Green's function <cit.>, pair atomic density fitting <cit.>, or separable density-fitting <cit.> to reduce the overall scaling to 𝒪(N^3). Note that this ansatz shares certain similarities to Laplace-transform (LT) techniques developed in molecular quantum chemistry <cit.>. One drawback of these methods is, however, related to increasing memory requirements and larger prefactors due to the real-space representation <cit.>, potentially uncontrollable errors introduced by exploiting locality <cit.>, or the necessity to construct specialized real-space grids <cit.>. These aspects also lead to more challenging numerical implementations of these methods, potentially limiting their widespread application. This work demonstrates an alternative efficient evaluation of the GW self-energy by combining different ideas for reducing the computational cost based on the AC-GW formulation. In particular, we make use of a Laplace transformation for the evaluation of W_0, a truncation of the auxiliary basis using natural auxiliary functions (NAF) <cit.> and the frozen-core (FC) approximation. We refer to this approach as LT-GW which is based on three guiding principles: (a) a small prefactor should be preserved, (b) adaptation of existing AC-GW implementations should require minimal effort, and (c) significant performance improvements should result for a wide range of system sizes with controllable error.   THEORY – In the following, a concise overview of the modified GW implementation based on the Laplace-transform (LT) technique is given. More detailed information regarding GW implementations based on imaginary frequency integration can be found in Refs. <cit.>. A diagonal element nm for the correlation part of the screened-Coulomb interaction W^c_nm in a molecular orbital basis for an imaginary frequency iω is calculated as W^c_nm(iω') = ∑_PQ R^P_nm{[1 - Π(iω')]_PQ^-1 - δ_PQ}R^Q_nm, where molecular spin-orbital (ϕ) and auxiliary basis function (χ) indices are given in lowercase and uppercase letters, respectively. Furthermore, i,j,… refer to occupied, a,b,… to virtual, and n,m,… to arbitrary orbitals with eigenvalues ϵ. Π_PQ(iω') is evaluated as Π_PQ(iω') = - 2 ∑_iaR^P_ia(ϵ_a - ϵ_i)/ω'^2 + (ϵ_a - ϵ_i)^2 R^Q_ia, and the transformed three-center integrals R^P_nm are defined as R^Q_nm = ∑_P (nm|P) [𝐕^-1/2]_PQ, with (nm|P) = ∫ d∫ dϕ_n() ϕ_m() χ_P()/| - |, and V_PQ = ∫ d∫ dχ_P() χ_Q()/|-|. In AC-GW, the construction of Π_PQ(iω') is the most time-consuming step, formally scaling as 𝒪(N_oN_vN_aux^2) for each imaginary frequency (N_o being the number of occupied orbitals, N_v the number of virtual orbitals, and N_aux the number of auxiliary functions). Finally, the correlation (dynamical) part of the G_0W_0 self-energy Σ^c is obtained (ϵ_F denotes the Fermi-level) Σ_n^c(iω)= -1/π∑_m ∫_0^∞ d ω' iω + ϵ_F - ϵ_m/(iω + ϵ_F - ϵ_m )^2 + ω'^2 W_nm(iω'), which is integrated numerically using a modified Gauss-Legendre quadrature, see Refs. <cit.>. Quasi-particle energies are then determined by AC of Σ^c to the real frequency axis. For the AC to the real frequency axis, we use a N-point Padé approximation as described in the appendix of Ref. 
<cit.>. In this work, we make use of the LT for evaluating Π_PQ(iω'). In a first step, the denominator in Eq. (<ref>) is rewritten as 1/ω'^2 + (ϵ_a - ϵ_i)^2 = ∫^∞_0 dτexp(-(ω'^2 + (ϵ_a - ϵ_i)^2)τ) = ∫^∞_0 dτexp(-ω'^2τ) exp(-( ϵ_a - ϵ_i)^2 τ). holding for (ω'^2 + (ϵ_a - ϵ_i)^2) > 0 which is guaranteed to be true. Replacing the denominator with the integral in Eq. (<ref>) allows to apply a numerical integration of the form 1/ω'^2 + (ϵ_a - ϵ_i)^2 ≈ - ∑_m^N_LT w_m exp(-(ω'^2 + (ϵ_a - ϵ_i)^2) x_m) = - ∑_m^N_LT w_m exp(-ω'^2 x_m) exp(-(ϵ_a - ϵ_i)^2 x_m), where the N_LT quadrature points and their corresponding weights are denoted as x_m and w_m, respectively. Factorizing the exponential functions with frequencies and orbital-energy differences as their arguments through the LT allows evaluating their contributions to Π_PQ(iω') separately as Π_PQ(iω') ≈ -2 ∑_m ∑_iaR^P_ia w_m (ϵ_a - ϵ_i) e^-(ϵ_a - ϵ_i)^2 x_m R^Q_ia_M^m_PQ(iω') e^-ω'^2 x_m. In practice, M^m_PQ(iω') is calculated for each quadrature point, which requires N_LT N_oN_vN_aux^2 operations, followed by the outer loop over imaginary frequencies [see Eq. (<ref>)] counting N_LT N_aux^2 N_iω operations. In contrast, the evaluation of Eq. (<ref>) for the determination of quasi-particle energies requires N_iω N_oN_vN_aux^2 operations. It becomes clear that the formal scaling remains unchanged with 𝒪(N^4) since neither N_iω nor N_LT depends on the system size represented by N. A constant speed-up can, however, be expected using the LT technique as long as N_LT < N_iω which is proportional to the ratio N_iω/N_LT. The natural auxiliary function (NAF) approximation <cit.> reduces the size of the three-index integral tensor that commonly appears in post-SCF methodology making use of the resolution of the identity approximation. Its basis is given by a symmetric, positive definite matrix K that reads K_PQ = ∑_nm R^P_nmR^Q_nm. A rank reduction of the three-index integral list is achieved by first diagonalizing K to yield the NAFs labeled by P̃, ∑_Q K_PQ V_Q,P̃ = V_P P̃ϵ_P̃ , followed by setting up a transformation matrix U_PP̃ that only includes NAFs with corresponding eigenvalues above a certain threshold ε_NAF (assembled from the columns of V_P P̃). Finally, the three-center integral tensor is transformed to the NAF space following R^P̃_nm = ∑_P R^P_nm U_PP̃. In the limit of U including all eigenvectors of K, Eq. (<ref>) represents an orthogonal transformation. Our implementation omits the virtual–virtual part of the sum in Eq. (<ref>) due to its unfavorable scaling with the system size. Closed-shell molecules are handled by including a factor of two in Eq. (<ref>) to account for the single set of spatial orbitals. Determining the NAFs formally scales as 𝒪(N_o N_v N^2_aux). The theoretical speed-up of the NAF approximation in AC-GW calculations becomes apparent when inspecting Eqs. (<ref>) and (<ref>). The time-determining step includes an inner product of the three-index integral tensor contracting the occupied–virtual composite index ia. As a result, the expected speed-up scales quadratically with the quotient of the number of original auxiliary basis functions N_aux and the number of NAFs N_NAF, that is, (N_aux/N_NAF)^2.   Quasi-particle energies using LT-G_0W_0 – A detailed overview of the computational details is given in Sec. S1 of the Supporting Information (SI). In the following, we will demonstrate the robustness, scalability, and speed-up of combining AC-G_0W_0 with the LT, NAF, and FC techniques. 
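Before turning to the benchmark results, the factorization of Π_PQ(iω') introduced above can be illustrated with a small, self-contained numerical sketch: the polarizability is assembled once directly and once from the frequency-independent Laplace intermediates M^m_PQ, and the two results are compared. All dimensions, orbital energies, and three-center integrals below are random stand-ins rather than data for a real molecule, and the Laplace nodes and weights are obtained from a crude least-squares fit instead of the optimized grids used in production LT implementations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions and random stand-ins for orbital energies and R^P_ia integrals
n_occ, n_virt, n_aux = 6, 10, 30
eps_i = rng.uniform(-1.0, -0.3, n_occ)          # occupied orbital energies
eps_a = rng.uniform(0.2, 1.5, n_virt)           # virtual orbital energies
R = rng.normal(size=(n_aux, n_occ * n_virt))    # R^P_ia with composite index ia
delta = (eps_a[None, :] - eps_i[:, None]).ravel()
omegas = np.linspace(0.0, 3.0, 16)              # imaginary-frequency grid

def pi_direct(w):
    """Direct evaluation of Pi_PQ(i w'): O(N_aux^2 * N_occ N_virt) work per frequency."""
    return -2.0 * (R * (delta / (w**2 + delta**2))) @ R.T

# Laplace nodes x_m (log-spaced) and weights w_m fitted so that
# 1/(w'^2 + delta^2) ~= sum_m w_m exp(-(w'^2 + delta^2) x_m) over the relevant range
x_m = np.geomspace(1e-3, 50.0, 24)
c = np.geomspace(delta.min()**2, omegas.max()**2 + delta.max()**2, 400)
w_m, *_ = np.linalg.lstsq(np.exp(-np.outer(c, x_m)), 1.0 / c, rcond=None)

# Frequency-independent intermediates M^m_PQ: built once, O(N_LT * N_aux^2 * N_occ N_virt)
M = np.stack([(R * (delta * np.exp(-delta**2 * xm))) @ R.T for xm in x_m])

def pi_laplace(w):
    """Reassembly from the intermediates: only O(N_LT * N_aux^2) work per frequency."""
    return -2.0 * np.tensordot(w_m * np.exp(-w**2 * x_m), M, axes=1)

err = max(np.abs(pi_laplace(w) - pi_direct(w)).max() / np.abs(pi_direct(w)).max()
          for w in omegas)
print(f"max relative deviation between direct and LT-factorized polarizability: {err:.1e}")
```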
First, its accuracy is determined for a subset of the GW100 benchmark set <cit.>. Reference orbitals were obtained using the Hartree–Fock approximation throughout. All results are compared to reference quasi-particle (QP) energies based on the “fully-analytic” evaluation of the G_0W_0 self-energy without employing the RI approximation (also for the mean-field calculation) <cit.>. The results of 15 representative molecular systems are explicitly shown here and deviations for the rest of the benchmark set can be found in the SI. Note that we omitted all molecular systems containing very heavy atoms such as iodine and xenon, as well as the rubidium and silver dimers because we restrict ourselves here to a non-relativistic description and do not use effective core potentials in this work. This reduces the total number of systems included in our calculations to 93. The signed error for the HOMO and LUMO QP energies relative to the “fully-analytic” evaluation of the G_0W_0 self-energy without making use of the RI approximation are shown in Tabs. <ref> and <ref>. The approximate treatments include (a) the “fully-analytic” approach using the RI approximation, (b) AC-G_0W_0, (c) AC-G_0W_0 in combination with LT (ε_LT=10^-7), (d) AC-G_0W_0 in combination with FC, (e) AC-G_0W_0 in combination with the NAF approximation (ε_NAF = 10^{-6,-4,-2}), and (f) combining AC-G_0W_0 with LT/NAF/FC (ε_LT=10^-7, ε_NAF = 10^{-6,-4,-2}). Comparing the “fully-analytic” evaluation with and without the RI approximation, a mean absolute error (MAE) of 1.1 meV (HOMO) and 1.6 meV (LUMO) in the quasi-particle energies is found. Virtually identical deviations are obtained for AC-G_0W_0 highlighting its applicability for determining valence G_0W_0 quasi-particle energies. Applying the LT leads to almost identical results with deviations smaller than 0.1 meV, numerically justifying the chosen parameters for the LT quadrature. Introducing additional approximations such as NAF and FC increases the QP errors. However, the overall accuracy for the different thresholds and combinations of the various approximations remains below an MAE of 10.0 meV for both HOMO and LUMO quasi-particle energies with the largest deviation of 29.6 meV for the HOMO quasi-particle energy of vinyl bromide in the case of FC and AC/FC/LT/NAF. As described in the SI, this error originates from the FC for bromine and can readily be reduced to below 5 meV by adjusting the number of frozen core orbitals. Because all systems in the following mainly contain first- and second-row elements (with the exception of WW-6 which is separately benchmarked against non-FC calculations), we continue to use the default number for frozen core orbitals as described in Sec. S1 of the Supporting Information. From the above analysis, it becomes clear that AC-G_0W_0 in combination with a comparatively loose NAF threshold of 10^-2 leads to an almost negligible error. As a result, all further calculations shown in this article will be confined to this threshold. Next, we performed G_0W_0 calculations on water clusters (see Fig. <ref>) of increasing size containing ten to 100 water molecules (corresponding to 430 to 4300 SCF basis functions in a def2-TZVP basis, respectively) and investigate QP energies and computational timings (computational details are given in Sec. S1 of the Supporting Information). 
The geometries were obtained by first generating a cubic 20× 20× 20 Å^3 water cluster containing 233 water molecules with VMD <cit.>, optimizing it with GFN2-xTB (6.4.1) <cit.> and then including the respective number of molecules closest to the center of mass of the whole cluster. In Fig. <ref>, we display the signed error in QP energies as a function of the number of molecules included in the water cluster for the HOMO and the LUMO for the different approximate strategies employed here as well as a combination thereof. Again, we find that the LT approximation does not introduce significant errors in QP energies for either the HOMOs or the LUMOs. For the NAF approximation (ε_NAF = 10^-2), the error with respect to the reference calculation is constant at about 1.5 meV and 3.0 meV for the HOMO and the LUMO, respectively. For the FC approximation, a constant error of about 3.5 meV and -0.5 meV is observable for the HOMO and the LUMO energies, respectively. While the error of the approximation combining LT, NAF, and FC exceeds the individual errors in the HOMO case (about 4.5 meV), we find partial error cancellation in the LUMO case (about 1.8 meV). Most importantly, however, it can be seen that (a) the error in QP energies is essentially independent of the system size and (b) the magnitude of QP energy errors is within a tolerable range using the approximations and thresholds suggested here (compare SI, Sec. 1). As a next step, we show computational timings of the various G_0W_0 methods. To assess the practical scaling behavior with the system size, we consider a double logarithmic plot of wall-clock timings for the calculation of the screened Coulomb interaction W_0 [see, e.g., Eq. (<ref>)] as a function of the number of SCF basis functions in Fig. <ref>. A non-logarithmic wall-clock timing plot along with the resulting speed-ups can be found in Fig. S2 of the Supporting Information. Taking a look at the corresponding linear fits performed on the data in Fig. <ref>, we find a slope of 3.34 for the unmodified AC-G_0W_0 algorithm, which is only slightly smaller than the formal scaling exponent of four that would be expected for the AC approach. The exponent is reduced by both the FC and NAF approximations to 3.30 and 3.13, respectively, where no such reduction would be expected for the exponent but rather for the prefactor only. Here, we note that the number of NAFs included in the calculations is on average 25–30% lower than the number of original auxiliary basis functions. For the water cluster containing 100 water molecules, the auxiliary-basis size reduction is 26%, which should result in a speed-up of 0.74^-2≈ 1.83, and which is close to the observed speed-up of 2.0. The LT approximation leads to a lowering of the exponent from 3.34 to 2.78. In this case, the expected speed-up should be proportional to the quotient of the original number of imaginary frequencies and the number of Laplace grid points (see Eq. <ref>). For the cluster containing 100 water molecules, this ratio is 128/17 ≈ 7.5 which compares well with the observed speed-up of 6.7. Inspecting the exponents of the two combined approximations LT/NAF as well as LT/NAF/FC, we find that the individual reductions in computational scaling add up so that for LT/NAF/FC the slope of the linear fit (as a measure of the computational scaling) is lowered by almost one with respect to the regular AC-G_0W_0 calculation. 
For the presented wall-clock timings, it can thus be seen that, although the formal scaling behavior is unchanged by the approximations introduced, LT-G_0W_0 leads to a drastically lower practical computational scaling while retaining a very high degree of accuracy. Additionally, we consider absolute timings of the G_0W_0 and eigenvalue-self-consistent GW (five cycles) calculations for the cluster containing 100 water molecules to illustrate the speed-up that can be expected in practical calculations with moderately sized systems and the LT-G_0W_0 method. The results can be found in Tab. <ref>. It turns out that the speed-ups of the composite approximation LT/NAF/FC are 18.1 and 17.6 for G_0W_0 and evGW, respectively, which slightly exceeds the product of the speed-ups of the individual LT (6.7 and 6.6), NAF (2.0 and 2.1), and FC (1.2 and 1.3) approximations, each amounting to roughly 16. The individual approximations thus do not interfere with each other but can constructively be used in combination, and the respective speed-up directly carries over to (partially) self-consistent GW calculations. Finally, we note that the G_0W_0 calculation using only the LT approximation is about twice as fast as the regular one already for the smallest investigated water cluster containing 10 molecules (10 seconds vs 20 seconds), providing evidence for the small prefactor of LT-GW combined with the NAF and FC approximations. LT-G_0W_0 with BSE – We apply a combination of LT-G_0W_0 and the Bethe–Salpeter (BSE) equation to investigate the effect of the LT approximation on the accuracy of linear absorption spectra. The BSE calculations are performed with the efficient integral-direct resolution of the identity implementation for the Hartree–Fock and long-range exchange part of the response matrix in Serenity originally presented in our work in Ref. <cit.>. As introduced above, the LT-G_0W_0 method refers to the application of the LT, NAF, and FC approximation and will be used in the following. As a first test case, we consider the WW-6 dye relevant in photovoltaics <cit.>. The molecular geometry was taken from Ref. <cit.> and is displayed in Fig. <ref>. Within the def2-TZVP basis set, there are 5583 SCF basis functions as well as 13802 auxiliary basis functions for the GW/BSE part of the calculation. In Fig. <ref>, we compare the linear absorption spectra for the WW-6 system that was obtained with the regular AC-G_0W_0/BSE calculation with the LT-G_0W_0 calculation employing both the NAF (ε_NAF = 10^-2) and the FC approximations. In both cases, eight of the lowest-lying excitation energies and corresponding oscillator strengths were determined. The FC approximation was not applied for the BSE calculations. We find no visible difference between the linear absorption spectra calculated with the regular and the approximate approach. Numerical results for QP energies as well as excitation energies and oscillator strengths can be found in Tabs. <ref> and <ref>, respectively. The mean deviation of QP energies is about 9.6 meV which far exceeds the mean error of excitation energies and oscillator strengths which amount to 0.75 meV and 0.39· 10^-3 a.u., respectively. The occupied and virtual QP energy errors are more systematic for this test system than for the HOMOs and LUMOs of the water clusters investigated beforehand. This results in more favorable error cancellation for excitation energies, which depend on QP energy differences. 
The errors of the oscillator strengths are equally negligible, which, in turn, is probably a result of the eigenvectors of the BSE problem being largely unaffected because of the error cancellation mentioned above. Inspecting the computational timings (given in Fig. <ref>), we find that in the regular case, the overall wall-clock timings are dominated by the calculation of the screened Coulomb interaction W with 2293 minutes, while in the approximate case, the BSE part of the calculation exceeds the time needed for the GW calculation by far. Here, the overall G_0W_0 calculation time is, in fact, dominated by the preparation of the three-index MO integrals, as the calculation of W only took 103 minutes. We also note that for the approximate calculation, setting up the NAF matrix, diagonalizing it, and then performing the NAF transformation to the three-index integral tensor introduces a small overhead of about 25 minutes (or ten percent), which is summarized in the timings for the “MO Ints”. The number of NAFs included in the calculation was 8755 corresponding to a reduction of 37% with respect to the full number of auxiliary basis functions. The speed-up for the entire calculation amounts to 2.3 (3915 minutes vs 1720 minutes) while the speed-up for the calculation of the screened Coulomb interaction alone is 22.3 (2293 minutes vs 103 minutes). These calculations demonstrate that LT-GW is able to provide accurate references for BSE calculations, while drastically reducing the computational demand of the preceding G_0W_0 calculation. As a second test system, we consider stacks of BODIPY dyes, which are of interest in the field of supramolecular polymer design <cit.>. Additionally, supermolecular BODIPY-based compounds are interesting for GW/BSE calculations in particular because alternative (standard) methods for predicting their absorption spectra may either lack the necessary accuracy (e.g. linear response time-dependent density-functional theory, see e.g. Ref. <cit.>) or are simply not feasible for this kind of system size (e.g. coupled cluster-based methodology such as coupled cluster with singles and approximate doubles <cit.> and even local variants thereof <cit.>). In our calculations, we include monomer, dimer, and tetramer geometries (provided by the authors of Ref. <cit.> and displayed in Fig. <ref>) and compare our G_0W_0/BSE-based spectra with experimental ones in Fig. <ref>. For all n-mers, 32 of the lowest-lying excitation energies and corresponding oscillator strengths were determined after calculating 20 of both the lowest-lying virtual and highest-lying occupied QP energies for each monomer in each geometry, that is, 40 for the dimer as well as 80 for the tetramer. Based on the findings of the approximate calculations for the WW-6 test system, we omit G_0W_0 calculations that do not apply any further approximations here. The experimental spectra exhibit three main bands at about 600, 400, and 300 nm. Interestingly, a strong blue shift of, in particular, the energetically lowest-lying absorption band is observed upon aggregation (experimentally induced by lowering the solution temperature). This behavior can most likely be attributed to the corresponding interaction of the transition dipole moments of the monomers in this stacking pattern. Going over to the computed spectra, one finds that the monomer spectrum reproduces the position and intensity of the experimental bands with a high degree of accuracy (given a constant shift of the absorption spectrum of 0.48 eV). 
It can further be seen that the blue shift of the lowest-lying absorption band of the dimer compares well with the experimental one. The computed tetramer spectrum exhibits a blue shift far exceeding the experimental one. This is most likely due to a combination of different factors. On the one hand, the experimental spectrum is a combination of several different aggregates of varying sizes and particular arrangements. On the other hand, the tetramer geometry was obtained by stacking two dimers on top of each other followed by a reoptimization. As a result, the distance between the inner two monomers is smaller than the distance between the outer pairs which could lead to an overestimation of the excitonic couplings leading to the blue shift. The GW calculation (screened Coulomb interaction W) took 6, 70, and 813 minutes for the monomer, dimer, and tetramer, respectively.   CONCLUSION – We have presented the LT-GW method, for which we numerically demonstrated that it follows our three main objectives: (a) a small prefactor, (b) minimal effort for adaptation in existing AC-GW codes, and (c) significant performance improvements (up to 22-fold) for a wide range of system sizes with controllable error. For this, LT-GW combines the GW approximation in the context of the analytic continuation (AC) approach with a Laplace transformation (LT), natural auxiliary functions (NAFs), and the frozen-core (FC) approximation. We have highlighted its synergy with the BSE for calculations of excitation energy and properties for extended systems consisting of up to 7412 basis functions. We are convinced that the LT-GW method constitutes a practical and widely applicable extension to existing GW implementations for molecular systems. In the LT-G_0W_0/BSE calculations, we have shown that the computational time is now dominated by the BSE calculation. Based on our three guiding principles, we aim to achieve similar improvements also for the BSE in the future by making use of, for example, minimal auxiliary basis sets <cit.> or simplified integrals <cit.>.   Computational details, additional analysis of quasi-particle energies for atoms and molecules from the GW100 benchmark set as well as non-logarithmic wall-clock-timings and the speed-up plot of the water clusters can be found in the Supporting Information. J.T. gratefully acknowledges funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through DFG-495279997. N.N. and J.N. gratefully acknowledge funding by the DFG through SFB 1459 (Project A03, Project-ID A03-433682494). We would like to thank Christian Mück-Lichtenfeld for providing the monomer, dimer, and tetramer BODIPY geometries originally presented in Ref. <cit.>. We would like to thank Alexander Rödle and Gustavo Fernández for providing the raw data of the experimental absorption spectra originally presented in Ref. <cit.>. The data supporting the findings of this study are available either within the supplementary material or upon reasonable request from the authors.
http://arxiv.org/abs/2307.03980v1
20230708140837
Building and Road Segmentation Using EffUNet and Transfer Learning Approach
[ "Sahil Gangurde" ]
cs.CV
[ "cs.CV", "cs.LG", "eess.IV" ]
Building and Road Segmentation Using EffUNet and Transfer Learning Approach Sahil Gangurde ABV-Indian Institute of Information Technology & Management, Gwalior, India [email protected] In a city, information about urban objects such as the water supply, railway lines, power lines, buildings, and roads is necessary for city planning. In particular, policymakers need information about the spread, location, and capacity of these objects to make impactful decisions. This paper aims to segment buildings and roads from aerial images captured by satellites and UAVs. Many different architectures have been proposed for the semantic segmentation task, UNet being one of them. In this paper, we propose a novel architecture based on Google's newly proposed EfficientNetV2 as an encoder for feature extraction, combined with a UNet decoder for constructing the segmentation map. Using this approach, we achieved benchmark scores on the Massachusetts Building and Road datasets with an mIoU of 0.8365 and 0.9153, respectively. segmentation, urban planning, state-of-the-art, mask, road, building § INTRODUCTION With increasing population, city areas will grow, and the road and building networks will become congested and intertwined. It will be difficult for humans to look at aerial views of a scene and create proper layouts of the roads and buildings. Land-cover segmentation has been studied for a very long time. The area of unmanned aerial vehicles (UAVs) has attracted significant attention in recent years, particularly in research and industry. As unmanned aerial vehicles become more commercially successful, aerial photographs provide a new and intriguing research avenue. Integrating drones with computer vision is a unique and demanding notion that allows unmanned aerial vehicles to understand the overflown region. The process of aerial image interpretation entails inspecting aerial images for the express goal of detecting numerous distinguishing qualities of the objects of interest. Several stages are required to acquire complete scene comprehension from an aerial photograph. Given a picture, a segmentation phase is used to separate the scene into sections of specific categories (such as residential areas, flood, woodland, roads, and so on), essentially viewing the entire environment as a fully connected region with all categories interacting with each other. Semantic segmentation is the process of partitioning different parts of an image into predefined classes. It helps identify the different labels in the image and pinpoint their exact extent. Various problems in medical imagery, satellite imagery, and urban planning can be solved by automating the detection and segmentation of the objects associated with the corresponding domain. The ability to recognize various objects in UAV images, such as railway lines, water bodies, forests, and other categories, could be beneficial in multiple applications, including creating and maintaining city maps, improving urban planning, tracking environmental changes, and disaster relief. Our study focuses on creating effective ways of recognizing buildings in top-down aerial photos and establishing an efficient automatic system capable of identifying individual structures. In this paper, segmentation of aerial images is performed to extract building masks.
Then the project explores road segmentation and can be further extended to other classes. § RELATED WORK Road network segmentation from SAR images using FCNs was performed in <cit.>. The authors evaluated three models: FCN-8s, VGG19 with UNet, and DeepLabv3+. The work relies on comparatively weak backbone models along with UNet <cit.>, which is a major drawback, and it achieved low accuracy on their custom dataset. Stacking two UNets to generate the output mask was proposed in <cit.>. The input image is first divided into blocks of 224x224 pixels on which the two UNets are trained, and the patches are then reassembled into the full segmented mask. Though it gave promising results on the Massachusetts Building Dataset and Inria's Aerial dataset, splitting the image into patches and then reconstructing it again is computationally expensive. EU-Net was proposed for building segmentation in <cit.>. The paper uses a dense spatial pyramid pooling (DSPP) structure after the encoder network to improve multi-scale feature extraction, and the decoder is a UNet architecture. The DSPP block achieved better results than a plain UNet in segmenting buildings of different sizes. The use of an attention mechanism in the encoder to extract features was proposed in <cit.>. The encoder produces the segmentation mask and passes it to an edge detection block that refines road edges. This hybrid encoder mechanism provided very high accuracy for the road segmentation task. An attention mechanism in the encoder was also used in <cit.>. The proposed model employs a hybrid encoder separated into two parts: the first harvests full-resolution features, while the second creates a high-resolution feature encoding. The second part employs max-pooling layers to enlarge the network's receptive field, providing the network with adequate context information. Before the features from both parts are combined, a 2D activation map is constructed for each, letting the network choose how much attention to devote to the features from each encoder stage. This helped the segmentation of large roadways and produced fine-edged segmentation masks. A novel vision transformer network was used for building segmentation in <cit.>. The transformer captures global and spatial detail contexts simultaneously using a dual-path structure for accurate building segmentation. The disadvantage of this approach is that, to gather the global context, the search window has to be large, which demands very high computational resources. § PROPOSED WORK The problem statement can be formulated as follows: * Develop models to segment buildings and roads from the urban environment and generate the corresponding masks. * Evaluate the models on segmentation metrics and choose the best one. In this paper, we combine state-of-the-art CNN architectures such as ResNet50 <cit.> and variants of EfficientNet <cit.> as encoders for the UNet architecture and train them on the Massachusetts datasets <cit.>. This generates the masks of roads and buildings so that both can be identified from the original image. Figure <ref> shows the complete sequence of steps involved in achieving this goal. § DATASET The datasets used in this project are the Massachusetts Road dataset and the Massachusetts Building dataset <cit.>. Both datasets consist of aerial views of the city of Boston together with the corresponding segmented masks of roads and buildings.
§.§ Building Dataset The building dataset used is the Massachusetts Buildings Dataset. It includes a total of 151 aerial images of the Boston region. A single image has dimensions of 1500 x 1500 pixels and covers an approximate area of 2.25 sq km of land, so the whole dataset spans roughly 340 sq km. The dataset has been split into three parts: * Training Data: 137 images * Validation Data: 4 images * Test Data: 10 images The segmentation masks are created from the building footprints of the OpenStreetMap project. The dataset covers urban and suburban parts of Boston, and the building labels include houses, garages, and larger buildings of various sizes. The images are made available by the Massachusetts government. The automatically generated segmentation masks were further hand-corrected for higher accuracy during model training. Figure <ref> shows sample images and their masks from the building dataset. §.§ Road Dataset The Massachusetts Roads Dataset contains 1171 aerial photos. Each picture has a resolution of 1500x1500 pixels, and a single photo covers an area of 2.25 sq km. The images were randomly divided into three sets: * Training Data: 1108 images * Validation Data: 14 images * Test Data: 49 images The dataset spans over 2600 square kilometers and includes many urban, suburban, and rural areas; the test set alone covers 110 square kilometers. The segmentation masks are created from the road centerline footprints of the OpenStreetMap project, with each centerline given a thickness of 7 pixels. All pictures are rescaled to a resolution of 1 pixel per square metre. Figure <ref> shows sample images and their masks from the road dataset. § MECHANISM/ALGORITHMS We train on the above datasets using the following models as encoders for the UNet. The encoder's general function is to extract the features present in the image; the decoder then reconstructs the mask for the input from these features. §.§ Encoder-Decoder Architecture The encoder is a CNN that extracts features from the image. It downsamples the image and reduces the feature map resolution so that it captures the high-level details of the original image, a practice followed by many earlier models such as ResNet <cit.>. Because the final feature map of the encoder has a reduced size, it is challenging to create a segmentation map from it directly. A decoder network therefore consists of a set of layers that upsample the feature maps extracted from the encoder network to recover the spatial information, as illustrated in Figure <ref>. §.§ UNet UNet was created for biomedical image segmentation <cit.>. The UNet has two parts, an encoder and a decoder. The encoder extracts the features from the input image, and the decoder achieves precise localization using transposed convolutions. The encoder consists only of convolutional and max pooling layers. Although UNet was mainly developed for medical image segmentation, for our task we use this architecture together with other encoders to produce the segmentation mask. §.§ EfficientNet EfficientNet was introduced with a compound scaling strategy for convolutional networks. Figure <ref> displays the architecture of EfficientNet <cit.>. As shown by the ResNet <cit.> architecture, accuracy tends to increase as the depth of the network increases.
But at some point the accuracy of the network can no longer be increased because of the vanishing gradient problem. To address this, scaling must be performed along all dimensions, i.e., depth, width and resolution. EfficientNet introduced a new method called 'compound scaling' through which each of these parameters is scaled by a factor ϕ. The scaling rules are given in <ref>. depth(d) = α^ϕ width(w) = β^ϕ resolution(r) = γ^ϕ such that αβ^2γ^2 ≈ 2 (so the total computation grows roughly as 2^ϕ), where α≥1, β≥1 and γ≥1. With ϕ = 1, a grid search gave the values α=1.2, β=1.1 and γ=1.15. Keeping these values constant, the factor ϕ can then be varied to obtain the scaled models from EfficientNetB1 up to EfficientNetB7. §.§ EfficientNetV2 Compound scaling in EfficientNet scales all the parameters of the model by the factor ϕ. Scaling every dimension in lockstep is not always necessary, so EfficientNet gives little control over the individual model parameters. Moreover, in EfficientNet the batch size must be decreased as the image size increases, and larger images need more time to compute the features. EfficientNet also uses the MBConv layer, which relies on depthwise convolution, an expensive operation. The motivation behind EfficientNetV2 <cit.> was to create a CNN model that increases accuracy (A) while decreasing the training step time (S) and keeping the number of parameters (P) small; in effect, max(A) while min(S^w, P^v), where w and v are experimentally determined. To obtain a model with fewer parameters and less training time, NAS was used with the above objective function. To reduce the depthwise convolution time, the Fused-MBConv block was proposed: instead of performing a depthwise convolution, a regular convolution with a 3x3 filter is performed. Removing the depthwise convolution reduces the computation cost and yields faster models. Figure <ref> shows the MBConv operation. § TECHNOLOGIES USED FOR IMPLEMENTATION Solving the segmentation task involves various deep learning libraries, matrix manipulation libraries, image processing libraries, and plotting libraries. Table <ref> shows the different libraries, frameworks, and other technologies used in this project. Most of the training runs were done in the Kaggle environment. Kaggle environments are backed by Google Cloud, which provides free computation power for running ML tasks. § DATA PROCESSING The following steps are involved in data processing: * One-hot encoding: For all the images, perform one-hot encoding. One-hot encoding converts the pixel values of the mask into per-class channels for the classes we want the image to be segmented into. Figure <ref> shows the original image, the real mask, and the constructed one-hot encoded mask of the image. * Augmentation: Perform random horizontal flips, vertical flips and 90-degree rotations on the images and their corresponding masks. * Padding: The encoder models are implemented such that padding is added to arbitrary input sizes to match the input size expected by the various encoders. * Dataset loader: Create a data loader for model training with the image as input and the one-hot encoded mask as the label. § RESULTS AND DISCUSSION §.§ System Configuration All the models are trained on Kaggle with a Google Cloud backbone. Table <ref> shows the system parameters of the environment under which the models are trained.
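Before turning to the evaluation metrics, the sketch below makes the encoder–decoder setup described above concrete: a UNet decoder attached to an ImageNet-pre-trained EfficientNet encoder, trained with a Dice loss on one-hot-encoded masks. The paper does not state which implementation was used; the `segmentation_models_pytorch` package, the encoder identifier, the two-channel (building/background) mask layout, the optimizer, learning rate, and crop size shown here are assumptions made purely for illustration.

```python
# Illustrative sketch only: library choice, encoder name, and hyperparameters
# are assumptions, not details taken from the paper.
import torch
import segmentation_models_pytorch as smp

# UNet decoder with an ImageNet-pre-trained EfficientNet encoder (transfer learning).
# Other encoders discussed in the paper (e.g. "resnet50") can be swapped in the same way.
model = smp.Unet(
    encoder_name="efficientnet-b7",   # assumed encoder identifier
    encoder_weights="imagenet",       # transfer learning from ImageNet
    in_channels=3,                    # RGB aerial image
    classes=2,                        # e.g. building vs. background (one-hot mask channels)
)

loss_fn = smp.losses.DiceLoss(mode="multilabel")               # Dice loss as in the paper
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)      # assumed optimizer / learning rate

# One hypothetical training step on small random crops; the full 1500x1500 tiles
# would first be padded to a multiple of 32 (e.g. 1504) as described in the padding step.
images = torch.randn(2, 3, 512, 512)
masks = torch.randint(0, 2, (2, 2, 512, 512)).float()          # one-hot encoded target masks

optimizer.zero_grad()
pred = model(images)        # logits with shape (N, classes, H, W)
loss = loss_fn(pred, masks)
loss.backward()
optimizer.step()
```

In such a setup the random flips and rotations from the augmentation step would be applied jointly to each image and its mask before batching, so the label stays aligned with the input.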
§ EVALUATION METRICS The models are evaluated using the following metrics, chiefly Intersection over Union (IOU) and the F1 score; a short implementation sketch of the main metrics is given at the end of this section. * Intersection Over Union (IOU) - Intersection over Union, also known as the Jaccard index, is used to calculate the percentage of overlap between the true mask and the predicted output mask. IOU = |y ∩ y^'| / |y ∪ y^'| The intersection consists of the pixels found in both the true mask and the predicted mask, and the union consists of the pixels contained in the true mask as well as the predicted mask. Equation <ref> shows the formula for the IOU calculation. * F1 Score - The F1 score, or Dice coefficient, measures the overlap of two masks. The value of the Dice coefficient lies between 0 and 1 inclusive, where 1 denotes perfect overlap and 0 represents no overlap. Equation <ref> shows the formula for the Dice coefficient. Dice Coefficient = 2|y ∩ y^'| / (|y| + |y^'|) The loss function for the neural network to minimise is the 'Dice Loss' shown in <ref>. Dice Loss = 1 - 2∑_pixels y y^' / (∑_pixels y^2 + ∑_pixels y^'2) * Accuracy - Accuracy is defined as the ratio of the number of pixels that are correctly classified, both as the segmented class and as background, to the total number of pixels. In terms of pixel-level true/false positives and negatives, accuracy is defined as in equation <ref>. Accuracy = (TP + TN) / (TP + TN + FP + FN) * Precision - Precision measures the purity of the positive detections relative to the ground truth: TP counts detections whose overlap with the ground truth is above the threshold, while FP counts those below it. Precision = TP / (TP + FP) * Recall - Recall measures the completeness of the positive predictions with respect to the ground truth labels. Equation <ref> gives the formula for recall. Recall = TP / (TP + FN) §.§ Building Segmentation The goal of this experiment is to detect the building mask from the input aerial image. Five different encoders are tested with UNet, and the experiments are carried out by training the models on the dataset and computing the IOU score and Dice loss. The parameters are the same for all models; Table <ref> shows the detailed configuration. §.§ Road Segmentation The goal of this experiment is to detect the road mask from the input aerial image. Similar to the building segmentation task, five different encoders are tested with UNet, and the experiments are carried out by training the models on the dataset and computing the IOU score and Dice loss. The parameters for the road segmentation task are given in Table <ref> and are common to all models in this experiment. §.§ Results and discussion The models are tested on the test data, and the results obtained are shown in Tables <ref> and <ref> for building and road segmentation, respectively. The scores written in bold represent the best score achieved under a particular metric. §.§ Benchmarks The results derived from the experiments outperform the benchmark scores for both datasets. Table <ref> lists recent papers on the building dataset, among which the best accuracy is achieved by the model presented in this paper. Likewise, Table <ref> compares existing models with respect to mIOU and mDice on the road dataset. The models presented in this paper set new benchmark scores for the Massachusetts datasets.
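The implementation sketch referred to above is given here. It is a minimal rendering of the IOU, Dice coefficient, and Dice loss formulas defined at the start of this section; the 0.5 binarization threshold and the small smoothing constant are illustrative assumptions rather than values taken from the paper.

```python
# Minimal sketch of the evaluation metrics defined above.
# Assumed details: 0.5 threshold for binarizing predictions, epsilon for stability.
import numpy as np

def iou_score(y_true: np.ndarray, y_pred: np.ndarray, eps: float = 1e-7) -> float:
    """Intersection over Union (Jaccard index) for binary masks."""
    y_true, y_pred = y_true.astype(bool), y_pred.astype(bool)
    intersection = np.logical_and(y_true, y_pred).sum()
    union = np.logical_or(y_true, y_pred).sum()
    return float(intersection / (union + eps))

def dice_coefficient(y_true: np.ndarray, y_pred: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient / F1 score: 2|y ∩ y'| / (|y| + |y'|)."""
    y_true, y_pred = y_true.astype(bool), y_pred.astype(bool)
    intersection = np.logical_and(y_true, y_pred).sum()
    return float(2.0 * intersection / (y_true.sum() + y_pred.sum() + eps))

def dice_loss(y_true: np.ndarray, y_prob: np.ndarray, eps: float = 1e-7) -> float:
    """Soft Dice loss on predicted probabilities, as minimised during training."""
    num = 2.0 * np.sum(y_true * y_prob)
    den = np.sum(y_true ** 2) + np.sum(y_prob ** 2) + eps
    return float(1.0 - num / den)

# Hypothetical usage with a predicted probability map and a ground-truth mask.
y_true = np.random.randint(0, 2, (1500, 1500))
y_prob = np.random.rand(1500, 1500)
y_pred = (y_prob > 0.5).astype(int)   # assumed binarization threshold
print(iou_score(y_true, y_pred), dice_coefficient(y_true, y_pred), dice_loss(y_true, y_prob))
```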
§ LIMITATIONS AND FUTURE SCOPE The size of the input image is a major challenge for UAV-based segmentation. Very high GPU memory is required to load the model with weights for images of larger dimensions. Standard images consist of 3 channels, but satellite images can contain more than three channels; in that case, the UNet architecture must be adapted to accept the extra channels. In this paper, only roads and buildings are segmented as part of urban object segmentation. Aerial images from different cities can be collected, and masks for additional classes such as manholes, power lines, and railway tracks can be created to expand the set of segmentation classes and allow more objects to be segmented in the urban environment. Attention mechanisms should also be explored on the EfficientNet+UNet architecture to improve the accuracy further. § CONCLUSION Based on the experiments, we conclude that for building and road segmentation, a UNet architecture with a pre-trained encoder is the most effective choice among those tested. With transfer learning, the training time and GPU cost are reduced, and the accuracy of the models is very high. The research gaps discussed regarding transfer learning are addressed by using models pre-trained on the ImageNet dataset. This work presents new benchmark scores for the Massachusetts Building and Road datasets: for the building segmentation task, EfficientNetV2L+UNet achieved an IOU of 0.8365, and for the road segmentation task, EfficientNetB7+UNet gave an IOU of 0.9153.
http://arxiv.org/abs/2307.04306v1
20230710020559
The Category of reduced imaginary Verma modules
[ "Juan Camilo Arias", "Vyacheslav Futorny", "André de Oliveira" ]
math.RT
[ "math.RT" ]
Institute of Mathematics and Statistics, University of São Paulo, São Paulo, BRAZIL. [email protected] Shenzhen International Center for Mathematics, Southern University of Science and Technology, China and Institute of Mathematics and Statistics, University of São Paulo, São Paulo, BRAZIL. [email protected] Institute of Mathematics and Statistics, University of São Paulo, São Paulo, BRAZIL. [email protected] [2020]Primary 17B10, 17B67, 17B22 The category of reduced imaginary Verma modules Juan Camilo Arias, Vyacheslav Futorny and André de Oliveira August 12, 2023 =============================================================== For an arbitrary affine Lie algebra we study an analog of the category 𝒪 for the natural Borel subalgebra and zero central charge. We show that such category is semisimple having the reduced imaginary Verma modules as its simple objects. This generalizes the result of Cox, Futorny, Misra in the case of affine sl_2. § INTRODUCTION Let A=(a_ij)_0≤ i,j≤ N be a generalized affine Cartan matrix over with associated affine Lie algebra and Cartan subalgebra . Let Π ={α_0, α_1, ⋯, α_N} be the set of simple roots, δ the indivisible imaginary root and Δ the root system of . A subset S⊆Δ is a closed partition if for any α, β∈ S and α + β∈Δ then α + β∈ S, Δ = S ∪ (-S) and S∩ (-S) = ∅. The classification of closed partitions for root system of affine Lie algebras was obtained by H. Jakobsen and V. Kac in <cit.> and <cit.> and independently by V. Futorny in <cit.> and <cit.>. They show that closed partitions are parameterized by subsets X⊆Π and that (contrary to what happens in the finite case) there exists a finite number (greater than 1) of inequivalent Weyl group orbits of closed partitions. When X=Π we get that S=Δ_+ and we can developed the standard theory of Verma modules, but in the case X⊊Π we obtain new Verma-type modules called non-standard Verma modules. The theory of non-standard Verma modules was initiated by V. Futorny in <cit.> (see also <cit.>) in the case X=∅ and continued by B. Cox in <cit.> for arbitrary X⊊Π. The case X=∅ give rise to the natural Borel subalgebra associated to the natural partition Δ_nat = {α + nδ | α∈Δ_0,+ , n ∈}∪{ kδ | k ∈ℤ_>0}. The Verma module M(λ), of highest weight λ, induced by the natural Borel subalgebra is called imaginary Verma module for , when it is not irreducible it has an irreducible quotient called reduced imaginary Verma module. Unlike the standard Verma modules, imaginary Verma modules contain both finite and infinite dimensional weight spaces. Similar results hold for more general non-standard Verma modules. In <cit.>, while studying crystal bases for reduced imaginary Verma modules of ŝl̂_̂2̂, it was consider a suitable category of modules, denoted O_red,im, with the properties that any module in this category is a reduced imaginary Verma module or it is a direct sum of these modules. In this paper, by appropriate modifications we first define a category O_red,im for any affine Lie algebra and we show that all irreducible modules in this category are reduced imaginary Verma modules and, moreover, that any arbitrary module in O_red,im is a direct sum of reduced imaginary Verma modules. It should be noted that the results presented in this paper hold for both untwisted and twisted affine Lie algebras. The paper is organized as follows. In Sections 2 and 3, we define, set the notations and summarize the basic results for affine algebras, closed partitions and imaginary Verma modules. 
In section 4 we introduce the category O_red,im and present some of its properties. Finally, in section 5 we present the main results of this paper. § PRELIMINARIES In this section we fixed some notation and the preliminaries about affine algebras and root datum are set up. §.§ Affine algebras Let A=(a_ij)_0≤ i,j≤ N be a generalized affine Cartan matrix over with associated affine Lie algebra . Let D=diag(d_0, …, d_N) be a diagonal matrix with relatively primes integer entries such that DA is symmetric. The Lie algebra has a Chevalley-Serre presentation given by generators e_i, f_i, h_i for 0≤ i ≤ N and d which are subject to the defining relations: [h_i,h_j]=0 [d,h_i]=0 [h_i,e_j]=a_ije_j [h_i,f_j]=-a_ijf_j [e_i,f_j]=δ_i,jh_i [d,e_i]=δ_0,ie_i [d,f_i]=-δ_0,if_i ( e_i)^1-a_ij(e_j)=0 ( f_i)^1-a_ij(f_j)=0 Let be the Cartan subalgebra of which is the span of {h_0, …, h_N,d}. Recall that affine Lie algebras are classified into two classes: untwisted and twisted, see <cit.>. In the untwisted case, has a natural realization known as loop space realization which is defined by = ⊗[t,t^-1]⊕ c ⊕ d where is the simple finite dimensional Lie algebra with Cartan matrix (a_ij)_1≤ i,j ≤ N, c is a central element, d is a degree derivation such that [d,x⊗ t^n]=nx⊗ t^n for any x∈ and n∈ and we have [x⊗ t^n, y⊗ t^m] = [x,y]⊗ t^n+m + δ_n,-mn(x|y)c for all x,y∈, n,m∈ where (-|-) is a symmetric invariant bilinear form on . On the other hand, twisted affine Lie algebras are described as fixed points of automorphisms of untwisted algebras. Concretely, let μ̃ be an automorphism of order r=2 or r=3 of the Coxeter-Dynkin diagram of and let μ be the corresponding diagram automorphism of . Then μ can be extended to an automorphism μ on = ⊗[t,t^-1]⊕ c ⊕ d defined as μ(x ⊗ t^m) = (-1)^m (μ(x) ⊗ t^m), for x ∈, m ∈ℤ, μ(c) = c, μ(d) = d and extended by linearity. The twisted affine Lie algebra ()^μ is the subalgebra of fixed points of μ. For example, when r = 2, ()^μ = (∑_m ∈ℤμ_0⊗ t^2m) ⊕(∑_m ∈ℤμ_1⊗ t^2m+1) ⊕ℂc ⊗ℂd where μ_0 = {x ∈ | μ(x) = x} and μ_1 = {x ∈ | μ(x) = -x} (see <cit.>). §.§ Root datum and closed partitions Let I_0 = {1, …, N} and Δ_0 be the root system of with θ being the longest positive root. We denote by Q_0 and P_0 the root and weight lattices of . Let I={0,1,…, N}, Δ the root system of with simple roots Π={α_0, α_1, …, α_N} and let δ=α_0+θ be the indivisible imaginary root. Q denotes the root lattice, P the weight lattice, and Q̌, P̌ denotes the coroot and coweight lattices, respectively. Δ^re and Δ^im denotes the real and the imaginary sets of roots for Δ. A subset S of Δ is said to be closed if whenever α, β∈ S and α + β∈Δ then α + β∈ S. We also say that S is a closed partition if S is closed, Δ = S ∪ (-S) and S∩ (-S) = ∅. Closed partitions were classified in <cit.> and <cit.> (see also <cit.> and <cit.>). For an untwisted affine Lie algebra , there are two interesting closed partitions of the root system Δ, the standard partition and the natural partition, which give rise to two distinct Borel subalgebras that are not conjugate. The standard partition is defined by Δ_st = {α + nδ | α∈Δ_0 , n ∈_>0}∪Δ_0,+∪{ kδ | k ∈ℤ_>0} and the natural partition by Δ_nat = {α + nδ | α∈Δ_0,+ , n ∈}∪{ kδ | k ∈ℤ_>0} The respective Borel subalgebras, called standard Borel subalgebra and natural Borel subalgebra, are defined by _̱st = ( ⊗ tℂ[t]) ⊕⊕⊕ℂc ⊕ℂd and _̱nat = ( ⊗ℂ[t,t^-1]) ⊕( ⊗ tℂ[t]) ⊕⊕ℂc ⊕ℂd where n = ⊕_α∈Δ_0,+_α, is the nilpotent Lie subalgebra of the finite Lie algebra . 
As already mentioned above, a twisted affine algebra is a fixed point set in of a non-trivial symmetry of Chevalley generators and, in this case, _̱nat is the intersection of the fixed point set with the natural Borel subalgebra of . For more details see <cit.>. In this paper, we are going to work with the natural partition of the root system Δ_nat. § IMAGINARY VERMA MODULES Let S be a closed partition of the root system Δ. Let be the untwisted affine Lie algebra which has, with respect to the partition S, the triangular decomposition =_S⊕⊕_-S, where _S = ⊕_α∈ S_α and = ⊕ℂc⊕ℂd is an affine Cartan subalgebra. Let U(_S) and U(_-S) be, respectively, the universal enveloping algebras of _S and _-S. Let λ∈ P. A weight U()-module V is called an S-highest weight module with highest weight λ if there is some non-zero vector v∈ V such that: * u· v = 0 for all u ∈_S. * h · v = λ(h)v for all h ∈. * V=U()· v ≅ U(_-S)· v. In what follows, let us consider S to be the natural closed partition of Δ, i.e., S=Δ_nat and so b_nat = _Δ_nat⊕ĥ. We make into a 1-dimensional U(b_nat)-module by picking a generating vector v and setting (x+h)· v = λ(h)v, for all x∈_Δ_nat and h∈. The induced module M(λ) = U()⊗_U(b_nat) v ≅ U(_-Δ_nat)⊗ v is called an imaginary Verma module with Δ_nat-highest weight λ. Equivalently, we can define M(λ) as follows: Let I_Δ_nat(λ) the ideal of U() generated by e_ik:= e_i⊗ t^k, h_il:= h_i⊗ t^l for i∈ I_0, k∈, l ∈ℤ_>0, and by h_i - λ(h_i)· 1, d-λ(d)· 1 and c-λ(c)· 1. Then M(λ) = U()/I_Δ_nat(λ). The main properties of this modules, which hold for any affine Lie algebra, were proved in <cit.> (see also <cit.> for more properties on this modules), we summarize them in the following. Let λ∈ P and let M(λ) be the imaginary Verma module of Δ_nat-highest weight λ. Then M(λ) has the following properties: * The module M(λ) is a free U(_-Δ_nat)-module of rank 1 generated by the Δ_nat-highest weight vector 1⊗ 1 of weight λ. * M(λ) has a unique maximal submodule. * Let V be a U()-module generated by some Δ_nat-highest weight vector v of weight λ. Then there exists a unique surjective homomorphism ϕ: M(λ) → V such that 1⊗ 1 ↦ v. * M(λ)_λ = 1. For any μ=λ-kδ, k∈_>0, 0< M(λ)_μ < ∞. If μ≠λ - kδ for any integer k≥ 0 and M(λ)_μ≠ 0, then M(λ)_μ = ∞. * Let λ, μ∈^*. Any non-zero element of _U()(M(λ), M(μ)) is injective. * The module M(λ) is irreducible if and only if λ(c)≠ 0. Suppose now that λ(c)=0 and consider the ideal J_Δ_nat(λ) generated by I_Δ_nat(λ) and h_il, i∈ I_0 and l∈∖{0}. Set M̃(λ) = U()/J_Δ_nat(λ) Then M̃(λ) is a homomorphic image of M(λ) which we call reduced imaginary Verma module. The following is proved in <cit.>, Theorem 1. M̃(λ) is irreducible if and only if λ(h_i)≠ 0 for all i∈ I_0. § THE CATEGORY O_RED,IM Consider the Heisenberg subalgebra G which by definition is G= ⊕_k∈∖{0}_kδ⊕ c We will say that a -module V is G-compatible if: (i) V has a decomposition V=T(V)⊕ TF(V) where T(V) and TF(V) are non-zero G-modules, called, respectively, torsion and torsion free module associated to V. (ii) h_im for i∈ I_0, m∈∖{0} acts bijectively on TF(V), i.e., they are bijections on TF(V). (iii) TF(V) has no non-zero -submodules. (iv) G· T(V)=0. Consider the set ^*_red = {λ∈^* | λ(c)=0, λ(h_i)∉_≥ 0 i∈ I_0 } We define the category O_red,im as the category whose objects are -modules M such that * M is ^*_red-diagonalizable, that means, M = ⊕_ν∈^*_red M_ν, M_ν = { m∈ M | h_im=ν(h_i)m, dm = ν(d)m, i∈ I_0 } * For any i∈ I_0 and any n∈, e_in acts locally nilpotently. * M is G-compatible. 
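For orientation, the definitions and the irreducibility criterion just quoted can be made explicit in the smallest case, the affine sl_2 setting of Cox, Futorny and Misra mentioned in the introduction. The block below is only a specialization of the statements above, not a new result.

```latex
% Specialization to affine sl_2, stated only as an illustration of the
% general definitions and of the irreducibility criterion above.
\[
\widehat{\mathfrak{sl}}_2 \;=\; \mathfrak{sl}_2 \otimes \mathbb{C}[t,t^{-1}]
\,\oplus\, \mathbb{C}c \,\oplus\, \mathbb{C}d,
\qquad I_0=\{1\},
\qquad
\Delta_{\mathrm{nat}} \;=\; \{\alpha + n\delta \mid n \in \mathbb{Z}\}
\,\cup\, \{k\delta \mid k \in \mathbb{Z}_{>0}\},
\]
\[
\mathfrak{b}_{\mathrm{nat}} \;=\;
\big(\mathbb{C}e \otimes \mathbb{C}[t,t^{-1}]\big)
\,\oplus\, \big(\mathbb{C}h \otimes t\,\mathbb{C}[t]\big)
\,\oplus\, \hat{\mathfrak{h}},
\]
% so, for lambda with lambda(c)=0, the reduced imaginary Verma module
% \tilde{M}(\lambda) is irreducible if and only if \lambda(h_1) \neq 0,
% which is exactly the affine sl_2 case treated in the earlier work.
```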
* The morphisms between modules are -homomorphisms Reduced imaginary Verma modules belongs to O_red,im. Indeed, for M̃(λ) consider T(M̃(λ)) = v_λ and TF(V) = ⊕_k∈, n_1, …, n_N∈_≥0M̃(λ)_λ+kδ - n_1α_1 - … -n_Nα_N, and at least one n_j≠ 0. Moreover, direct sums of reduced imaginary Verma modules belongs to O_red,im. Recall that a loop module for is any representation of the form M̂ := M ⊗ℂ[t,t^-1] where M is a 𝔤-module and the action of on M̂ is given by (x ⊗ t^k)(m ⊗ t^l) := (x · m) ⊗ t^k+l , c(m ⊗ t^l) = 0 for x ∈𝔤, m ∈ M and k,l ∈ℤ. Here x · m is the action of x ∈𝔤 on m ∈ M. Let M is a 𝔤-module in the BGG category 𝒪. Then the loop module M̂ can not lie in O_red,im. Let M ∈𝒪 and let M̂ be its associated loop module. If M is finite dimensional, it is a direct sum of finite dimensional irreducible -modules, and these have highest weights which are non-negative integers when evaluated in h_i for any i∈ I_0. So, condition (1) is not satisfied and M̂ does not belongs to O_red,im. Assume now that M is an infinite dimensional -module. Note that condition (2) is satisfied as acts locally nilpotently on M. If condition (1) does not hold, we are done. Suppose that (1) holds and that M̂ is G-compatible. We have M̂ = T(M̂) ⊕ TF(M̂) satisfying (i) - (iv) above. Take any nonzero element ∑_i=-k^km_i⊗ t^i∈ T(M̂) with m_i∈ M_μ for some weight μ̅∈^*_red. Then by (iv) we have 0 = (h_j⊗ t^r)(∑_i=-k^km_i⊗ t^i) = ∑_i=-k^k(h_j· m_i) ⊗ t^i+r = μ̅(h_j)(∑_i=-k^km_i⊗ t^i+r) where j ∈ I_0, r ∈ℤ∖{0}. Hence μ̅(h_j) = 0, for any j ∈ I_0, which contradicts to the fact that μ̅∈^*_red. Then T(M̂) = 0 and M̂ = TF(M̂) which is a -module contradicting (i) and (iii), and thus (3). This completes the proof. § MAIN RESULTS In this section we will show that the category O_red,im is a semisimple category having reduced imaginary Verma modules as its simple objects. First we will show that reduced imaginary Verma modules have no nontrivial extensions in O_red,im. If λ,μ∈^*_red then _O_red,im^1(M̃(λ), M̃(μ)) = 0. Let M be an extension of M̃(λ) and M̃(μ) that fits in the following short exact sequence 0 [r] M̃(λ) [r]^ι M [r]^π M̃(μ) [r] 0 Suppose μ = λ +kδ- ∑_i=1^N s_iα_i, for s_i∈ and k∈, and all s_i's have the same sign or equal to 0. First, consider the case when s_i=0 for all i∈ I_0. Then μ = λ + kδ and so, in M there will be two vectors v_λ and v_μ of weights λ and μ respectively, annihilated by ⊗[t, t^-1]. Moreover, because of the condition (iv) in the definition of G-compatibility, these two points are isolated. So, v_λ and v_μ are highest weight vectors, each of which generates an irreducible subrepresentation (isomorphic to M̃(λ) and M̃(μ) respectively), and the extension splits. Hence, we can assume that not all s_i are equal to zero and that the map ι: M̃(λ) → M in the short exact sequence is an inclusion. Assume that s_i∈_≥ 0 for all i. Let v_μ∈ M be a preimage under the map π of a highest weight vector v_μ∈M̃(μ) of weight μ. We have (⊗[t, t^-1])v_μ=Gv_μ=0, and we are going to show that Gv_μ=0. Assume that v_μ∉ T(M). Then we claim that T(M)= v_λ. Indeed, we have v_λ⊂ T(M). If u∈ T(M)∖ v_λ is some nonzero weight element, then G· u=0 and π(u) belongs to T(M̃(μ))= v_μ. If π(u)=0 then u∈M̃(λ) which is a contradiction. If π(u) is a nonzero multiple of v_μ, then u has weight μ and thus u is a multiple of v_μ which is again a contradiction. So, we assume T(M)= v_λ. Note that for any i∈ I_0 and m∈∖{0} we have π (h_imv_μ)=h_imπ (v_μ)= h_im v_μ = 0. Then h_imv_μ∈M̃(λ). Suppose there exists j∈ I_0 such that h_jmv_μ≠ 0 for m∈∖{0}. 
Because h_jmv_μ∈M̃(λ) and has weight μ+mδ, it belongs to TF(M̃(λ)). Hence, there exists a nonzero v'∈M̃(λ) of weight μ such that h_jmv_μ = h_jm v'. Hence, h_jm (v_μ - v')=0 implying v_μ - v' ∈ T(M) ≅ v_λ. Then v_μ - v' = p v_λ, for some p ∈ℂ. Comparing the weight we arrive to a contradiction. Hence, h_inv_μ=0. So, we get Gv_μ=0. Recall that the operators e_im acts locally nilpotently on M̃(λ). We claim that e_imv_μ=0 for all possible i and n. Indeed, assume that e_jmv_μ≠ 0 for some j∈ I_0 and some integer m. Then e_imv_μ∈M̃(λ). Consider the ŝl̂_2-subalgebra s(j) generated by f_jn, e_jn and h_jl for n,l∈. Let M_j be an s(j)-submodule of M generated by v_μ. Then M_j is an extension of reduced imaginary Verma s(j)-modules, one of which of highest weight μ. Since M∈O_red,im, we immediately see that M_j is an object of the corresponding reduced category O_red,im(s(j)) for s(j). But this category is semisimple by <cit.>. Hence, e_imv_μ=0 for all i and m. Therefore, v_μ generates a 𝔤-submodule of M isomorphic to M̃(μ) and the short exact sequence splits. Assume now that s_i∈_≤ 0 for all i and not all of them are 0. As M̃(μ) is irreducible and M̃(λ) is a 𝔤-submodule of M, the short exact sequence splits completing the proof. Observe that modules M̃(λ) and M̃(λ-kδ) have a nontrivial extension in the category of -modules for any integer k. If M is an irreducible module in the category O_red,im, then M≅M̃(λ) for some λ∈ĥ_red^*. Let M be an irreducible module in O_red,im. As a G-module, M≅ T(M)⊕ TF(M) where both summands are non-zero. Let v∈ T(M) be a non-zero element of weigh λ∈ĥ_red^*. Then h_imv=0 for all i∈ I_0 and all m∈∖{0}. For each i∈ I_0 let p_i ∈_>0 be the minimum possible integer such that e_i0^p_iv=0. If all p_i=1 we have e_i0v=0 and then, because [h_in,e_i0]=2e_in we get that e_inv=0 for all i∈ I_0 and n∈∖{0}. Hence, we have an epimorphism M̃(λ) ↠ M, since λ∈ĥ_red^*, M̃(λ) is simple and so M≅M̃(λ). On the other hand, assume there exists at least one p_i such that p_i>1. We are going to construct a set of elements in M which are killed by e_i0 for all i∈ I_0. First of all, set p^(1) = max{p_i|i∈ I_0} and set w_i:=e_i0^p^(1)-1v. Note that w_i=0 if p^(1)>p_i and w_i≠ 0 if p^(1) = p_i, so at least one w_i in non-zero. If for all j∈ I_0, e_j0w_i=0 we are done, if not there exists numbers p_ij∈_>0 such that e_j0^p_ijw_i=0 and some of the p_ij are strictly bigger than 1. Set p^(2)=max{p_ij | i,j∈ I_0} and set w_ij = e_j0^p^(2)-1 w_i, note that at least one w_ij is non-zero. If e_k0w_ij=0 for all k∈ I_0 we are done, if not we repeat the process. Because of the locally nilpotency of the e_l0 for l∈ I_0, in finitely many steps, let say ℓ steps, we can find at least one non-zero element w_ i, for i = i_1i_2… i_ℓ a string of elements in I_0 such that e_l0w_ i=0. Moreover, if i^- denotes the string i_1i_2… i_ℓ -1, then w_ i = e_i_ℓ0^p^(ℓ)-1w_ i^- and so, for all n∈∖{0}, 0=h_i_ℓne_i_ℓ0^p^(ℓ)w_ i^- = 2p^(ℓ)e_i_ℓnw_ i, i.e., e_i_ℓnw_ i =0. Now, 0 = h_j0e_jme_i_ℓ0^p^(ℓ)w_ i^- = e_jmh_j0e_i_ℓ0^p^(ℓ)w_ i^- + 2e_jme_i_ℓ0^p^(ℓ)w_ i^- = 2p^(ℓ)e_jme_i_ℓ0^p^(ℓ)-1w_ i^- = 2p^(ℓ)e_jmw_ i. Pick one of the non-zero w_ i constructed above and let W_ i = U(G)w_ i be a G-submodule of M. By construction e_lnW_ i=0 for all l∈ I_0 and n∈. Considered the induced module I(W_ i) = _G⊕ H⊕ N_+^ W_ i, where N_+ = ⊕_i∈ I_0, n∈ Z e_in acts by 0, H = ⊕_i∈ I_0 h_i ⊕ d acts by h_iw_ i = μ(h_i)w_ i, dw_ i = μ(d)w_ i, for some weight μ. Because M is simple, it is a quotient of I(W_ i). 
If w_ i∈ T(M), we have W_ i = w_ i, and so M is a quotient of I(W_ i) = M̃(λ) and we are done. In case w_ i∉ T(M), as in the proof of Proposition 6.0.3. of <cit.> we get a contradiction. This completes the proof. If M is an arbitrary object in O_red,im, then M≅⊕_λ_i ∈ĥ^*_redM̃(λ_i), for some λ_i's. Because M is in O_red,im, it is a G-compatible and so, it has a decomposition as a G-module given by M≅ T(M) ⊕ TF(M). Since all the weights of M are in ĥ^*_red, T(M) is not a -submodule of M. Indeed, suppose T(M) is a -module. let v∈ T(M) and consider f_0 v∈ T(M). Then h_0mf_0v=0 and f_mv=0 for any m≠ 0. Applying h_0,-m we get h_0,-mf_m v=0 and f_0 v=0. Since the weight of v is in ĥ^*_red, e_0^p v≠ 0 for any p>0. But if p is sufficiently large the weigh of e_0^p v will not be in ĥ^*_red and we get a contradiction. Let v∈ T(M) non-zero. As in the proof of the previous statement there exists a string i of elements of I_0 and a vector w_ i such that e_jmw_ i =0 for all j∈ I_0 and m∈. Let W_ i = U(G)w_ i. Then we have two possibilities: either w_ i∉ T(M) or w_ i∈ T(M). In the first case, consider the induced module I(W_ i). Clearly TF(I(W_ i))⊆ I(W_ i). Now, if w∈ I(W_ i), because w_ i∉ T(M) we have gw≠ 0 for g∈ G and so w∈ TF(I(W_ i)). Then TF(I(W_ i)) = I(W_ i). By the five lemma, any quotient and subquotient of I(W_ i) also satisfies this property. Set M' := U()w_ i which is a subquotient of I(W_ i). Then M' is a -submodule of M and so M' = TF(M') is a -submodule of TF(M), but TF(M) does not have proper -submodule and so M'=TF(M). But, W_ i is a proper G-submodule of M' which is not possible because M is in O_red,im. And so, this case does not occur. In the second case, W_ i = w_ i⊆ T(M). So, as -modules I(W_ i) ≅M̃(λ_ i) for some λ_ i, is a -submodule of M. Then, any non-zero element of T(M) generates an irreducible reduced imaginary Verma module which is a -submodule of M and because there are no extensions between them, they are direct summands on M. The category O_red,im is closed under taking subquotients and direct sums, so it is a Serre subcategory. The proofs on the above statements depends on the structure of reduced imaginary Verma modules, the closed partition Δ_nat and the associated Borel subalgebra b_nat. But, the properties of reduced imaginary Verma modules hold for both untwisted or twisted affine Lie algebras. Moreover, the natural Borel subalgebra for the twisted Lie algebra is properly contained in the natural Borel subalgebra for the untwisted case. So, the results above hold for any affine Lie algebra. § ACKNOWLEDGEMENT JCA has been support by the FAPESP Grant 2021/13022-9. plain
http://arxiv.org/abs/2307.04199v1
20230709150835
Mid-infrared spectroscopy with a broadly tunable thin-film lithium niobate optical parametric oscillator
[ "Alexander Y. Hwang", "Hubert S. Stokowski", "Taewon Park", "Marc Jankowski", "Timothy P. McKenna", "Carsten Langrock", "Jatadhari Mishra", "Vahid Ansari", "Martin M. Fejer", "Amir H. Safavi-Naeini" ]
physics.optics
[ "physics.optics", "quant-ph" ]
APS/123-QED 1E.L. Ginzton Laboratory, Stanford University, Stanford, CA, 94305, USA 2NTT Research, Inc., Physics & Informatics Laboratories, Sunnyvale, CA, 94085 Mid-infrared spectroscopy, an important and widespread technique for sensing molecules, has encountered barriers stemming from sources either limited in tuning range or excessively bulky for practical field use. We present a compact, efficient, and broadly tunable optical parametric oscillator (OPO) device surmounting these challenges. Leveraging a dispersion-engineered singly-resonant OPO implemented in thin-film lithium niobate-on-sapphire, we achieve broad and controlled tuning over an octave, from 1.5–3.3 µm by combining laser and temperature tuning. The device generates >25 mW of mid-infrared light at 3.2 µm, offering a power conversion efficiency of 15% (45% quantum efficiency). We demonstrate the tuning and performance of the device by successfully measuring the spectra of methane and ammonia, verifying our approach's relevance for gas sensing. Our device signifies an important advance in nonlinear photonics miniaturization and brings practical field applications of high-speed and broadband mid-infrared spectroscopy closer to reality. Mid-infrared spectroscopy with a broadly-tunable thin-film lithium niobate optical parametric oscillator Amir H. Safavi-Naeini1 August 12, 2023 ======================================================================================================== § INTRODUCTION A fundamental technique for sensing is mid-infrared (MIR) spectroscopy, which exploits molecules' strong and distinct absorption responses in the 2–20 µm spectral region. High-sensitivity and high-resolution MIR spectroscopy with coherent sources has rich applications, e.g., in gas <cit.>, chemical reaction <cit.>, and biological <cit.> sensing. Further advancing broadband, field-deployable MIR sources would enable a multitude of applications in areas such as rapid portable health monitoring and wide-coverage greenhouse gas detection. However, currently-available sources still suffer from significant limitations. For instance, compact quantum- and interband- cascade lasers have dramatically improved their output power and efficiency, making them prominent sources for MIR spectroscopy <cit.>. However, material-defined gain bandwidths restrict tuning to hundreds of cm^-1 <cit.>, limiting potential multi-species detection. Meanwhile, optical parametric oscillator (OPO) sources allow efficient conversion of low-noise, wavelength-agile near-IR lasers over extremely broad tuning ranges (often thousands of cm^-1) <cit.>. However, their conventional use of bulk optics creates large footprints, high threshold powers, high cost, and demanding stabilization requirements. These factors limit widespread field applications of OPOs, despite many laboratory spectroscopic studies <cit.>. Because of the limitations of bulk systems, OPO miniaturization has been actively pursued. Well-established systems include integrated weakly-confining waveguide cavities <cit.>, polished crystals <cit.>, and whispering-gallery resonators <cit.>. Moreover, recent nanofabrication breakthroughs have led to on-chip planar nanophotonic circuits in strongly nonlinear materials such as lithium niobate (LN). Sub-wavelength transverse mode confinement in these architectures allows enhanced nonlinear efficiency <cit.>, dispersion engineering for ultrabroadband operation <cit.>, and capability for complex nonlinear photonic circuits <cit.>. 
As a result, the first on-chip OPOs integrated with highly-scalable, small-footprint, nanophotonic circuits have recently been developed <cit.>. Despite these rapid advances, recent nanophotonic integrated OPOs thus far have limited capability for MIR spectroscopy. One reason for this is that established nonlinear integrated photonic platforms utilize a silica undercladding that becomes strongly absorptive past 3 µm <cit.>, limiting MIR performance. Another crucial reason is that engineering nanophotonic OPOs with sufficiently stable and precise tuning over fine spectroscopic lines is challenging. Bulk OPO-based spectroscopy systems usually achieve ideal tuning behavior by engineering the cavity in a singly-resonant configuration with a resonant signal wave and non-resonant, freely-tunable MIR idler wave <cit.>. Developing such wavelength-selective behavior within a high-quality-factor nanophotonic cavity is difficult. This has led previous integrated OPOs to simultaneously resonate signal and idler beams in either doubly- <cit.> or triply-resonant <cit.> configurations, creating complex tuning dynamics undesirable for spectroscopy. Here we demonstrate an efficient, broadly-tunable, continuous-wave integrated MIR OPO and use it for gas spectroscopy. This single-wavelength MIR source complements broadband integrated MIR frequency comb sources <cit.> that can exhibit more complex dynamics, difficult calibration/stabilization, low efficiency, and limited resolution. Pumped with continuous-wave light at λ_p=1 µm, a single dispersion-engineered device exhibits broad tuning over an octave from 1.5–3.3 µm. By engineering a wavelength-selective, high-quality-factor cavity, we realize pump-enhanced singly-resonant MIR OPO operation. The OPO's reliable tuning behavior allows us to measure the spectra of methane and ammonia, demonstrating the spectrosopic potential of OPOs within a fully-chip-integrated platform. We discuss clear paths towards further enhancing the current OPO for widespread, practical use by improving overall system efficiency, near-degenerate performance, and gap-free tuning range. § RESULTS §.§ Device concept and operation Fig. <ref>a illustrates our OPO design concept. An optical cavity incorporates a χ^(2) nonlinear crystal that provides parametric amplification between λ_p = 1 µm pump light and generated signal/idler light at λ_s = 1.5 µm and λ_i = 3 µm (Fig. <ref>a.i). We design the cavity to be strongly resonant for λ_s, weakly resonant for λ_p, and non-resonant for λ_i, classifying it as a pump-enhanced singly-resonant OPO (SRO) <cit.>. This design allows the MIR idler to freely tune for spectroscopy. An effective, simple SRO fine tuning method <cit.> sweeps λ_p while λ_s clamps on a strong cavity resonance, so λ_i tunes freely by energy conservation, e.g., over molecular absorption peaks (Fig. <ref>a.ii). Tuning the temperature and pump wavelength broadly adjusts the OPO output over 1.5–3.3 µm (Fig. <ref>a.iii), which overlaps fundamental vibrational transitions of dozens of small molecules (e.g. CO_2, CH_4, H_2O, and NH_3) important for spectroscopic monitoring. We implement the integrated OPO device (Fig. <ref>b) in a photonic circuit composed of etched LN-on-sapphire ridge waveguides. Deeply-etched LN-on-sapphire photonics, with substrate transparency up to 4.5 µm, have enabled dispersion-engineered broadband MIR generation up to 4 µm <cit.>. We fabricate 15 OPOs with different design parameters on a 12×12 mm LN-on-sapphire chip (Fig. 
<ref>b.i), then focus on the optimal device for the experiment. Periodically poling one of the LN waveguides (Fig. <ref>b.ii) compensates for phase-velocity mismatch and allows broadband parametric gain. We choose the parametric gain waveguide geometry (878 nm LN film, 600 nm etch, and 1.95 µm top width) to enable strong fundamental transverse electric mode confinement at pump/signal/idler wavelengths (Ext Fig. <ref>a) and large parametric gain from modal overlap. Moreover, choosing this geometry produces ultrabroadband gain at degeneracy resulting from near-zero signal/idler group velocity dispersion (GVD) (Sec. <ref> and Methods). The pump-enhanced SRO cavity combines waveguide bends with two crucial engineered elements: the output coupler and intracavity coupler. The output coupler (Fig. <ref>b.iii) is a directional coupler designed for ∼100% transfer of MIR light out of the cavity while only extracting ∼1% of telecom light. The intracavity coupler is an adiabatic coupler designed for broadband, ∼100% transfer of telecom-wavelength light to enable strong signal resonances. To verify the strong cavity modes at λ_s, we sweep resonances with a tunable telecom laser (Ext. Fig. <ref>a), revealing sharp, low-loss signal modes with total quality factor Q_tot = 1.3-1.6 × 10^6 (Fig. <ref>c). This corresponds to ≈12% round-trip loss in the 22.3 mm-length cavity. Extracted intrinsic/extrinsic Q-factors for the undercoupled cavity are Q_i = 1.35-1.7 × 10^6 and Q_ex≈ 20 × 10^6, respectively. High Q-factors extend over our telecom laser's whole tuning range (1500–1640 nm, Ext. Fig. <ref>b). Meanwhile, the 1-µm pump only weakly resonates, with cavity finesse F=2.5–3.5, corresponding to ≈11% total power recirculation and 2× intracavity power enhancement (Ext. Fig. <ref>). To operate the device, we couple continuous-wave pump light onto the chip using a lensed fiber with (33 ± 2)% coupling efficiency (Methods). When parametric gain provided by the pump exceeds round-trip signal loss, the device oscillates, generating signal and idler photons. We collect output light with a multimode fiber (∼3% MIR chip-to-fiber collection efficiency, Methods) and use the idler beam for MIR spectroscopy (Fig. <ref>b). We attribute the few-percent chip-to-fiber collection efficiency to the roughly-cleaved output fiber facet and mismatch between high-NA LN waveguide and NA = 0.2 fiber. Chip-fiber and fiber-chip coupling efficiencies could be improved dramatically to 80–90% using cladding mode-matching waveguides <cit.> and/or placing a high-NA lens on the output (>70–80% efficiency measured on a different chip/setup). §.§ Power characterization Utilizing the characterization setup in Fig. <ref>a, we tune the device to 170 °C and pump near λ_p=1.051 µm to obtain clean non-degenerate parametric oscillation at λ_s=1.56 µm, λ_i=3.21 µm. Because of weak pump resonances (Fig. <ref>b, top), intracavity pump intensity and hence generated signal/idler output (Fig. <ref>b, bottom) varies periodically with λ_p. Because the pump resonance is weak, tuning to a specific λ_p leads to stable continuous-wave oscillation for >10–15 minutes without any cavity or laser stabilization (Ext Fig. <ref>). We then scan λ_p for different pump powers and record maximum generated signal/idler power. We observe clear pump depletion but do not precisely quantify it due to background pump light scattering into the multimode collection fiber. The device begins oscillating with 80 ± 6 mW on-chip threshold pump power (Fig. <ref>c). 
Above threshold, the generated signal/idler powers monotonically increase with pump power. With ∼200 mW on-chip pump power, the device produces a maximum of 29 ± 3 mW on-chip power at 3.2 µm. This power level has been used for portable sensor systems <cit.> and exceeds the typical required power for shot-noise-limited MIR detection (∼0.1 mW) <cit.>. The on-chip power conversion efficiency of signal/idler also increases monotonically within the range of pump power sweep (Fig. <ref>d). We measure a maximum of (15 ± 2)% on-chip power conversion efficiency (45% quantum efficiency) from pump to MIR idler. An ideal OPO produces nearly 100% quantum efficient conversion (≈ 33% power conversion at these wavelengths) <cit.>. Our device's deviation is likely caused by modal/radiative scattering of pump light in waveguide tapers (see Methods), MIR losses from e.g. surface-adsorbed molecules <cit.>, and inefficient MIR light transfer in the output coupler. The measured dependence of the emitted MIR light on input pump power aligns well with numerical modeling of a weakly pump-enhanced SRO (Fig. <ref>c-d, solid lines), verifying that the device behaves as designed. In our modeling (Methods) we assume the measured values of total Q-factor (1.6 million), pump recirculation (11%, Methods), and normalized efficiency (41 %/(W·cm^2)). The numerically-modeled on-chip idler output powers are scaled by 0.46 to account for the effective MIR extraction efficiency, and intracavity signal powers are scaled by 0.013 to account for the intended small (∼1%) signal extraction from the cavity. §.§ Tunability §.§.§ Coarse tunability We tune our OPO's output wavelength over an octave of bandwidth using a combination of temperature and pump wavelength (Fig. <ref>a,b). At the higher temperatures of 100–200 °C we access the “far-from-degenerate” regime with widely-separated signal and idler (λ_s=1.5–1.7 µm, λ_i=3–3.3 µm) (Fig. <ref>a). We observe sufficiently reliable tuning for spectroscopy at these operating temperatures and clean output spectra (Fig. <ref>b). In this regime, we measure MIR output wavelengths up to 3.315 µm at 200 °C, limited by the temperature control range and pump amplifier bandwidth. The high operation temperature is only due to phase matching in this device; future devices can extend deeper into the MIR at lower temperatures by lithographically defining a different poling period. In our device, lower temperatures from 70–90 °C access the “near-degenerate” regime (1.7 µm < λ_s, λ_i < 2.7 µm), exhibiting broad bandwidths and tunability but also some complex multimoded behavior. From 80–100 °C, the OPO sometimes oscillates simultaneously in the near-degenerate and far-from-degenerate regimes. Pump wavelength tuning at a fixed temperature tunes the device reliably and rapidly over a large range (Fig. <ref>a). From 80–200 °C, the >2.8 µm idler tunes roughly linearly with pump wavelength. The fitted tuning slope dλ_i/dλ_p≈ -2 at higher temperatures and increases to -4.2 at 100 °C. This equates to 100–200 nm MIR wavelength tuning at a given temperature with 50 nm of pump tuning. As we further decrease the temperature, wavelength tunability rapidly increases as the device begins oscillating at near-degenerate signal/idler waveguide modes with near-zero GVD. At 70 °C, we operate the device in the anomalous dispersion regime, resulting in a U-shaped tuning curve that spans over 800 nm (Fig. <ref>a) and agrees with simulations (Ext. Fig. <ref>a). 
Near degeneracy, the accessible gain bandwidth broadens from cancellation of odd-order dispersion, allowing oscillation at multiple different signal/idler pairs (Fig. <ref>b). At 80 °C, the device operates near the signal/idler zero-GVD point, resulting in broadband OPO output spanning 1.3 µm, a bandwidth approaching a full octave, at a single temperature (Fig. <ref>c,d). The increasingly broadband near-degenerate OPO as we increase λ_p and approach zero-GVD at 2λ_p agrees well with simulation (Fig. <ref>c) <cit.>. From λ_p≈1075–1090 nm, the single-device, single-temperature, OPO output spans 1.7–2.7 µm. This 65 THz-spanning ultrabroadband gain bandwidth matches that of state-of-the-art pulsed-pump dispersion-engineered thin-film-LN parametric amplifiers <cit.>. To fully harness the broadband OPO operation, future devices could employ an on-chip wavelength control element (e.g. <cit.>) rapidly tunable using LN's electro-optic effect and selective of a particular oscillating mode. Full near-degenerate dispersion-engineering details are described in Methods. §.§.§ Fine tunability We finely tune the SRO's MIR emission wavelength with sufficient control for use in spectroscopy. At a fixed temperature, we tune the pump laser wavelength. For small changes of λ_p, λ_s stays approximately constant without excessive mode hops while the MIR λ_i tunes by energy conservation (Fig. <ref>e). The vertical gaps visible in these fine tuning curves are caused by weak pump resonance enhancement, not signal mode hops. Typical gap-free tuning range is 60–80 pm at 3184 nm (1.8–2.4 GHz), reflecting that the OPO is activated for around one-third of the pump cavity FSR (5.5 GHz, Fig. <ref>a). Eliminating the weak pump resonance in an optimized fully-singly-resonant cavity design will allow broader gap-free tuning range. Despite this discontinuous tuning at a fixed temperature, adjusting the chip temperature by only 1 °C results in nearly uniform MIR wavelength coverage as different signal modes are selected to oscillate. The signal mode hops when λ_p is detuned sufficiently large amounts (Methods, Ext. Fig. <ref>a). §.§ Proof-of-concept spectroscopy §.§.§ OPO spectroscopy of methane We direct part of the output idler light to a low-pressure (20 Torr) methane gas cell (Fig. <ref>a) to measure its absorption spectrum. Tuning the device to 151 °C with λ_p ≈ 1041  nm shifts the OPO idler output to a cluster of methane absorption lines at 3184 nm. To sweep the generated MIR output over the methane lines, we sweep λ_p in a narrow range (Fig. <ref>a). During this measurement, the OPO output signal wavelength λ_s stays nearly constant, while the idler wavelength λ_i increases with time (Fig. <ref>a). The portion of the MIR light passing through the methane cell couples into a photodiode, generating the voltage signal V_1(t). The reference beam of MIR light couples into a second photodiode, generating a reference signal V_2(t). We plot an example trace of the unprocessed relative gas cell transmission scan V_1(t)/V_2(t) in Fig. <ref>b, which scans over two absorption peaks. To calibrate the wavelength axis λ_i(t) of the swept MIR beam, we measure λ_p(t) and λ_s(t) and infer λ_i(t) by energy conservation. This method allows us to use precise and more readily available near-IR wavelength measurement tools to infer MIR emission properties. We measure λ_p(t) with a wavemeter in the input path (Fig. <ref>b). 
Meanwhile, λ_s(t) is measured by beating a portion of the generated signal beam against a reference laser on a fast photodiode. The beatnote is read in an RF spectrum analyzer, from which we extract λ_s(t) (Fig. <ref>b). The observed small (5 pm) redshift of λ_s(t) is much smaller than the cavity free spectral range (∼45 pm), indicating that the signal mode does not mode hop during this scan, but only shifts slightly, likely due to heating at higher OPO power. We tune the device to four different absorption transitions of methane near 3184 nm and collect spectra (Fig. <ref>c). After background subtraction (see Methods), collected experimental spectra agree well with HITRAN reference curves <cit.>. The clean, stable MIR OPO output easily resolves the low-pressure, Doppler-broadened methane peaks with linewidths down to 10 pm/300 MHz/.01 cm^-1. This spectral resolution highlights an advantageous aspect of the widely-tunable single-wavelength integrated source compared to an integrated frequency comb, which in integrated incarnations have few GHz–100 GHz resolution limited by the cavity free spectral range <cit.>. §.§.§ Resonant DFG spectroscopy of ammonia In addition to operating as an OPO, the broadband operation and singly-resonant nature of our device makes it attractive as a source of MIR light generated by difference frequency generation (DFG). Here, we pump the OPO cavity below threshold, now leaving λ_p(t) constant over time (Fig. <ref>d). We instead seed the device with a scanning telecom-band laser λ_s(t). Injected seed builds up strongly when λ_s(t) matches a signal cavity resonance and generates a bright MIR idler beam by DFG. Hence, λ_i(t) consists of discretely-spaced MIR peaks (Fig. <ref>d). In our device's SRO cavity, peaks at λ_i will be equally spaced in frequency at the signal cavity FSR (≈5.6 GHz) over the entire gain bandwidth. By contrast, in a doubly- or triply-resonant device, generated MIR peaks would be much more sparse because of the requirement of simultaneous signal and idler resonance. The wide availability of rapidly tunable telecom-band lasers, including on-chip and LN-integrated devices <cit.>, makes this resonant DFG technique highly accessible. We demonstrate the broadband resonant DFG spectroscopy by detecting atmospheric pressure ammonia, which exhibits broad lineshapes with 0.5–10 nm peak widths. As in the methane experiment, the MIR output splits into a gas cell and reference path. We measure λ_p with a wavemeter, assume λ_s(t) sweeps linearly with time, then infer λ_i(t) by energy conservation. Fig. <ref>e shows a typical trace of discrete MIR peaks at detector 2 vs. λ_i(t), where λ_p = 1043 nm and λ_s(t) sweeps from 1535 to 1620 nm. Generated equally-spaced MIR lines (Fig. <ref>e, inset) are strong over an ∼100 nm bandwidth (equates to sweeping λ_s only 30 nm), and the >10 µW MIR output can be detected directly by a DC-coupled photodiode. By dividing the signal path's discrete peak heights by those from the reference path, we obtain broadband spectra of ammonia with 5 GHz resolution (Fig. <ref>f). The presented data consists of two scans, each with high signal-to-noise ratio over 100 nm MIR bandwidth. Adjusting temperature and λ_p tunes the center wavelength of the two scans exactly as the OPO is coarsely tuned (Fig. <ref>b). We resolve ammonia's narrower features with ∼0.5 nm peak width alongside broader 10 nm peaks in agreement with the HITRAN database. 
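As a concrete illustration of the wavelength-calibration step used in both experiments above, the MIR idler axis can be reconstructed from the two near-IR measurements by energy conservation, 1/λ_i = 1/λ_p − 1/λ_s, and the gas-cell trace is then normalized by the reference trace. The short sketch below shows this bookkeeping; all numerical values are placeholders rather than measured data.

```python
# Sketch of the MIR wavelength calibration by energy conservation and the
# normalization of the gas-cell signal by the reference detector.
# All numbers below are placeholders, not measured values from the experiment.
import numpy as np

def idler_wavelength(lambda_p_nm: np.ndarray, lambda_s_nm: np.ndarray) -> np.ndarray:
    """Energy conservation: 1/lambda_i = 1/lambda_p - 1/lambda_s (wavelengths in nm)."""
    return 1.0 / (1.0 / lambda_p_nm - 1.0 / lambda_s_nm)

# Example: pump swept over a narrow range while the resonant signal stays clamped.
lambda_p = np.linspace(1040.95, 1041.05, 501)       # swept pump wavelength (nm)
lambda_s = np.full_like(lambda_p, 1546.0)           # clamped signal wavelength (nm, placeholder)
lambda_i = idler_wavelength(lambda_p, lambda_s)     # inferred MIR idler axis (~3187 nm here,
                                                    # i.e. near the 3184 nm methane lines)

# Relative transmission of the gas cell: detector 1 (cell) over detector 2 (reference).
V1 = np.ones_like(lambda_i)   # placeholder gas-cell photodiode trace
V2 = np.ones_like(lambda_i)   # placeholder reference photodiode trace
transmission = V1 / V2

print(lambda_i.min(), lambda_i.max())
```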
§ DISCUSSION In summary, we have designed and implemented an integrated nanophotonic OPO and demonstrated operation for MIR spectroscopy. Such a device inherits the useful advantages of bulk OPOs as MIR spectroscopic light sources (widely-available near-IR laser pumps, high efficiency, broad tunability, and high resolution) while adding the benefits of nanophotonic integration (reduced footprint, better stability, lower threshold powers, broadband operation via dispersion engineering, and integration capability). The key enabling advance here is the fabrication of high-quality factor, wavelength-selective cavities built from the MIR-compatible LN-on-sapphire platform. With the miniaturization of such a useful MIR spectroscopic technology onto a fully-chip-integrated platform, a plethora of applications can be envisaged, from deployable gas monitoring systems to portable, real-time MIR biosensors. Our work outlines a clear path for improving the device sufficiently to realize powerful and deployable sensors. As highlighted in the text, including an electro-optically-tunable wavelength-selective intracavity etalon would allow precise, rapid, and low-power control over the broad demonstrated gain bandwidths. In addition, further gains in efficiency are important and within reach. These will come from improvements in input fiber-to-chip and output chip-to-detector coupling efficiencies. Simulations show that utilizing cladding mode matching waveguides and/or free space optics would raise edge coupling efficiencies to >70%. Moreover, the simulated normalized efficiency is ∼7× larger than the experimentally-obtained value (43 %/(W·cm^2)), likely due to fabrication imperfections preventing coherent nonlinear enhancement over the full waveguide length. By improving waveguide fabrication we expect threshold powers as low as ∼10 mW, within the output range of heterogeneously integrated lasers near 1 µm <cit.> and thus potentially enabling full pump-OPO on-chip integration. § METHODS §.§ Coupler, bend, and taper design details §.§.§ Intracavity coupler The intracavity coupler (Ext. Fig. <ref>b) is an adiabatic coupler designed to weakly couple pump light at 1 µm and strongly couple signal light at >1.5 µm. The coupler was designed using local coupled mode theory simulations of the slow transfer of light from the waveguide emerging from the resonator bend to the poled section waveguide. We utilize a symmetric adiabatic waveguide coupler where two neighboring waveguides of width 0.7/1.0 µm are tapered to widths 1.0/0.7 µm width, respectively, over 1 mm length. In order to maximize the adiabatic transition near the degenerate point where both waveguides have equal width (0.85 µm), the coupler is divided into three sections: two 150 µm-length fast-tapered couplers at the beginning and end of the coupler and a slowly-varying 700 µm section in the middle of the coupler. The slowly-varying middle section accounts for 20% of the total waveguide width change, while the fast-varying sections account for the remaining 80%. §.§.§ Output and diagnostic couplers The output coupler (Ext. Fig. <ref>c) is a simple directional coupler consisting of two identical 2 µm-width waveguides separated by a 0.9 µm coupling gap over a length of 50 µm. The diagnostic coupler (Ext. Fig. <ref>) uses the same coupling geometry as the output coupler in order to obtain ∼1% coupling of telecom light in/out of the cavity for measuring the cavity resonances (Fig. <ref>c). 
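For intuition, the intracavity coupler geometry described above can be summarized by a simple piecewise-linear width profile. The sketch below assumes that the 80% of the width change assigned to the fast sections is split equally between the two 150 µm ends; this split is our reading of the description, not a value taken from the design files.

```python
import numpy as np

def coupler_width(z_um, w_in=0.7, w_out=1.0, L_fast=150.0, L_slow=700.0):
    """Width (um) of one coupler arm vs. position z (um) along the 1 mm coupler.

    Three sections: fast taper (150 um), slow taper (700 um), fast taper (150 um).
    The slow middle section carries 20% of the total width change; the two fast
    sections are assumed here to carry 40% each.
    """
    total = w_out - w_in
    z1, z2 = L_fast, L_fast + L_slow              # section boundaries at 150 and 850 um
    w1 = w_in + 0.4 * total                        # width at the end of the first fast section
    w2 = w1 + 0.2 * total                          # width at the end of the slow section
    z = np.asarray(z_um, dtype=float)
    return np.piecewise(
        z,
        [z < z1, (z >= z1) & (z < z2), z >= z2],
        [lambda z: w_in + 0.4 * total * z / L_fast,
         lambda z: w1 + 0.2 * total * (z - z1) / L_slow,
         lambda z: w2 + 0.4 * total * (z - z2) / L_fast],
    )

z = np.linspace(0.0, 1000.0, 5)
print(coupler_width(z))                        # arm tapering 0.7 -> 1.0 um
print(coupler_width(z, w_in=1.0, w_out=0.7))   # neighboring arm tapering 1.0 -> 0.7 um
```

Both arms cross 0.85 µm at the coupler center, the degenerate point discussed above.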
§.§.§ Resonator bends Following the poled section and the output coupler, the waveguides are tapered to 1 µm width to ensure that the resonator is single-moded at telecom wavelengths for cleaner mode structure. This also effectively filters out MIR light >3 µm that cannot be well-confined in the smaller waveguide. The waveguide bends are Euler bends <cit.>. §.§.§ Waveguide tapers Pump light incident onto the chip edge couples into a 1.7 µm-width waveguide, then tapers down to 0.7 µm as it reaches the adiabatic coupler. After the adiabatic coupler, the light is confined in a 1 µm-width waveguide before it tapers up to the 1.95 µm-width periodically-poled 9.3 mm gain section. At each of these tapers, fundamental TE mode pump light can be scattered into other modes or free space, but the exact loss rate cannot be extracted from the current chip. §.§ Device fabrication Device fabrication starts with a commercial MgO-doped, x-cut LN film on a c-cut sapphire substrate (NGK Inc.). The LN film is thinned using an ion mill, then poling electrodes (poling period Λ=6.72 µm) are patterned using electron-beam lithography and Cr metal liftoff. The LN is poled using high-voltage pulses (∼900 V), then poled domains are monitored using second-harmonic-generation microscopy. Electrodes are stripped using Cr etchant. Waveguides are patterned with electron-beam lithography (JEOL 6300FS 100 kV) and HSQ FOX-16 resist followed by argon ion mill etching (Intlvac). Finally, the chip is laser stealth diced to create clean edge facets for light in/out coupling. §.§ Measurement setup and calibrations §.§.§ General setup A block diagram of the general setup used for measurements is shown in Ext. Fig. <ref>. Our pump light source is a tunable external cavity diode laser (Toptica DL Pro). The laser can tune coarsely from 1010–1100 nm with 0.1 nm resolution, and finely within 40 GHz using a voltage-controlled piezo. The laser output is fiber coupled, then routed through a 99:1 splitter where the 1% tap is sent to a near-IR wavemeter (Bristol Instruments Model 621). Light from the 99% port is amplified in a ytterbium-doped fiber amplifier with 1040–1090 nm operating bandwidth (Civil Laser), then sent to a variable optical attenuator (OZ Optics). To calibrate power sent to the chip, 1% of the light is tapped off to a powermeter (Newport), then the rest is sent to a 1/1.5 µm wavelength division multiplexer, then into a lensed Hi1060 single mode fiber that couples light to the OPO device. The amount of intracavity pump light is measured by collecting the light exiting the bottom bus waveguide on the left of Ext. Fig. <ref> with a lensed multimode silica fiber. The light is then sent through a fiber collimator and a short-pass filter to remove 1.5-µm output generated by the OPO, and focused onto an InGaAs detector. The output at telecom and MIR wavelengths is collected using a flat-cleaved MIR-compatible multimode fiber (Zinc fluoride glass, La Verre Fluoré). The output light is split into several paths. To detect the MIR light, we focus with a CaF lens (Thorlabs) then through a ZnSe OD1 ND filter (Thorlabs) to avoid saturating the detector. Finally, the light passes through a longpass filter (Ge) so that only MIR light reaches the MCT detector (Thorlabs PDAVJ5). To detect telecom, we use a 1350 nm longpass filter (Thorlabs) before focusing light onto an InGaAs detector (which does not detect MIR photons).
Finally, a portion of the output light is coupled into an InF MIR-compatible multimode fiber (Thorlabs) and sent into a Yokogawa Optical Spectrum Analyzer (AQ6376). §.§.§ Pump fiber-to-chip coupling efficiency We couple pump light with wavelength 1046–1056 nm in/out of a straight waveguide using two lensed SMF fibers. By dividing the power collected from the output lensed fiber by the power sent to the input lensed fiber, we infer the pump power coupling efficiency per edge of η̃_p, fiber-to-chip = (33 ± 2) %. We assume here that pump propagation loss is negligible. The uncertainty in the pump fiber-chip coupling comes from ripples in the waveguide throughput observed as λ_p is tuned from 1046–1056 nm. We attribute the ripples to excitation of higher-order pump modes in the multimoded LN waveguide, because a simultaneous measurement of parametric gain (which depends only on fundamental TE0 mode power) during the same wavelength scan does not follow the same ripples. §.§.§ Nonlinear efficiency The normalized efficiency of the periodically-poled gain section is calculated by measuring optical parametric amplification (OPA) on a straight waveguide adjacent to the OPO that was poled using the same electrodes (Ext. Fig. <ref>c). In this experiment we couple both 1 µm pump and 1.5 µm signal onto the straight waveguide. We modulate the pump with a 10 kHz square-wave using an acousto-optic modulator (Aerodiode). The periodic pump modulation periodically provides gain to the signal wave, which we measure with a lock-in amplifier. The relationship between measured signal gain and nonlinear efficiency can be derived starting from the coupled wave equations: ∂_z A_p = -ω_p/ω_i√(η_DFG) A_s A_i ∂_z A_s = ω_s/ω_i√(η_DFG) A_p A_i^* ∂_z A_i = √(η_DFG) A_p A_s^*, where A_p,s,i(z) are the power-normalized pump (ω_p), signal (ω_s), and idler (ω_i) amplitudes with units of √(W) and η_DFG is defined as the normalized efficiency with units of %/(W·cm^2). For optical parametric amplification with an undepleted pump, these equations can be reduced to: ∂_z a_s = γ a_i^* ∂_z a_i = γ a_s^*, where a_s,i are photon flux-normalized signal and idler amplitudes and γ = -i√(ω_s/ω_i)√(η_DFG)A_p(0). The solution to this system is well-known <cit.>: a_s(z) = cosh(|γ|z) a_s(0) - i sinh(|γ|z) a_i^*(0) a_i^*(z) = i sinh(|γ|z) a_s(0) + cosh(|γ|z) a_i^*(0), so with zero initial idler input (a_i=0) and fixed poling length L_pol, the telecom amplitude experiences the power gain: signal power gain ≡ (|a_s(L_pol)|^2-|a_s(0)|^2)/|a_s(0)|^2 = cosh^2(|γ|L_pol) - 1 ≈η_DFG(ω_s/ω_i)P_p(0)L_pol^2 in the low gain limit. At a fixed λ_p and pump power, we can sweep λ_s and monitor the OPA gain (Ext. Fig. <ref>d). The OPA gain is maximized when phase-matching is optimized. We then track the maximal phase-matched OPA gain as a function of P_p(0) (Ext. Fig. <ref>e). Fitting signal power gain as a function of pump power (using Eq. <ref>) for known L_pol = 0.93 cm allows extraction of the nonlinear efficiency η_DFG = 43.5 %/(W·cm^2). The simulated normalized efficiency is around 300 %/(W·cm^2). §.§.§ MIR collection efficiency for power sweep We calibrate the relationship between MIR on-chip power and detected voltage in the MIR MCT detector by simultaneously comparing OPA and difference-frequency-generation (DFG) processes in a straight nonlinear waveguide. For this calibration, we send a modulated pump (10 kHz square wave) along with a CW telecom signal wave onto a straight periodically-poled waveguide.
The modulated pump produces parametric gain modulation in the telecom signal according to Eq. <ref>. Because of photon number conservation, the amplification of telecom photons is also accompanied by generation of the same number of MIR photons by DFG. The expected amount of detected MIR idler power is then: P_i,det = η̃_i,chip-to-det P_i(L_pol) = η̃_i,chip-to-detη_DFGP_p(0)P_s(0)L_pol^2. Dividing Eqs. <ref> and <ref> we can solve for the MIR chip-to-detector collection efficiency: η̃_i,chip-to-det = P_i,det/(signal power gain · (ω_i/ω_s) · P_s(0)). Using this method eliminates the contribution of any uncertainties in nonlinear efficiency η_DFG and on-chip pump power. The uncertainties are dominated instead by P_i,det and P_s(0). With an on-chip pump power of 47.9 mW and on-chip signal power of P_s(0) = 0.8 ± 0.06 mW, we measure a telecom signal gain of 3.28 %. We also measure a DFG idler power of P_i,det=17.8 ± 1.1 nW. Combining these values leads to a MIR chip-to-detector collection efficiency of η̃_i,chip-to-det = (0.16 ± 0.016)%. Dividing out the attenuation from the OD1 filter and 50:50 beamsplitter, this means that the MIR chip-to-fiber collection efficiency is around 3%. §.§.§ Telecom collection efficiency for power sweep For the telecom calibration, we first in-couple and out-couple 1550 nm telecom light onto a straight waveguide using Hi1060 lensed fibers. By measuring the power sent into the input fiber and collected from the output fiber, we extract a telecom fiber-to-chip power coupling efficiency of (30 ± 2)%. Then we switch the output telecom detection chain to that shown in Ext. Fig. <ref>. By comparing the collected telecom power measured directly before the InGaAs telecom detector to the known on-chip power, we extract the telecom chip-to-detector collection efficiency of (2 ± 0.2)%, which is on the same order as that for MIR (previous section). We also calibrate the detector conversion efficiency to be 6.2 V/mW, allowing for measured voltage to be converted to on-chip power. §.§ Pump resonance characterization Using the detection setup shown in Ext. Fig. <ref>a (more detail described in Ext. Fig. <ref>), pump resonances are monitored for pump powers below the OPO threshold and are plotted in Ext. Fig. <ref>b. To fit these curves, we develop a simple model for the pump cavity. In steady-state, the intracavity pump field A_p,cav obeys the equation: A_p,cav = √(T_p)A_p,in + √(R_p)√(1-ℓ_p)A_p,cav e^-ikL where R_p is the pump power coupling ratio across the waveguide gap inside the intracavity coupler, T_p = 1-R_p is the pump power transmission ratio of light that stays on the same waveguide through the coupler, A_p,in is the pump amplitude on the input waveguide, ℓ_p is the round-trip power loss of the pump within the cavity excluding the intracavity coupler region, k is the propagation constant of the pump, and L is the round-trip cavity length. Hence the intracavity pump power buildup is found to be of the common Airy function form: |A_p,cav/A_p,in|^2 = B/(1+(2F/π)^2sin^2(kL/2)) where B = T_p/(1-√(R_p (1-ℓ_p)))^2 is a constant scaling factor that represents the pump power buildup on-resonance and (2F/π)^2 = 4√(R_p (1-ℓ_p))/(1-√(R_p (1-ℓ_p)))^2. Here, F represents the cavity finesse. The output power |A_p,out|^2 will be directly proportional to the intracavity power. The pump resonance curve shape is entirely determined by F, while any collection/detection efficiencies can be incorporated into the scaling factor B. Since we only want to determine F, we fit the resonances measured in Ext. Fig.
<ref>b to Eq. <ref> along with a constant offset factor that comes from stray light coupling into the multimode collection fiber. The resultant curve fits are plotted in Ext. Fig. <ref>b along with the fitted value of finesse F. The unfitted peaks do not contribute to OPO and thus represent TM modes. The fitted value of finesse varies from 2.5–3.5 (Ext. Fig. <ref>c). The variation in finesse arises because the pump cavity spectrum is sensitive to small temperature variations. From the value of finesse, Eq. <ref> can be solved for the total pump power recirculation, ζ = R_p(1-ℓ_p). For the fitted values of finesse, ζ ranges between 8–14%. Given ζ, we can estimate the pump power buildup inside the cavity on resonance using Eq. <ref> and assuming T_p = 1-R_p. R_p is only known if we assume a value of ℓ_p. We can reasonably assume ℓ_p is small and similar to the round-trip loss at telecom wavelengths (12% for measured Q_tot = 1.5×10^6). In this regime, the pump losses are dominated by the intracavity coupler, and R_p≈ R_p(1-ℓ_p) (Ext. Fig. <ref>d). With this result in Eq. <ref>, from Ext. Fig. <ref>d we find that the pump power buildup on resonance ranges from 1.75–2.15 for the fitted values of total power recirculation ζ = 8–14%. §.§ Power sweep modeling To model the power out vs. power in data presented in Fig. <ref>b,c, we solve numerically Eq. <ref> for field evolution through the periodically poled gain section for many round trips until the device reaches steady state. To implement this, for the first round trip we initialize pump, signal, and idler amplitudes inside the cavity at z=0 (beginning of the poled region) as: A^(k=1)(z=0) ≡[ A^(1)_p(z=0); A^(1)_s(z=0); A^(1)_i(z=0) ] = [ √(P_p,in(1-R_p)); A_s,0; 0 ] where the superscript k=1 denotes the first round-trip, A_s,0 is a small value representing the random subthreshold signal field fluctuations, and the factor of (1-R_p) within A^(1)_p(0) comes because the pump power injected onto the chip P_p,in needs to be multiplied by (1-R_p) to represent the pump power that enters the periodically poled section (see Ext. Fig. <ref>a). Next, the three waves are propagated through the periodically poled region by numerically solving Eq. <ref>, yielding A^(k=1)(z=L_pol), where η_DFG is assumed to be 40 %/(W·cm^2) (see Sec. <ref>) and L_pol = 0.93 cm. For successive round-trips, k>1 and the initial amplitudes at z=0 are: A^(k)(0) = [ √(P_p,in(1-R_p)) + A^(k-1)_p(L_pol) √(1-ℓ_p)√(R_p); A^(k-1)_s(L_pol) √(1-ξ_s); 0 ] where (1-ℓ_p)R_p is the round-trip pump power recirculation, chosen to be 11% (see Sec. <ref>), and ξ_s ≈ 12% is the round-trip signal power loss based on measured Q_tot (Fig. 1c). The idler is explicitly assumed not to resonate. We run the simulation for N_RT = 5000 round trips, which allows the system to reach steady-state. The outputs of the simulation used for Fig. <ref>b,c are steady-state idler output power |A^(N_RT)_i(L_pol)|^2 and steady-state intracavity signal power |A^(N_RT)_s(L_pol)|^2. §.§ Coarse tuning measurements To obtain the OPO coarse tuning data presented in Fig. <ref>a-c and Ext. Fig. <ref>e, a portion of the output light is coupled into an OSA as shown in Ext. Fig. <ref>. For each temperature, the pump wavelength is tuned coarsely in 2–5 nm steps from 1040–1090 nm. At each coarse wavelength step, the pump wavelength is finely tuned until oscillation occurs, then wide-spanning OSA scans are taken. The noise floor of the scans is around -70 dBm. The peaks of the OSA scan are then extracted and plotted as in Fig.
<ref>a. For temperatures above 100 °C, the device has clean, non-degenerate output at 1.5 µm and 3 µm, with typical measured power around -40 to -30 dBm after being coupled into the OSA. §.§ Near-degenerate OPO: measurement and simulation For temperatures below 100 °C, the OPO approaches degenerate operation. Because of the broad gain bandwidth near degeneracy, the device can oscillate at different OPO wavelengths with slight perturbations to pump wavelength within a given coarse wavelength step. Moreover, the device can sometimes exhibit multimode oscillation with 2 or more signal/idler mode pairs. To capture all the wavelengths the device oscillates at, we take several OSA scans for each coarse wavelength step. In Fig. <ref>a, Fig. <ref>d, and Ext. Fig. <ref>e we plot the locations of OSA trace peaks with peak power >-45 dBm. Choosing our specific waveguide geometry enables near-degenerate OPO operation near the zero-GVD point. To illustrate this, we simulate the OPO tuning curves for the design geometry: 875 nm LN film thickness, 600 nm etch, and 1.95 µm top width. Simulating the modal effective index for pump wavelengths 1000–1100 nm and signal/idler wavelengths from 1400–3500 nm allows us to plot the phase mismatch Δ k_0 = k(λ_p) - k(λ_s) - k(λ_i) vs. OPO signal wavelength (Ext. Fig. <ref>a-b). Temperature-dependent refractive indices of LN are obtained from Umemura et. al. <cit.> and of sapphire from Thomas et. al. <cit.>. In both simulation temperatures 70 and 80 °C, the Δ k_0(λ_s) curves exhibit upward curvature near degeneracy for λ_p≤1060 nm that gradually flattens as λ_p increases to 1100 nm. The curvature of Δ k_0(λ_s) at degeneracy is directly related to the GVD, which can be seen by Taylor-expanding the phase mismatch around the degenerate frequency (≡Δω = 0): Δ k_0 (Δω) = k(ω_p) - k(ω_p/2 + Δω) - k(ω_p/2 - Δω) ≈Δ k_0(Δω=0) - ( β_2 )_ω_p/2 (Δω)^2 where the GVD β_2 = ∂^2 k /∂ω^2. Hence positive curvature of Δ k_0 indicates anomalous dispersion (β_2 < 0), and as β_2→ 0, the phase mismatch curves should flatten. This is verified in Ext. Fig. <ref>a-b by plotting GVD as a function of wavelength. To quasi-phasematch the process, we include the poling period Λ(T). Incorporating the thermal expansion of LN at temperature T [degree C] and the period at 25 °C Λ_0 = 6.57 µm, results in Λ(T) = Λ_0[1+(1.59×10^-5)(T-25) + (4.9×10^-9)(T-25)^2] <cit.>. With the addition of the periodic poling the total phasematch becomes Δ k = Δ k_0 - G where G = 2π/Λ(T). The signal-wave gain from propagation through the periodically poled region can be found analytically by solving Eq. <ref> in the presence of total phase mismatch Δ k <cit.>: a_s(z)e^iΔ k z/2 = [ cosh(gz) + iΔ k/2gsinh(gz)] a_s(0), where g = |γ|√(1-(Δ k/2|γ|)^2). The simulated gain vs. Δ k results are plotted in Ext. Fig. <ref>a-b for P_pump=600 mW, η_DFG=40 %/(W·cm^2), and L=0.93 cm. Plotting the signal gain experienced for each combination of λ_p and λ_s constructs the OPO tuning color plots. At 70 °C, the Δ k(λ_s) curves with strong upward curvature and hence anomalous dispersion experience the highest gain, leading to a U-shaped tuning curve (Ext. Fig. <ref>a). In contrast, at 80 °C, the Δ k(λ_s) curves with the highest gain have flat curvature and hence near-zero-GVD, leading to a T-shaped tuning curve (Ext. Fig. <ref>b) and ultrabroad OPA gain bandwidth. To match the experimental tuning behavior with simulation, we include a small variation in film thickness across the poled waveguide length. 
Namely, we assume the LN film thickness Y(z) = Y_0 + Δ Y(z) where the nominal film thickness is 875  nm and the spatially-dependent film thickness change Δ Y(z) = -Δ Y_tot/2 + bz + az^2 where the total film thickness variation Δ Y_tot = 4 nm, b = b_0 - aL, a = ϵ b_0 and b_0 = (Δ Y(L) - Δ Y(0))/L. The 4 nm simulated total film thickness variation over 9.3 mm length was chosen as it is the minimum film thickness where simulated results qualitatively match experiment. Moreover, the chosen thickness variation agrees well with thickness measurements performed by the LN-on-sapphire vendor, which indicate around 0.4 nm LN thickness variation per mm length. The factor ϵ describes the curvature of the film thickness variation, as depicted in Ext. Fig. <ref>c. To calculate how film thickness variation profiles affect the signal wave gain, we solve Eq. <ref> numerically in the presence of spatially-dependent phase mismatch Δ k = Δ k_0 - G + Δ k_h(z) where the phase mismatch due to height variations Δ k_h(z) = dΔ k_h/dYΔ Y(z) and the ratio of phase mismatch shift to change in film thickness dΔ k_h/dY = 8.5 cm^-1 / nm is found from simulation. Specifically, we solve: d/dz a_s = γexp[-i∫_0^z Δ k(z')dz' ] a_i^* d/dz a_i^* = γ^* exp[i∫_0^z Δ k(z')dz' ] a_s The resultant signal gain as a function of the constant part of phase mismatch Δ k_0 - G is plotted in Ext. Fig. <ref>d. For linear film thickness variation (ϵ = 0), the gain curve vs. phase mismatch has broadened, reduced in magnitude, and exhibits two major peaks instead of one major peak found when Δ k_h(z) = 0 (Ext. Fig. <ref>a,b). The addition of quadratic film thickness variation (ϵ > 0) makes the gain curve slightly asymmetric, which matches experimental results. Simulated OPO tuning curves along with experimental data for T = 30–87.5 °C, Λ_0 = 6.58 µm, and ϵ=1 are shown in Ext. Fig. <ref>e. The experimental data qualitatively matches simulation. As temperature increases in both simulation and experiment, the observed OPO output tuning curves shift upwards in the plot (towards longer pump wavelengths). 40 °C (labeled with ⋆) and 70 °C (⋆⋆) both present U-shaped tuning curves, while 80 °C (⋆⋆⋆) presents a T-shaped tuning curve. To understand the results, we highlight the simulation results of 40 °C, 70 °C, and 80 °C (Ext. Fig. <ref>f-h). At 40 °C, the U-shaped tuning curve arises from a secondary gain peak amplifying anomalous dispersion regions (Ext. Fig. <ref>f). By 70 °C, a stronger U-shaped tuning curve arises from the main gain peak amplifying the same anomalous dispersion regions (Ext. Fig. <ref>g). Finally, at 80 °C, the U-shape transforms into a T-shape when the main gain peak amplifies regions of near-zero-GVD, leading to broadband OPO in both simulation and experiment (Ext. Fig. <ref>h). §.§ Fine tuning characterization OPO fine tuning data (Fig. <ref>e, Ext. Fig. <ref>a) is obtained by taking ∼10 100-pm-wide pump wavelength piezo scans and recording the wavelength of both pump and generated OPO signal. To do so, the measurement setup in Ext. Fig. <ref> is modified; generated telecom-wavelength OPO output is collected in a single-mode lensed fiber to increase wavelength resolution. Directly connecting this fiber to a rapidly-scanning OSA allows determination of the telecom-band wavelength. The plotted idler wavelength is calculated based on energy conservation. OPO wavelengths collected during a single OPO “cluster" (Fig. <ref>a) are plotted as dots connected by lines. Data is recorded at 11 temperatures between 150.7–151.7 °C. 
The presented fine tuning data (Ext. Fig. <ref>a) exhibits three regimes: (1) Clean tuning from λ_p=1041–1041.5 nm at λ_s≈1547.5 nm, (2) Strong mode hop at λ_p= 1041.6 nm between λ_s≈ 1547.5, 1549.5 nm, and (3) Clean tuning from λ_p=1041.7–1042 nm at λ_s≈1549.5 nm. The signal mode transition from λ_s = 1547.5 nm → 1549.5 nm as λ_p = 1041 nm → 1042 nm occurs because changing λ_p shifts the nonlinear gain spectrum. To clarify how the shift in gain spectrum affects which modes oscillate, we measure the amplification experienced by cavity modes (Ext. Fig. <ref>b-e). With no pump laser power, the cavity mode spectrum is obtained by sweeping the wavelength of the tunable telecom laser (Santec TSL-710) input to the chip and detecting the output in an InGaAs photodiode (Newport 1623). The reduced-height peaks in the mode-spectrum structure come from mode crossings. Then we pump the device below the OPO threshold while scanning the tunable telecom laser and measuring the output at telecom wavelengths. The pump laser amplifies all the telecom cavity modes within its gain bandwidth (Ext. Fig. <ref>c-e). The highest-peaked modes in these plots are those that experienced the highest net gain and thus oscillate when the device is above threshold. When λ_p=1041.33 nm (Ext. Fig. <ref>c), the cavity modes near 1547.5 nm experience the most gain. When λ_p shifts near 1041.6 nm, two groups of modes near 1547.5 and 1549.5 nm compete for gain (Ext. Fig. <ref>d), explaining the mode crossing observed in Ext. Fig. <ref>a. Finally, when λ_p= 1041.94 nm, the modes at 1549.5 nm have high gain. §.§ Methane OPO spectroscopy As shown in Fig. <ref>a, the measurement setup for methane spectroscopy slightly modifies the general setup shown in Ext. Fig. <ref>. The methane cell (7.5 cm length, 20 Torr, Triad Technologies) is placed before an MCT detector (Thorlabs PDAVJ5), while the reference arm uses an InAsSb detector (Thorlabs PDA07P2). The generated signal wavelength is measured by heterodyning it with blueshifted output from a tunable telecom laser (Santec TSL-710) on a 12 GHz fast photodiode (Newport 1554-B). The wavelength of the reference telecom laser is read directly before and after a given measurement sweep using the wavemeter (Bristol Instruments Model 621). The beatnote is read using an RF spectrum analyzer (Rohde & Schwarz FSW26). In Fig. <ref>c we present methane absorption features for four separate scans. The MIR wavelength axis is obtained by combining the interpolated λ_p and λ_s axes by energy conservation. The presented scans in Fig. <ref>c are background-corrected to match the theoretical curves from HITRAN by fitting the experimental transmission data to T(λ) = V_1(λ) / V_2(λ) = x_1 e^-α(λ) L + x_2 where the fitting parameter x_1 accounts for transmission/detection efficiency differences in the two paths generating voltages V_1(t) and V_2(t), while x_2 accounts for MIR light hitting detector V_1 that did not couple into the gas cell (the MIR beam diameter was ∼2x the gas cell diameter when viewed with an IR viewing card). The background fitting process did not affect the wavelength axis. As a result, the experimental background-subtracted data plotted in Fig. <ref>c is (T(λ) - x_2)/x_1. HITRAN data plotted in Fig. <ref>c comes from calculating the absorption coefficient α(λ) from the HITRAN database for methane at 20 Torr, then plotting T=exp(-α(λ) L).
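A minimal sketch of this background-correction fit is shown below, with placeholder arrays standing in for the measured V_1/V_2 trace; the Gaussian α(λ) is a hypothetical stand-in for the HITRAN absorption coefficient, not real line data.

```python
import numpy as np
from scipy.optimize import curve_fit

L_CELL_CM = 7.5  # methane cell length used above

def alpha_stub(lam_nm):
    # Placeholder absorption coefficient (cm^-1); in practice taken from HITRAN.
    return 0.05 * np.exp(-((lam_nm - 3184.0) / 0.02) ** 2)

def transmission(lam_nm, x1, x2):
    # T(lambda) = x1 * exp(-alpha(lambda) * L) + x2
    return x1 * np.exp(-alpha_stub(lam_nm) * L_CELL_CM) + x2

lam = np.linspace(3183.9, 3184.1, 200)        # wavelength axis from energy conservation
rng = np.random.default_rng(0)
measured = transmission(lam, 0.8, 0.15) + rng.normal(0.0, 0.002, lam.size)  # stand-in for V1/V2

(x1, x2), _ = curve_fit(transmission, lam, measured, p0=[1.0, 0.0])
background_subtracted = (measured - x2) / x1   # quantity compared against exp(-alpha * L)
```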
§.§ Resonant DFG spectroscopy For resonant DFG spectroscopy, we use the same setup as for methane OPO spectroscopy but instead use a gas cell of ammonia (1.5 cm length, 740 Torr, purchased from Wavelength References). λ_p is determined with a wavemeter before scanning λ_s from 1540–1620 nm, which is assumed to vary linearly across the scan range. Generated experimental data, consisting of discrete peaks of generated MIR output (see Fig. <ref>e), is processed by fitting Lorentzians to each peak, then calculating the transmission for the k-th peak T(λ_k) based on the ratio of peak areas from the sample path and reference path. ∼10 scans are taken, then averaged, to improve SNR. Experimental absorbance data plotted in Fig. <ref>f is A=-log(T(λ_k)/T_bg) where T_bg = 0.7 to account for the difference in transmission between sample and reference paths. HITRAN data plotted in Fig. <ref>f is obtained by calculating the absorption coefficient α(λ) from the HITRAN database for ammonia at 740 Torr then plotting A = α(λ) L. § ACKNOWLEDGEMENTS We thank NTT Research for their financial and technical support. We thank the United States government for their support through the Department of Energy Grant No. DE-AC02-76SF00515, the Defense Advanced Research Projects Agency (DARPA) LUMOS program (Grant No. HR0011-20-2-0046), the DARPA Young Faculty Award (YFA, Grant No. D19AP00040), the U.S. Department of Energy (Grant No. DE-AC02-76SF00515) and Q-NEXT NQI Center, and the U.S. Air Force Office of Scientific Research MURI grant (Grant No. FA9550-17-1-0002). A.Y.H. acknowledges NSF GRFP, Grant. No. 2146755. H.S.S. acknowledges support from the Urbanek Family Fellowship, and V.A. was partially supported by the Stanford Q-Farm Bloch Fellowship Program and the Max Planck Institute in Erlangen. This work was also performed at the Stanford Nano Shared Facilities (SNSF), supported by the National Science Foundation under award ECCS-2026822. We also acknowledge the Q-NEXT DOE NQI Center and the David and Lucille Packard Fellowship for their support. We thank Leo Hollberg for many useful discussions and lending the cells for the gas spectroscopy experiment. § AUTHOR CONTRIBUTIONS A.Y.H., H.S., C.L., V.A., and A.H.S-N. designed the device. A.Y.H., H.S, and T.P. fabricated the device. A.Y.H, H.S., T.P.M., and T.P. developed fabrication procedures together. A.Y.H. measured the device. A.Y.H. analyzed data with support from H.S., M.J., and J.M. M.M.F. and A.H.S.-N. advised the project and provided experimental/theoretical support. A.Y.H. drafted the manuscript with input from all the authors.
http://arxiv.org/abs/2307.04447v1
20230710095733
Combinatorial Nullstellensatz and Turán numbers of complete $r$-partite $r$-uniform hypergraphs
[ "Alexey Gordeev" ]
math.CO
[ "math.CO" ]
Combinatorial Nullstellensatz and Turán numbers of complete r-partite r-uniform hypergraphs Alexey Gordeev =========================================================================================== In this note we describe how Lasoń's generalization of Alon's Combinatorial Nullstellensatz gives a framework for constructing lower bounds on the Turán number ex(n, K^(r)_{s_1,…,s_r}) of the complete r-partite r-uniform hypergraph K^(r)_{s_1,…,s_r}. To illustrate the potential of this method, we give a short and simple explicit construction for the Erdős box problem, showing that ex(n, K^(r)_{2,…,2}) = Ω(n^{r - 1/r}), which asymptotically matches the best known bounds when r ≤ 4. § INTRODUCTION §.§ Turán numbers of complete r-partite r-uniform hypergraphs A hypergraph H = (V, E) consists of a set of vertices V and a set of edges E, each edge being some subset of V. A hypergraph is r-uniform if each edge in it contains exactly r vertices. An r-uniform hypergraph is r-partite if its set of vertices can be represented as a disjoint union of r parts with every edge containing one vertex from each part. The complete r-partite r-uniform hypergraph with parts of sizes s_1, …, s_r contains all s_1 ⋯ s_r possible edges and is denoted by K^(r)_{s_1, …, s_r}. Let H be an r-uniform hypergraph. The Turán number ex(n, H) is the maximum number of edges in an r-uniform hypergraph on n vertices containing no copies of H. A classical result of Erdős <cit.> implies that for s_1 ≤…≤ s_r, ex(n, K^(r)_{s_1, …, s_r}) = O( n^{r - 1/(s_1 ⋯ s_{r-1})} ). In <cit.>, Mubayi conjectured that bound (<ref>) is asymptotically tight. Recently, Pohoata and Zakharov <cit.> showed that this is true whenever s_1, …, s_r ≥ 2 and s_r ≥ ((r - 1)(s_1 ⋯ s_{r-1} - 1))! + 1, extending earlier results of Alon, Kollár, Rónyai and Szabó <cit.> and Ma, Yuan and Zhang <cit.>. Nevertheless, the conjecture remains open even in the special case ex(n, K^(r)_{2,…,2}), which is often referred to as the Erdős box problem. The best known lower bound is due to Conlon, Pohoata and Zakharov <cit.>, who showed that for any r ≥ 2, ex(n, K^(r)_{2,…,2}) = Ω( n^{r - ⌈ 2^{r-1}/r ⌉^{-1}} ). §.§ Generalized Combinatorial Nullstellensatz Let 𝔽 be an arbitrary field, and let f ∈𝔽[x_1,…,x_r] be a polynomial in r variables. A monomial x_1^d_1⋯ x_r^d_r is a monomial of a polynomial f if the coefficient of x_1^d_1⋯ x_r^d_r in f is non-zero. Recall the famous Combinatorial Nullstellensatz by Alon (see Theorem 1.2 in <cit.>). Let x_1^d_1⋯ x_r^d_r be a monomial of f, and let deg f ≤ d_1 + … + d_r. Then for any subsets A_1, …, A_r of 𝔽 with sizes |A_i| ≥ d_i + 1, f does not vanish on A_1×…× A_r, i.e. f(a_1,…,a_r) ≠ 0 for some a_i ∈ A_i. A monomial x_1^d_1⋯ x_r^d_r of f is maximal if it does not divide any other monomial of f. Lasoń showed the following generalization of Combinatorial Nullstellensatz (see Theorem 2 in <cit.>). It should be mentioned that an even stronger theorem was proved by Schauz in 2008 (see Theorem 3.2(ii) in <cit.>). Let x_1^d_1⋯ x_r^d_r be a maximal monomial of f. Then for any subsets A_1, …, A_r of 𝔽 with sizes |A_i| ≥ d_i + 1, f does not vanish on A_1×…× A_r, i.e. f(a_1,…,a_r) ≠ 0 for some a_i ∈ A_i. Notably, in most applications of Combinatorial Nullstellensatz the condition deg f ≤ d_1 + … + d_r from Theorem <ref> turns out to be sufficient and thus the more general Theorem <ref> is not needed. Below we give a rare example of an application in which the full power of Theorem <ref> is essential.
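To see concretely why the maximal-monomial condition goes beyond the degree condition, consider the toy polynomial f(x, y) = xy + x^3 over the rationals: xy is a maximal monomial, yet deg f = 3 > 1 + 1, so Lasoń's theorem applies to 2 × 2 grids while Alon's degree condition fails for this monomial. The short brute-force check below (a sketch added purely for illustration, not part of the original note) confirms that f never vanishes identically on any 2 × 2 grid of small integers.

```python
from itertools import combinations, product

def f(x, y):
    return x * y + x ** 3  # xy is a maximal monomial; deg f = 3

ground = range(-4, 5)
never_vanishes = all(
    any(f(a1, a2) != 0 for a1, a2 in product(A1, A2))
    for A1 in combinations(ground, 2)
    for A2 in combinations(ground, 2)
)
print(never_vanishes)  # True, as the maximal-monomial theorem predicts
```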
§ THE FRAMEWORK For subsets B_1, …, B_r of a field 𝔽, denote the set of zeros of a polynomial f ∈𝔽[x_1, …, x_r] on B_1 ×…× B_r as Z(f; B_1,…, B_r) := { (a_1, …, a_r) ∈ B_1 ×…× B_r | f(a_1, …, a_r) = 0 }. In the case B_1 = … = B_r = B we will write Z(f; B, r) instead of Z(f; B_1, …, B_r). The set Z(f; B_1,…, B_r) can be viewed as the set of edges of an r-partite r-uniform hypergraph H(f; B_1, …, B_r) with parts B_1, …, B_r. Our key observation is the following lemma which immediately follows from Theorem <ref>. Let x_1^d_1⋯ x_r^d_r be a maximal monomial of f. Then for any subsets B_1, …, B_r of 𝔽, the hypergraph H(f; B_1,…, B_r) is free of copies of K^(r)_{d_1 + 1, …, d_r + 1}. This lemma gives us a new tool for constructing lower bounds on ex(n, K^(r)_{s_1, …, s_r}). In Section <ref> we give a simple example of such a construction for ex(n, K^(r)_{2, …, 2}) which asymptotically matches (<ref>) when r ≤ 4. Combining Lemma <ref> with (<ref>), we also get the following Schwartz–Zippel type corollary, which may be of independent interest. Let x_1^d_1⋯ x_r^d_r be a maximal monomial of f, where d_1 ≤…≤ d_r. Then for any subsets B_1, …, B_r of 𝔽 with sizes |B_i| = n, | Z(f; B_1, …, B_r) | = O( n^{r - 1/((d_1 + 1) ⋯ (d_{r-1} + 1))} ). The described framework was also recently discussed in an article by Rote (see Section 8 in <cit.>). § CONSTRUCTION Here 𝔽_{p^r} is the finite field of size p^r and 𝔽_{p^r}^* = 𝔽_{p^r}∖{0}. Let p be a prime number, and let f ∈𝔽_{p^r}[x_1, …, x_r] be the following polynomial: f(x_1, …, x_r) = x_1 ⋯ x_r + ∑_{i = 1}^{r}∏_{j = 1}^{r - 1} x_{i + j}^{p^r - p^j}, where indices are interpreted modulo r, i.e. x_{r + 1} = x_1, x_{r + 2} = x_2, etc. Then |Z(f; 𝔽_{p^r}^*, r)| = p^{r - 1}( p^r - 1 )^{r - 1}. Note that for any a_1, …, a_r ∈𝔽_{p^r}^* we have a_i^{p^r} = a_i, so f(a_1, …, a_r) = a_1 ⋯ a_r ( 1 + ∑_{i = 1}^{r}∏_{j = 0}^{r - 1} a_{i + j}^{-p^j}) = a_1 ⋯ a_r ( 1 + Tr( a_1^{-1} a_2^{-p}⋯ a_r^{-p^{r - 1}}) ), where Tr(a) = a + a^p + … + a^{p^{r - 1}} is the trace of the field extension 𝔽_{p^r} / 𝔽_p. Now let us fix a_2, …, a_r ∈𝔽_{p^r}^*. As a_1 runs over all values of 𝔽_{p^r}^*, so does a_1^{-1} a_2^{-p}⋯ a_r^{-p^{r - 1}}. There are exactly p^{r - 1} elements a ∈𝔽_{p^r}^* for which Tr(a) = -1, i.e. for any fixed a_2, …, a_r there are exactly p^{r - 1} values of a_1 for which f(a_1, …, a_r) = 0. For any r ≥ 2, ex(n, K^(r)_{2, …, 2}) = Ω( n^{r - 1/r}). Note that x_1 ⋯ x_r is a maximal monomial of the polynomial f from Lemma <ref>. Thus, due to Lemma <ref>, a hypergraph H_p = H(f; 𝔽_{p^r}^*, r) with r(p^r - 1) vertices and p^{r - 1}( p^r - 1 )^{r - 1} edges is free of copies of K^(r)_{2, …, 2} for every prime p, which gives the desired bound. § CONCLUDING REMARKS The construction from Section <ref> in the case r = 3 is structurally similar to the one given by Katz, Krop and Maggioni in <cit.>. Their construction can be generalized to higher dimensions giving an alternative proof of Theorem <ref> (private communication with Cosmin Pohoata; see also Proposition 11.2 in <cit.>). Our approach gives a simpler construction and a much shorter proof. Motivated by the ideas discussed in Section <ref>, Rote posed a problem (see Problem 1 in <cit.>), equivalent to asking how large the set Z(f; B_1, B_2) can be for a polynomial of the form f(x, y) = xy + P(x) + Q(y) and sets B_1, B_2 of size n each. Lemma <ref> answers this question asymptotically if the sets B_1 and B_2 are allowed to be taken from the finite field 𝔽_{p^2}. § ACKNOWLEDGEMENTS I would like to thank Danila Cherkashin and Fedor Petrov for helpful discussions, and Günter Rote for useful comments on a draft of this note.
http://arxiv.org/abs/2307.05953v1
20230712065007
Reward Selection with Noisy Observations
[ "Kamyar Azizzadenesheli", "Trung Dang", "Aranyak Mehta", "Alexandros Psomas", "Qian Zhang" ]
cs.GT
[ "cs.GT" ]
Ellipsoid Fitting Up to a Constant Jun-Ting HsiehCarnegie Mellon University. . Supported by NSF CAREER Award #2047933. Pravesh K. KothariCarnegie Mellon University, . Supported by NSF CAREER Award #2047933, Alfred P. Sloan Fellowship and a Google Research Scholar Award. Aaron PotechinThe University of Chicago, . Supported in part by NSF grant CCF-2008920. Jeff XuCarnegie Mellon University, . Supported in part by NSF CAREER Award #2047933, and a Cylab Presidential Fellowship. August 12, 2023 ====================================================================================================================================================================================================================================================================================================================================================================================================================================================================== We study a fundamental problem in optimization under uncertainty. There are n boxes; each box i contains a hidden reward x_i. Rewards are drawn i.i.d. from an unknown distribution . For each box i, we see y_i, an unbiased estimate of its reward, which is drawn from a Normal distribution with known standard deviation σ_i (and an unknown mean x_i). Our task is to select a single box, with the goal of maximizing our reward. This problem captures a wide range of applications, e.g. ad auctions, where the hidden reward is the click-through rate of an ad. Previous work in this model <cit.> proves that the naive policy, which selects the box with the largest estimate y_i, is suboptimal, and suggests a linear policy, which selects the box i with the largest y_i - c ·σ_i, for some c > 0. However, no formal guarantees are given about the performance of either policy (e.g., whether their expected reward is within some factor of the optimal policy's reward). In this work, we prove that both the naive policy and the linear policy are arbitrarily bad compared to the optimal policy, even when is well-behaved, e.g. has monotone hazard rate (MHR), and even under a “small tail” condition, which requires that not too many boxes have arbitrarily large noise. On the flip side, we propose a simple threshold policy that gives a constant approximation to the reward of a prophet (who knows the realized values x_1, …, x_n) under the same “small tail” condition. We prove that when this condition is not satisfied, even an optimal clairvoyant policy (that knows ) cannot get a constant approximation to the prophet, even for MHR distributions, implying that our threshold policy is optimal against the prophet benchmark, up to constants. En route to proving our results, we show a strong concentration result for the maximum of n i.i.d. samples from an MHR random variable that might be of independent interest. § INTRODUCTION Suppose that you are given n boxes, with box i containing a hidden reward x_i. Rewards are drawn independently and identically distributed (i.i.d.) from an unknown distribution . For each box i, you see an unbiased estimate y_i of its reward: nature draws noise ϵ_i ∼(0,σ_i) with known σ_i, and you observe y_i = x_i + ϵ_i. Your goal is to select the box with the highest reward x_i. This fundamental problem, originally introduced by Bax et al. <cit.>, captures a wide range of applications. The original motivation of Bax et al. 
<cit.> is ad auctions, where one can think of the hidden reward x_i as the click-through rate of an ad, and the observed value y_i as an estimation of the click-through rate produced by a machine learning algorithm; these algorithms typically have different amounts of data, and therefore different variance in the error, across different populations. If the distribution is known, the optimal policy simply calculates the posterior expectation R_i(y_i) = [ X_i | Y_i = y_i ] for each box i and selects the box with the largest R_i(y_i). However, when is not known, this calculation is, of course, not possible. Furthermore, if ϵ_is were drawn i.i.d. (that is, if all σ_is were equal), it should be intuitive that , the policy that picks the box with the largest observation y_i, is optimal, since R_i(y_i) = [ X_i | X_i + ϵ_i = y_i ] “should” be a monotone non-decreasing function of y_i.[As we show in one of our technical lemmas, this happens to be true when ϵ_i is drawn from (0,σ), but, perhaps surprisingly, this is not true for an arbitrary noise distribution. To see this, consider the case that X_i is uniform in the set { -1 , +1 } and ϵ_i is uniform in the set { -10, +10 }. In this case, [ X_i | X_i + ϵ_i = -9 ] = 1 > -1 = [ X_i | X_i + ϵ_i = 9 ].] Bax et al. <cit.> show that is suboptimal when the σ_is are not equal. Specifically, they consider a family of linear policies. A linear policy with parameter c selects the box with the largest y_i - c ·σ_i; for c=0 we recover . Bax et al. <cit.> show that the derivative of the expected reward is strictly positive at c=0; that is, the policy is not optimal, even within the family of linear policies. However, and this brings us to our interest here, no other formal guarantees are given. Is the best linear policy, or even the policy, a good (e.g. constant) approximation to the optimal policy? Are there better policies, outside the family of linear policies? §.§ Our contribution Without loss of generality, we assume that = (σ_1, …, σ_n) satisfies σ_1 ≤ ... ≤σ_n. Naturally, if σ_i is extremely large for almost all i, no policy, including a clairvoyant policy that knows , can hope to achieve any non-trivial performance guarantees (e.g., perform better than picking a random box). We start by making this intuition precise. Informally, given , n and c, has large noise if σ_n^c is at least Ω̃( [_n^c:n^c] ).[Recall that _k:n is the k-th lowest of n i.i.d. samples from .] Under this condition, we show that, even for the case of a distribution with monotone hazard rate (MHR),[A distribution has monotone hazard rate (MHR) if 1-F(x)/f(x) is a non-increasing function.] an optimal clairvoyant policy (which knows ) cannot compete with [_n:n], the expected reward of a prophet that knows the rewards x_1, …, x_n. Despite the fact that the prophet is a very strong benchmark, we note that, as we see later in the paper, our policies compete against the prophet, in similarly “noisy” environments. We further show that, assuming a bit more noise, σ_cn∈Ω̃([_cn:cn]) for cn ∈ O(1), an optimal clairvoyant policy has reward comparable to the reward of picking a box uniformly at random. See <Ref> for the precise definitions, and <Ref> for the formal statements and proofs. We henceforth assume that the environment has “small noise.” We proceed to analyze the performance of known policies under this assumption. 
In <Ref> we study the policy, which selects the box with the highest reward, and show that not only is it suboptimal, but that it can be made suboptimal for every distribution (<Ref>). Specifically, given an arbitrary distribution , there exist choices for n and (satisfying the aforementioned “small noise” assumption) such that the optimal (non-clairvoyant) policy has reward at least [ _n:n]/2, while the policy has a reward of at most 4[]. Our construction has a small number, Θ(log(n)), boxes with large noise, with the remaining boxes having no noise. The intuition is that, with high probability, a random large noise box is chosen by , while picking among the no noise boxes yields a reward of almost [_n:n]. Selecting such that [_n:n] ∈Θ( n[] ), we have that provides only a trivial approximation to the optimal reward. In <Ref> we study linear policies. Surprisingly, this family of policies can also be made suboptimal in a similarly strong way. Given an arbitrary MHR distribution , there exist choices for n and (again, satisfying the aforementioned “small noise” assumption) such that the optimal policy has reward at least a constant times [ _n:n], but no linear policy can get expected reward more than a constant times [] (<Ref>). By letting be the exponential distribution, we get a lower bound of Ω(log(n)) for the approximation ratio of linear policies. Constructing a counter-example for linear policies is more delicate. First, observe that on all 's and realizations y's, every linear policy's performance is at most the best _c policy, which discounts all boxes by a weight c tailored to and y. For a fixed and small c, a construction similar to the one for works. For a fixed and large c, _c “over-discounts”, and therefore a construction with many small noise boxes (that are not picked with high probability) works. We show how to combine these two ideas into a single construction where all _c policies fail with high probability, and then use a union bound to relate to the best linear policy. Combined, Theorems <ref> and <ref> show that, even if we know that belongs to the (arguably very well-behaved) family of monotone hazard rate distributions, we need a new approach. En route to showing <Ref>, we prove a lemma about the concentration of the maximum of n i.i.d. samples from an MHR distribution which might be of independent interest. It is known that order-statistics of MHR distributions also satisfy the MHR condition <cit.>. Furthermore, MHR distributions exceed their mean with probability at least 1/e. Therefore, [ _n:n≥[_n:n] ] ≥ 1/e. Here, we show that _n:n does not exceed twice its mean with high probability (<Ref>): [ _n:n≤ 2[_n:n] ] ≥ 1 - 1/n^3/5, implying a very small tail for _n:n. The proof of this result is based on a new lemma (which again might be of independent interest) which states that the (1-1/n)-quantile value of an MHR distribution is within a constant of [_n:n]. At a high level, the downfall of both and linear policies is that they treat very different types of boxes in a virtually identical manner: does not take in the noise information at all, while linear policies utilize this information in a very crude way, and discount boxes with massively different order of noises using the same weight. Intuitively, a good policy should identify large noise boxes and ignore them. However, a non-trivial obstacle, is that a noise being “large” is relative to , which is unknown. In <Ref> we propose our new policy, that circumvents this issue. 
The policy is quite simple: pick α∼ U[0,1], and run on the α fraction of the boxes with the lowest noise (i.e. boxes 1 through α n). Therefore, if, e.g. a constant fraction of the boxes has small noise, we have a constant probability of keeping a constant fraction of them. In more detail, if a c fraction of the boxes has low noise, and specifically, if σ_cn≤[_cn:cn]/5√(2ln(n)) (arguably, a very permissive bound), then our policy gives a c^2/20 approximation to [_n:n], the expected reward of a prophet. Clearly, if c is a constant, we get a constant approximation. Interestingly, our policy provides the same guarantees even in a setting with a lot less information, where the σ_is are unknown, and only their order is available to the policy. For the case of MHR distributions we further improve this result. The policy itself has a slight twist: pick α∼ U[0,1], and run on the n^α boxes with the lowest noise (i.e. boxes 1 through n^α). This time, if n^c boxes have low noise, and specifically if σ_n^c≤[_n^c:n^c]/18√(2ln(n^c)), this version of our policy guarantees a c^2/576 approximation to the prophet. For a constant c, our approximation to the prophet is again a constant, and we only require n^c boxes with bounded noise. §.§ Related Work <cit.>, whose contribution we already discussed, and <cit.>, are the two works most closely related to ours. <cit.> study a very similar model to ours, where the reward x_i for each box i is not stochastic, but adversarial, and the noise distribution is not (0,σ_i), but an arbitrary (known) zero-mean distribution A_i. <cit.> are interested in finding policies with small worst-case regret, defined as the difference between the maximum reward and the expected performance of the policy, where the expectation is over only the random noise. A policy is then a constant approximation if its regret is within a constant of the optimal regret; in contrast, for us, a policy is a constant approximation if its expected reward is within a constant of the expected reward of the optimal policy/a prophet. <cit.> show that in their model as well, the naive policy which picks the box with the highest observation y_i is arbitrarily bad (in terms of regret) even in the n=2 case. Similar to our results here, <cit.> show that there is a function θ from random variables to positive reals, such that picking the box with the largest y_i - θ(A_i) is a constant approximation (in terms of regret) to the optimal policy. Note that, in the case of our policy, this function is especially simple: θ(A_i) = 0 if σ_i is small, otherwise θ(A_i) is infinite. A phenomenon related to the naive policy being suboptimal, both in the model studied here/the model of <cit.>, as well as the model of <cit.>, is the winner's curse <cit.>, where multiple bidders, with the same ex-post value for an item, estimate this value independently and submit bids based on those estimates; the winner tends to have a bid that's an overestimate of the true value. Our problem is also related to robust optimization which studies optimization in which we seek solutions that are robust with respect to the realization of uncertainty; see <cit.> for a survey. Finally, there has been a lot of work on the related problem of finding the maximum (or the top k elements) given noisy information, see, e.g., <cit.>. Many of our theorems can be strengthened by additionally assuming that is MHR. MHR distributions are known to satisfy a number of interesting properties, see <cit.> for a textbook. 
In algorithmic economics, such properties have been exploited to enable strong positive results for a number of problems, including the sample complexity of revenue maximization <cit.>, the competition complexity of dynamic auctions <cit.>, and the design of optimal and approximately optimal  <cit.>. § PRELIMINARIES There are n boxes. The i-th box contains a reward x_i. These rewards are drawn i.i.d. from an unknown distribution with a cumulative distribution function F and density function f. We assume that is supported on [0,∞). Rewards are not observed by our algorithm. Instead, nature draws unbiased estimates, y_1, …, y_n, where y_i is drawn from a normal distribution with (an unknown) mean x_i and a known standard deviation σ_i. We refer to y_i as the i-th observation. We often write X_i and Y_i for the random variable for the i-th reward and i-th observation, respectively. Note that Y_i can be equivalently thought as Y_i = X_i + ϵ_i, where the noise ϵ_i is drawn from (0,σ_i). Our goal is to select a single box i with the goal of maximizing the (expected) realized reward. Policies and expected rewards Formally, a policy A maps the public information, the pair (, ), = (σ_1, …, σ_n) and = (y_1, …, y_n), to a distribution over boxes. We write R_A(, , ) for the expected reward of a policy A under true reward distribution and observations = (y_1, …, y_n), where the standard deviation of the noise is according to = (σ_1, …, σ_n), and where this expectation is with respect to the randomness of A and the randomness in the rewards. In order to evaluate a policy under a fixed reward distribution we need to take an additional expectation over the random observations = (y_1, …, y_n). We overload notation and write R_A(, ) = _[ R_A(, , ) ] for the expected reward of a policy A under true reward distribution , where the standard deviation of the noise is according to = (σ_1, …, σ_n). Previous policies and benchmarks <cit.> consider two simple policies. The policy always selects the box i with the largest observation y_i. A linear policy _γ, parameterized by a function γ : ^n ×^n →, chooses the box i which maximizes y_i - γ(,) ·σ_i. We use the following two policies as useful benchmarks: the optimal policy, and the prophet. The optimal policy for a distribution , _, selects the box i with maximum [ X_i | Y_i = y_i ]. Its expected reward in outcome is precisely max_i [ X_i | Y_i = y_i ]. That is, R__(, ) = _[ max_i ∈ [n][ X_i | Y_i = y_i ] ]. Finally, the (expected) reward of a prophet who knows x_1, …, x_n, for a distribution , is equal to [ _n:n ], the expected maximum of n i.i.d. draws from . Formalizing “small” and “large” noise environments Clearly, if σ_i is large for almost all i ∈ [n], then no policy can hope to get a non-trivial guarantee. Therefore, we intuitively need a condition that captures the fact that we need small noise for enough boxes. In the following couple of definitions, we formalize precisely what we mean by “small” and “enough”. For any distribution , any n and any c ∈ (0,1], let n,c be the set of vectors ∈_+^n where at least cn values in are at most [_cn:cn]/5√(2ln n). Formally, n,c = {∈_+^n |σ_1 ≤…≤σ_n and σ_cn≤[_cn:cn]/5√(2ln n)}. For the case of MHR distributions, we only need a weaker condition to guarantee strong positive results. We state this condition in <Ref>. For any MHR distribution and any n, let n,c be the set of vectors ∈_+^n where at least n^c values in are at most [_n^c:n^c]/18√(2c ln n). Formally, n,c = {∈_+^n |σ_1 ≤…≤σ_n and σ_n^c≤[_n^c:n^c]/18√(2c ln n)}. 
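To make the first small-noise condition above concrete, the following sketch checks it numerically for a given noise profile. It assumes sample access to the reward distribution purely for illustration (the policies studied here do not have this access), and the exponential instance at the bottom is hypothetical.

```python
import numpy as np

def expected_max(sampler, k, trials=20000, seed=0):
    """Monte Carlo estimate of E[D_{k:k}], the expected maximum of k i.i.d. samples."""
    rng = np.random.default_rng(seed)
    return np.mean([sampler(k, rng).max() for _ in range(trials)])

def satisfies_small_noise(sigma, c, sampler):
    """True if the cn-th smallest sigma is at most E[D_{cn:cn}] / (5 * sqrt(2 ln n))."""
    n = len(sigma)
    k = int(np.floor(c * n))
    if k < 1:
        return False
    threshold = expected_max(sampler, k) / (5.0 * np.sqrt(2.0 * np.log(n)))
    return np.sort(sigma)[k - 1] <= threshold

exponential = lambda k, rng: rng.exponential(1.0, size=k)
sigma = np.array([0.01] * 50 + [10.0] * 50)            # half the boxes have tiny noise
print(satisfies_small_noise(sigma, 0.5, exponential))  # True for this instance
```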
Ideally, we would like to, whenever ∉n,c or n,c, have strong negative results for, say, the optimal policy. We show such strong negative results for the optimal clairvoyant policy, even for MHR distributions, even under a condition close to n,c. On the negative side, the precise condition is not the complement of n,c, but we lose an extra √(c) factor. Under the following “medium noise” condition, we cannot hope to compete against the prophet (<Ref>). For any distribution , any n and any c ∈ (0, 1], let n,c be the set of vectors ∈_+^n where at most n^c values in is at most [_n^c:n^c]}/18 c√(2ln n). Formally, n,c = {∈_+^n |σ_1 ≤…≤σ_n and σ_n^c > [_n^c:n^c]}/18 c√(2ln n). Finally, under the following “large noise” condition, closer to the complement of n,c (with an extra √(ln(n)/ln(cn)) factor), we cannot hope to do better than picking a box uniformly at random (<Ref>). For any distribution , any n and any c ∈ (0,1], let n,c be the set of vectors ∈_+^n where at most cn values in are at most [_cn:cn] ·√(ln n)/ln(cn). Formally, n,c = {∈_+^n |σ_1 ≤…≤σ_n and σ_cn > [_cn:cn] ·√(ln n)/ln(cn)}. §.§ Technical Lemmas Here, we present some definitions and a few technical lemmas that will be useful throughout the paper. All missing proofs can be found in Appendix <ref>. We often use the following lemma (<Ref>) about the CDF of the standard normal distribution, and a lemma (<Ref>) about the relation between the expected maximum of a and b i.i.d. samples from an arbitrary distribution . We write _k:n for the k-th lowest order statistic out of n i.i.d. samples, that is, _1:n≤_2:n≤…≤_n:n. Throughout the paper, Φ(x) is the CDF of the standard normal distribution, and ϕ(x) is the PDF of the standard normal distribution. For all t > 0, we have 1 - 1/√(2 π)1/t e^-t^2/2≤Φ(t) ≤ 1 - 1/√(2 π)t/t^2+1 e^-t^2/2. Furthermore, this implies directly that for all t > 0, 1 - ϕ(t)/t≤Φ(t) ≤ 1 - t ϕ(t)/t^2 + 1. For any distribution supported on [0, ∞) and for any two integers 1 ≤ a < b, we have [_a:a]/a≥[_b:b]/b. The following definitions will be crucial in describing our lower bounds. For a distribution , let α^()_m = inf{x | F(x) ≥ 1 - 1/m} be the (1 - 1/m)-th quantile of . For a distribution , let β^()_m = inf{x |[|≥ x] ·[≥ x] ≤[]/m} be the smallest threshold such that the contribution to [] from values at least this threshold is at most []/m. Technical lemmas for MHR distributions Here, we prove a technical lemma for the concentration of the maximum of n i.i.d. samples of an MHR distribution, that might be of independent interest. It is known that the maximum of i.i.d. draws from an MHR distribution is also MHR <cit.>. This implies that the probability that the maximum exceeds its mean, [ _n:n≥[_n:n] ], is at least 1/e. In <Ref> we show that, in fact, this maximum concentrates around its mean: it does not exceed twice its mean with high probability. We note that a related, but incomparable, statement is given by <cit.>, who show that at least a (1-ϵ)-fraction of [max_i X_i] is contributed by values no larger than [max_i X_i] ·log(1/ϵ), where the X_is are (possibly not identical) MHR distributions. For any MHR distribution and any n ≥ 4, we have [_n:n < 2 ·[_n:n]] ≥ 1 - 1/n^3/5. <Ref> is an immediate consequence of the following two lemmas. The first is shown in <cit.>; the second we prove in <Ref>. If the distribution of a random variable X satisfies MHR, m ≥ 1 and d ≥ 1, then d α^(X)_m ≥α^(X)_m^d. For any MHR distribution and any n ≥ 4, we have 1/3·[_n:n] ≤α^()_n ≤5/4·[_n:n]. 
Together the lemmas give that α^()_n^8/5≤^(<Ref>)8/5α^()_n ≤^(<Ref>) 2 [_n:n]. Therefore, [_n:n≤ 2 [_n:n]] ≥[_n:n≤α^()_n^8/5] = (1 - 1/n^8/5)^n ≥^(Bernoulli's inequality) 1 - 1/n^3/5.

§ NEGATIVE RESULTS FOR LARGE NOISE ENVIRONMENTS

Before discussing small noise environments, we show strong lower bounds for the optimal clairvoyant policy (an optimal policy that knows the reward distribution) in large noise environments, even under the assumption that the distribution is MHR. All missing proofs can be found in <Ref>. Starting with “medium” noise, <Ref> shows that, for an MHR distribution D, when σ∈n,c, even an optimal clairvoyant policy cannot obtain more than an O(√(c)) fraction of the prophet's reward. First, as we discussed in <Ref>, note that n,c is almost, but not exactly, the complement of n,c; the complement of n,c includes vectors σ where at least n^c values in σ are at most [_n^c:n^c]/18 c√(2ln n), while n,c is characterized by σ's containing at least n^c values upper bounded by [_n^c:n^c]/18√(2c ln n), implying that n,c is a strict subset of the complement of n,c since c ≤ 1. This leaves a gap (arguably insignificant, but a gap nonetheless) in our understanding. On the flip side, our negative result holds against the (well-behaved) class of MHR distributions, even against the strong benchmark of the optimal clairvoyant policy.

There exists an MHR distribution where [_k:k] ∈ω([D]) for k ∈ω(1), such that for all n ≥ n_0, for some constant n_0, for all c ∈ [1/400 √(ln n), 1], and all σ∈n, c, we have R__(, ) ∈ O ( √(c)·[_n:n] ).

One way to interpret <Ref> is that, for any desired constant approximation α, for all large enough n, one can select a small enough c and a σ that satisfies the “medium” noise condition (noting that this condition also depends on c), such that the optimal clairvoyant policy does not achieve an α approximation. We include the fact that [_k:k] ∈ω([D]) to highlight that the distribution is not trivial; for example, it is not the case that the expectation is already within a constant of the expected maximum. The distribution that witnesses <Ref> is the standard half-normal distribution = |(0, 1)|. We start by proving that this distribution is MHR and bounding its expected maximum value.

= |(0, 1)| is MHR, [] = √(2/π), and 4/5·√(ln n)≤[_n:n] ≤ 3 √(2)·√(ln n) for n ≥ 8.

Since order statistics are preserved under affine transformations, an immediate corollary is the following.

For all σ > 0, 4/5·σ√(ln n)≤[|(0, σ^2)|_n:n] ≤ 3 √(2)·σ√(ln n) for n ≥ 8.

Towards bounding the optimal policy, we can compute the exact expression for [X_i | Y_i = y_i].

Given Y_i = X_i + ϵ_i where X_i ∼ and ϵ_i ∼(0, σ_i^2), we have [X_i | Y_i = y_i] = y_i/σ_i^2 + 1 + ϕ(-y_i/σ_i √(σ_i^2 + 1))/1 - Φ(-y_i/σ_i √(σ_i^2 + 1))·σ_i/√(σ_i^2 + 1).

Unfortunately, while this form is exact, it is not easy to work with. We instead consider the following upper bound on [X_i | Y_i = y_i].

Let U_σ(y) = √(2/π) + max{0, y/σ^2+1}; then [X_i | Y_i = y_i] ≤ U_σ_i(y_i) for all σ_i and y_i.

We first consider the case where y_i ≥ 0. In this case, U_σ_i(y_i) = √(2/π) + y_i/σ^2_i+1. Observe that ϕ(x) ≤1/√(2 π) for all x, 1 - Φ(x) ≥1/2 for all x ≤ 0, and σ_i/√(σ_i^2 + 1)≤ 1 for all σ_i ≥ 0. Therefore, [X_i | Y_i = y_i] = y_i/σ_i^2 + 1 + ϕ(-y_i/σ_i √(σ_i^2 + 1))/1 - Φ(-y_i/σ_i √(σ_i^2 + 1))·σ_i/√(σ_i^2 + 1)≤y_i/σ_i^2 + 1 + √(2/π) = U_σ_i(y_i). If y_i < 0, we use the property that [X_i | Y_i = y_i] ≤[X_i | Y_i = 0] (this is due to the monotonicity of [X_i | Y_i = y_i]; see <Ref>): [X_i | Y_i = y_i] ≤[X_i | Y_i = 0] ≤ U_σ_i(0) = U_σ_i(y_i).
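As a quick numerical sanity check of the two lemmas above (our own illustration, not part of the proofs), the closed-form posterior mean for a half-normal reward can be compared against a crude Monte Carlo estimate and against the upper bound U_σ(y). The window width, sample sizes, and test points below are arbitrary choices.

```python
import numpy as np
from scipy.stats import norm

def posterior_mean(y, sigma):
    """Closed form for E[X | Y=y] when X ~ |N(0,1)| and Y = X + N(0, sigma^2), as in the lemma above."""
    mu = y / (sigma**2 + 1)
    s = sigma / np.sqrt(sigma**2 + 1)
    a = -y / (sigma * np.sqrt(sigma**2 + 1))
    return mu + s * norm.pdf(a) / (1.0 - norm.cdf(a))

def upper_bound(y, sigma):
    """U_sigma(y) = sqrt(2/pi) + max(0, y / (sigma^2 + 1))."""
    return np.sqrt(2 / np.pi) + max(0.0, y / (sigma**2 + 1))

def mc_posterior_mean(y, sigma, m=2_000_000, window=0.02, seed=1):
    """Crude Monte Carlo estimate of E[X | Y close to y] via rejection sampling."""
    rng = np.random.default_rng(seed)
    x = np.abs(rng.normal(size=m))
    obs = x + rng.normal(0.0, sigma, size=m)
    return x[np.abs(obs - y) < window].mean()

if __name__ == "__main__":
    for y, sigma in [(0.5, 0.5), (2.0, 1.0), (-1.0, 2.0)]:
        print(f"y={y:5.1f} sigma={sigma:3.1f}  closed form={posterior_mean(y, sigma):.3f}  "
              f"Monte Carlo={mc_posterior_mean(y, sigma):.3f}  U_sigma(y)={upper_bound(y, sigma):.3f}")
```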
We are now ready to prove <Ref>. Let = |(0, 1)|, and consider = ∈n, c where, without loss of generality, we have σ_1 ≤σ_2 ≤…≤σ_n. This means that σ_n^c > [_n^c:n^c]}/18 c√(2ln n)≥^(<Ref>)4√(ln(n^c))/90c √(2ln n) = √(2)/45 √(c). Note that the expected reward of the optimal policy is at most the expected reward of the optimal policy that picks 2 boxes u and v where u ∈ [1, n^c - 1] and v ∈ [n^c, n], and then enjoys the rewards of both boxes. The expected reward from choosing box u is at most [max_i ∈ [1, n^c - 1] x_i] ≤[_n^c:n^c]. The expected reward from choosing box v is at most the expected reward of _, restricted to choosing boxes from n^c to n, which in turn is at most max_i ∈ [n^c, n][X_i | Y_i = y_i]. Therefore, the expected reward from box v is upper bounded by: _[max_i ∈ [n^c, n][X_i | Y_i = y_i]] ≤^(<Ref>)_[max_i ∈ [n^c, n] U_σ_i(y_i)] = [max_i ∈ [n^c, n] U_σ_i(X_i + (0, σ_i^2))] ≤^(U_σ_i(y) is monotone)[max_i ∈ [n^c, n] U_σ_i(X_i + |(0, σ_i^2)|)] = [max_i ∈ [n^c, n]√(%s/%s)2π + (X_i + |(0, σ_i^2)|)/σ_i^2 + 1] ≤[√(%s/%s)2π + max_i ∈ [n^c, n]X_i/σ^2_i + max_i ∈ [n^c, n]|(0, σ_i^2)|/σ_i^2] ≤√(%s/%s)2π + [ |(0, 1)|_n:n]/σ^2_n^c + [ max_i ∈ [n^c, n]|(0, 1/σ_i^2)| ] ≤^(<Ref>)√(%s/%s)2π + 3√(2)·√(ln n)/σ_n^c^2 + 1/σ_n^c· 3√(2)·√(ln n) ≤^( σ_n^c > √(2)/45 √(c))√(%s/%s)2π + 6075 · c √(ln n)/√(2) + 135 √(c ln n) ≤^(1/400 √(ln n)≤ c ≤ 1) 4497 √(c ln n) Combining, we have that R__(, ) ≤ 4500 √(c ln n) + [_n^c:n^c] ≤^(<Ref>) 4497 √(c ln n) + 3/√(2)√(c ln n)≤ 4500 √(c ln n). Using <Ref>, this is at most 4500√(c)·5/4[_n:n] ≤ 5625 √(c)·[_n:n]. The following theorem shows that, if the environment has “large” noise, then the optimal clairvoyant policy is comparable to the policy that picks a random box. There exists an MHR distribution where [_k:k] ∈ω([D]) for k ∈ω(1), such that for all n ≥ n_0, for some constant n_0, for all c ∈ [1/n,1], all and all ∈n, c, we have R__(, ) ∈ O( √(ln(cn))[] ). One way to interpret this theorem is that, given any constant target ratio α and any large enough n, one can pick c small enough (e.g. such that c n ∈ O(1)) and that satisfies the “large” noise condition, such that the optimal clairvoyant policy is not α times better than the policy that picks a box uniformly at random. The [_k:k] ∈ω([D]) is crucial in this theorem, since, for the theorem to have bite, it must be that √(ln(cn))[] is a lot smaller than [_n:n] the reward of a prophet. § NEGATIVE RESULTS FOR SMALL NOISE ENVIRONMENTS In this section, we show negative results for (<Ref>) and _γ (<Ref>). All missing proofs can be found in <Ref>. §.§ Warm-up: Negative results for For every distribution , all n ≥ 46, and all c ≤n-6ln(n)/n, there exists ^* = (σ^*_1, …, σ^*_n) such that ^* ∈n,c, and R_(, ^*) ≤8 []/[_n:n]· R__(, ^*) As an immediate consequence of <Ref>, by picking a distribution such that [_n:n] ∈Θ( n [] ), we get that only gives a (trivial) n approximation to the optimal policy. For all n ≥ 46 and c ≤n-6ln(n)/n, there exists and ^* = (σ^*_1, …, σ^*_n) such that R__(, ^*) ∈Ω( n ) R_(, ^*). Consider the distribution that takes the value 0 with probability 1-1/n, and the value n with probability 1/n. Then, [] = 1, and [ _n:n ] = n·( 1 - ( 1 - 1/n)^n ) ≥ n ·( 1- 1/e). Applying <Ref> implies the corollary. 
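The following sketch illustrates the corollary's two-point distribution numerically. It is only a preview of the construction described next: the number of noisy boxes and the noise magnitude used here are our own illustrative choices (picked to be of the same order as the proof's c_b and σ_b for this particular distribution), not the exact quantities used in the analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

def two_point_rewards(n, m):
    """m i.i.d. draws from D: value 0 w.p. 1 - 1/n, value n w.p. 1/n (so E[D] = 1)."""
    return n * (rng.random(m) < 1.0 / n)

def trial(n, num_noisy, sigma_big):
    # The first n - num_noisy boxes are observed exactly; the rest get very large Gaussian noise.
    x = two_point_rewards(n, n)
    sigma = np.zeros(n)
    sigma[n - num_noisy:] = sigma_big
    y = x + rng.normal(0.0, 1.0, n) * sigma
    best_exact = x[: n - num_noisy].max()   # reward of "pick the best exact box"
    largest_y = x[np.argmax(y)]             # reward of the largest-observation rule
    return best_exact, largest_y

if __name__ == "__main__":
    n, trials = 200, 20000
    num_noisy = int(6 * np.log(n))                 # c_b = 6 ln n noisy boxes, as in the construction below
    sigma_big = 6 * n * np.sqrt(np.log(n))         # roughly the order of magnitude of sigma_b here
    res = np.array([trial(n, num_noisy, sigma_big) for _ in range(trials)])
    print("E[D]                      ", 1.0)
    print("best exact box (avg)      ", res[:, 0].mean())   # order n
    print("largest-observation (avg) ", res[:, 1].mean())   # order 1
```

In this regime the largest-observation rule almost always chases a noisy box and earns roughly E[D], while even the best exact box earns a constant fraction of n, matching the Ω(n) separation in the corollary.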
Our construction of ^* works as follows, where c_b = 6 ln n and σ_b = 6 β^(_n:n)_n^2√(ln n): we set σ^*_i = 0 for i ∈ [1, n - c_b], and σ^*_i = σ_b for i ∈ [n - c_b + 1, n]. We refer to the boxes with σ^*_i = 0 as “exact” boxes, and to the boxes with σ^*_i = σ_b as “large noise” boxes. It is straightforward to confirm that ^* ∈n,c, for c ≤n-6ln(n)/n (according to <Ref>).

<Ref> will be an immediate consequence of two facts. First, intuitively, a large noise box will have large ϵ_i with high probability, and therefore be selected by , but its expected reward won't be much better than 4[] (<Ref>). On the other hand, even the policy that selects the best exact box gets reward at least 1/2[_n:n] (<Ref>).

For every distribution , for all n ≥ 46, R__(, ^*) ≥1/2[_n:n].

The optimal policy is at least as good as the policy that selects the box with the largest y_i among the exact boxes. Since x_i = y_i for these boxes, the reward of this policy is at least [_n-c_b:n-c_b] ≥^(<Ref>)n - c_b/n·[_n:n] = n - 6 ln n/n·[_n:n] ≥^(n ≥ 46)1/2[_n:n].

For every distribution , for all n ≥ 22, we have that R_(, ^*) ≤ 4 [].

At a high level, our proof works as follows. Consider the event ^* that X_i ≤β_n^2^(_n:n) for all boxes i. We prove that conditioned on ^*, gets an expected reward of at most 3 []. On the other hand, when ^* does not occur, even if performs as well as taking _n:n = max_i X_i, the contribution to the final expected reward is also upper bounded by []. The second fact can be shown directly from the definition of β_n^2^(_n:n). For the first fact, we first show that with high probability ϵ_i is not too small for some large noise box i (<Ref>); conditioned on ^* and this event, this implies that picks a large noise box. It is also true that with high probability ϵ_i is not too big, for any large noise box i (<Ref>). Additionally, conditioning on ϵ_i being not too big for every large noise box, we have that both the noise and the reward are not too big (and there is a box with large noise). We can then upper bound the reward of by the reward of a “clairvoyant” policy which knows the reward distribution, but is required to pick a large noise box; for this step, we need a technical lemma (<Ref>) that will also be useful in our lower bound for linear policies. In all other events, we upper bound by max_i X_i.

With probability at least 1 - 1/n^3, ϵ_i > β_n^2^(_n:n) for at least one large noise box i.

For any large noise box i, we have [ ϵ_i ≤ 12 β_n^2^(_n:n)ln n ] ≥ 1 - 1/n^2.

For any non-negative and bounded random variable Z supported on [0, V] and any σ > 2V, we have that [Z | Z + (0,σ^2) = y] ≤ 2 [Z] for all y ≤σ^2/2V.

We define the following events:
* Let _1 be the event that ϵ_j ≤ 12 β_n^2^(_n:n)ln n for all large noise boxes j.
* Let _1' be the event that Y_j ≤ 18 β_n^2^(_n:n)ln n for all large noise boxes j.
* Let _2 be the event that ϵ_j > β_n^2^(_n:n) for at least one large noise box j.
* Let _2' be the event that Y_j > β_n^2^(_n:n) for at least one large noise box j.
Recall that ^* is the event that X_i ≤β_n^2^(_n:n) for all i ∈ [n]. We first explore the relationship between these events. First, notice that if X_i ≤β_n^2^(_n:n) and ϵ_i ≤ 12 β_n^2^(_n:n)ln n, we have that Y_i = X_i + ϵ_i ≤β_n^2^(_n:n) + 12 β_n^2^(_n:n)ln n ≤ 18 β_n^2^(_n:n)ln n. Therefore, _1 ∩^* ⊆_1' ∩^*. Since X_i ≥ 0 for all i, _2' occurs every time _2 occurs, i.e. _2 ⊆_2', and thus _2 ∩^* ⊆_2' ∩^*. Therefore, _1 ∩_2 ∩^* ⊆_1' ∩_2' ∩^*, or _1 ∩_2∩^* ⊇_1' ∩_2'∩^*.
First, we will bound [max X_i |_1' ∩_2'∩^*] ·[_1' ∩_2'|^*], which is an upper bound on the contribution of outcomes in _1' ∩_2'∩^* to the overall expected reward of . Since the contribution of an event A to the expectation of a random variable ([X|A][A]) is smaller than the contribution of an event B to the expectation if A ⊆ B, we have [max_i X_i |_1' ∩_2'∩^*] ·[_1' ∩_2'|^*] ≤[max_i X_i |_1 ∩_2∩^*] ·[_1 ∩_2|^*]. By <Ref>, [_1] ≥( 1 - 1/n^2)^c_b≥ 1 - 6 ln n/n^2. By <Ref>, [_2] ≥ 1 - 1/n^3. Therefore, [ _1 ∩_2 ] ≥[ _1] + [_2] - 1 ≥ 1 - 6 ln n/n^2 + 1 - 1/n^3 - 1 ≥ 1 - 7 ln n/n^2. Observe that, _1 and _2 are independent of the X_is, while ^* only dependent on X_is. Therefore, _1 ∩_2 and ^* are independent, and hence [_1 ∩_2 |^*] = [_1 ∩_2] ≥ 1 - 7 ln n/n^2, or [_1 ∩_2|^*] ≤7 ln n/n^2. Additionally, [max_i X_i |_1 ∩_2∩^*] = [max_i X_i |^*], as _1 and _2 are events regarding ϵ_is and therefore is independent of X_i. Furthermore, [max_i X_i |^*] = [max_i X_i | X_i ≤β_n^2^(_n:n)] ≤[max_i X_i] = [_n:n]. Putting everything together, we have [max_i X_i |_1' ∩_2'∩^*] ·[_1' ∩_2'|^*] ≤[max_i X_i |_1 ∩_2∩^*] ·[_1 ∩_2|^*] ≤[_n:n] ·7 ln n/n^2 ≤^(<Ref>)7 ln n/n^2· n ·[] ≤^(n ≥ 22)[]. Second, we will upper bound the contribution of outcomes in _1' ∩_2' ∩^* to the expected reward of . Note that in such outcomes, must choose a large noise box, by the definition of _2' (Y_j > β_n^2^(_n:n) for some large noise box j) and ^* (X_i ≤β_n^2^(_n:n) for all i, and therefore the exact boxes). Therefore, in such an outcome, the reward of is at most the reward of an optimal policy which also knows , but is conditioned to pick a large noise box. When selecting box i such a policy makes expected reward [X_i | Y_i = y_i, ^*, _1', _2'] = [X_i | Y_i = y_i, ^*], where the equality holds since X_i is independent of Y_j, for j ≠ i, and _1' ∩_2' have less information about Y_i than { Y_i = y_i }. Let R_i(y_i) = [ X_i | Y_i = y_i, ^* ]. The reward of an optimal policy which knows and is conditioned to pick a large noise box is then _[ max_i ∈ [n - c_b+1,n] R_i(y_i) |_1' ∩_2' ∩^* ]. We prove that R_i(y_i) ≤ 2 [] for all y_i consistent with _1' ∩_2' ∩^*, which in turn implies an upper bound of 2[] for the expected reward of conditioned on in _1' ∩_2' ∩^*. Consider any large noise box i. Let X_i = X_i | X_i ≤β_n^2^(_n:n).[Equivalently, we can think of sampling from X_i by sampling from X_i, until X_i ≤β_n^2^(_n:n).] Then, conditioned on _1' ∩_2' ∩^*, for any realization of , we note that R_i(y_i) = [X_i | Y_i = y_i, ^*] = [X_i |X_i + (0, σ_i^2) = y_i]. Furthermore, as y_i is a realization conditioned on _1' ∩_2' ∩^*, we have y_i ≤ 18 β_n^2^(_n:n)ln n. Using <Ref> for V = β_n^2^(_n:n) and σ = σ_b = 6 β_n^2^(_n:n)√(ln n), we have [X_i |X_i + (0, σ_i^2) = y_i] ≤ 2 [X_i] ≤ 2 [X_i] = 2 []. Overall, conditioned on ^*, if _1' ∩_2' occurs, 's expected reward is at most 2 []; otherwise, the contribution to the expected reward is at most []. Thus, the reward of conditioned on ^* is at most [_1' ∩_2' |^*] · 2 [] + [max X_i |_1' ∩_2'∩^*] ·[_1' ∩_2'|^*] ≤ 2 [] + [] = 3 []. Finally, conditioned on ^* not happening, the best can do is _n:n = max_i X_i, whose expected reward is [_n:n|^*]. Therefore: R_(, ^*) ≤ 3 [] ·[^*] + [_n:n|^*] ·[^*] ≤ 3 [] + [_n:n|_n:n≥β_n^2^(_n:n)] ·[_n:n≥β_n^2^(_n:n)] ≤^(<Ref>) 3 [] + [ _n:n ]/n^2 ≤^(<Ref>) 3 [] + n ·[]/n^2 ≤ 4 []. The theorem is implied by Lemmas <ref> and <ref>. §.§ Negative results for Linear policies In this section, we give our negative results for linear policies. 
Recall that a linear policy parameterized γ: ^n ×^n → selects the box which maximizes y_i - γ(, ) ·σ_i. For every MHR distribution , for all n ≥ n_0, for some constant n_0, there exists ^* = (σ^*_1,…,σ^*_n), such that ^* ∈n,1/5626, and for every function γ : ^n ×^n →, we have R__γ(, ^*) ∈ O ( []/[_n:n]) R__(, ^*) . An immediate corollary is that linear policies give, in the worst case, a logarithmic approximation, even for MHR distributions, by considering to be the exponential distribution with parameter λ=1, for which [_n:n] = ∑_i=1^n 1/i≥ln n. Note also that [_n:n] ≤ln n + 1 for all MHR random variables (<Ref>; <Ref>), so the exponential distribution minimizes the ratio in <Ref> (up to constants). There exists , such that for all n ≥ n_0, for some constant n_0, there exists ^* ∈n,1/5626 such that R__(, ^*) ∈Ω( ln(n) ) · R__γ(, ^*). Our construction of ^* works as follows. It contains one box such that σ^* = 0, a small number of boxes with some small noise σ_s, and the remaining boxes have large noise σ_b: σ^*_i = 0 i = 1 σ_s i ∈ [2, c_s + 1] σ_b i ∈ [c_s + 2, n] where c_s = n^1/5626, σ_s = 37/9√(2)[_c_s:c_s]/√(ln n), and σ_b = 6 α_n^1/10000^(_n-c_s:n-c_s)√(ln n). We refer to the first box as the “exact box,” the boxes with σ^*_i = σ_s as “small noise” boxes, and the rest as “large noise” boxes. One can easily confirm that ^* ∈n,1/5626. We first lower bound the expected reward of the optimal policy. For every MHR distribution , all n ≥ n_0, for some constant n_0, R_(, ^*) ∈Ω( [_n:n] ). The optimal policy is at least as good as the policy that picks the box with the largest y_i among the small noise boxes. Consider the event that |ϵ_i| ≤√(2)/75σ_s √(ln n) for all small noise boxes i: [ max_i ∈ [2, c_s + 1] |ϵ_i| ≤√(2)/75σ_s √(ln n)] = [ |(0, σ_s^2)| ≤√(2)/75σ_s √(ln n)]^c_s = (2 Φ( √(2 ln n)/75) - 1)^c_s ≥^(<Ref>)(2 ( 1 - 1/√(2 π)75/√(2 ln n)exp( -1/2·2/5625ln n ) ) - 1)^c_s = (1 - 75/√(πln n) n^-1/5625)^n^1/5626 ≥^(Bernoulli's inequality) 1 - 75/√(πln n) n^1/5626 - 1/5625 ≥ 1 - 1/ln n When this event occurs, the reward from picking a small noise box i is at least y_i - √(2)/75√(ln n)σ_s, and therefore the overall reward of picking from small noise boxes is at least max_i=2,…,c_s+1 x_i - 2√(2)/75√(ln n)σ_s. Noting that the noise and reward are independent random variables, we have: R_(, ^*) ≥( 1 - 1/ln n) ·( [ max_i ∈ [2, c_s + 1] X_i ] - 2 √(2)/75√(ln n)·σ_s ) = ( 1 - 1/ln n)( [_c_s:c_s] - 2 √(2)/75√(ln n)·5/√(2)[_c_s:c_s]/√(ln n)) ≥( 1 - 1/ln n) ·1/75[_c_s:c_s]. The following lemma allows us to bound [_c_s:c_s] as a function of [_n:n]: For any MHR distribution supported on [0, ∞), for any n ≥ 4 and a ≥ 1, we have [_n^a:n^a] ≤ 4a ·[_n:n]. Continuing our derivation R__(, ^*) ≥^(<Ref>)( 1 - 1/ln n) 1/75·1/4 · 5626[_n:n]≥^(n≥ e^606)1/2 000 000[_n:n]. Our next (and final) task is to upper bound the expected reward of . The main lemma for this stage is as follows. For every MHR distribution , for all n ≥ n_0, for some constant n_0, and for all γ, it holds that R__γ(, ^*) ≤ 8 []. The proof structure is similar to <Ref>. We first prove (<Ref>) that conditioned on an event ^*, _γ's expected reward is upper bounded, while the contribution to the reward of other events is negligible, even if _γ performs as well as taking max_i X_i. Here, ^* is the event that X_i ≤α_n^1/10000^(_c_s:c_s) for all small noise boxes i, and X_j ≤α_n^1/10000^(_n - c_s:n - c_s) for all remaining boxes j. 
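Before the formal lemmas, a small simulation sketch of this three-tier construction may help build intuition for why no single γ works. All constants below (the number of small-noise boxes, σ_s, σ_b, the grid of γ values) are our own illustrative choices rather than the quantities used in the proof, and the “best low-noise box” benchmark is only a crude stand-in for the optimal policy (it ignores the observation noise on the low-noise boxes).

```python
import numpy as np

rng = np.random.default_rng(0)

def make_sigma(n, c_s, sigma_s, sigma_b):
    # One exact box, c_s small-noise boxes, and the remaining boxes with large noise.
    sigma = np.full(n, sigma_b)
    sigma[0] = 0.0
    sigma[1 : c_s + 1] = sigma_s
    return sigma

def reward_of_linear_policies(gammas, n, c_s, sigma, trials=20000):
    """Average reward of argmax_i (y_i - gamma * sigma_i) for each gamma, plus a crude benchmark."""
    totals = np.zeros(len(gammas))
    benchmark = 0.0
    for _ in range(trials):
        x = rng.exponential(1.0, n)                 # Exp(1) rewards: an MHR distribution
        y = x + rng.normal(0.0, 1.0, n) * sigma
        benchmark += x[: c_s + 1].max()             # best reward among the low-noise boxes
        for j, g in enumerate(gammas):
            totals[j] += x[np.argmax(y - g * sigma)]
    return totals / trials, benchmark / trials

if __name__ == "__main__":
    n, c_s = 400, 20
    sigma = make_sigma(n, c_s, sigma_s=2.0, sigma_b=40.0)
    gammas = [0.0, 1.0, 2.0, 3.5, 5.0, 10.0]        # 3.5 is roughly sqrt(2 ln n)
    rewards, benchmark = reward_of_linear_policies(gammas, n, c_s, sigma)
    for g, r in zip(gammas, rewards):
        print(f"gamma = {g:4.1f}   linear-policy reward ~ {r:.2f}")
    print(f"best low-noise box (crude stand-in for OPT) ~ {benchmark:.2f}")
```

The qualitative picture is the one the proof formalizes: small γ cannot suppress the very noisy boxes, while γ large enough to suppress them also penalizes the small-noise boxes out of contention, leaving only the exact box.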
For every MHR distribution , for all n ≥ n_0, for some constant n_0, and for all γ, the expected reward of a policy _γ conditioned on the event ^* is at most 7 []. To prove <Ref>, we first consider a slightly different family of policies. Let _c be the policy that chooses the box with the largest y_i - c σ_i, where c is a constant independent of and . We show that with high probability, all policies make poor choices. We can use this fact to get bounds on the performance of _γ (conditioned on certain events), since, fixing and , _γ is only as good as the best policy. We consider two cases on c: c > θ^* and c ≤θ^*, where θ^* = √(ln n/2). To make the presentation cleaner, we define the following events. Let * _1 be the event of max_i ∈ [2, c_s + 1]ϵ_i ≤θ^* σ_s/37. * _2 be the event of max_i ∈ [c_s + 2, n]ϵ_i - θ^* σ_b ≥σ_b. * _2' be the event of max_i ∈ [c_s + 2, n] Y_i - c σ_b ≥σ_b for all c < θ^*. * _3 be the event of max_i ∈ [c_s + 2, n]ϵ_i ≤ 12 α_n^1/10000^(_n-c_s:n-c_s)ln n. * _3' be the event of max_i ∈ [c_s + 2, n] Y_i ≤ 18 α_n^1/10000^(_n-c_s:n-c_s)ln n. Recall that ^* is the event that X_i ≤α_n^1/10000^(_c_s:c_s) for all small noise boxes i, and X_j ≤α_n^1/10000^(_n - c_s:n - c_s) for all remaining boxes j. We state some technical lemmata. <Ref> and <Ref> say that if various combinations of the above events occur, _c policies make bad choices. For all n ≥ n_0, for some constant n_0, [_1] ≥ 1 - 1/ln n. For all n ≥ n_0, for some constant n_0, [_2] ≥ 1 - 1/ln n. For all n ≥ n_0, for some constant n_0, for any large noise box i, [ Y_i ≤ 18 α_n^1/10000^(_n-c_s:n-c_s)ln n ] ≥ 1 - 1/n^2. If ^* ∩_1 occurs, for all c ≥θ^*, _c does not choose a small noise box. If ^* ∩_1 ∩_2' occurs, for all c < θ^*, _c chooses some large noise box. We can now prove <Ref>: We first explore the relationship between the events defined in <Ref>. First, note that _2 ⊆_2': if max_i ∈ [c_s + 2, n]ϵ_i - θ^* σ_b ≥σ_b, then for all c < θ^* we have max_i ∈ [c_s + 2, n] Y_i - c σ_b = max_i ∈ [c_s + 2, n] (X_i + ϵ_i) - c σ_b ≥max_i ∈ [c_s + 2, n]ϵ_i - θ^* σ_b ≥σ_b. Second, note that ^* ∩_3 ⊆_3', or ^* ∩_3 ⊆^* ∩_3': if max_i ∈ [c_s + 2, n] X_i ≤α_n^1/10000^(_n - c_s:n - c_s) and max_i ∈ [c_s + 2, n]ϵ_i ≤ 12 α_n^1/10000^(_n-c_s:n-c_s)ln n, then max_i ∈ [c_s + 2, n] Y_i = max_i ∈ [c_s + 2, n] X_i + ϵ_i ≤max_i ∈ [c_s + 2, n] X_i + max_i ∈ [c_s + 2, n]ϵ_i ≤α_n^1/10000^(_n - c_s:n - c_s) + 12 α_n^1/10000^(_n-c_s:n-c_s)ln n ≤ 18 α_n^1/10000^(_n-c_s:n-c_s). Ultimately, we have _1 ∩_2 ∩_3 ∩^* ⊆_1 ∩_2' ∩_3' ∩^*, or _1 ∩_2 ∩_3∩^* ⊇_1 ∩_2' ∩_3'∩^*. We now bound [max_i X_i |_1 ∩_2' ∩_3'∩^*] ·[_1 ∩_2' ∩_3'|^*], which is an upper bound on the contribution of outcomes in _1 ∩_2' ∩_3'∩^* to the overall expected reward of _γ. [max_i X_i |_1 ∩_2' ∩_3'∩^*] ·[_1 ∩_2' ∩_3'|^*] ≤[max_i X_i |_1 ∩_2 ∩_3∩^*] ·[_1 ∩_2 ∩_3|^*] By <Ref>, [_1] ≥ 1 - 1/ln n. By <Ref>, [_2] ≥ 1 - 1/ln n. Using <Ref>, [_3] ≥ (1 - 1/n^2)^n - c_s - 1≥ 1 - 1/n. Therefore, by the a union bound, [_1 ∩_2 ∩_3] ≥ 1 - 2/ln n - 1/n≥ 1 - 3/ln n. Observe that, _1, _2, and _3 are independent of the X_is, while ^* only dependent on X_is. Therefore, _1 ∩_2 ∩_3 and ^* are independent, and hence [_1 ∩_2 ∩_3 |^*] = [_1 ∩_2 ∩_3] ≥ 1 - 3/ln n, or [_1 ∩_2 ∩_3|^*] ≤3/ln n. Additionally, [max_i X_i |_1 ∩_2 ∩_3∩^*] = [max_i X_i |^*], as _1, _2, and _3 are events regarding ϵ_is and therefore independent of X_i. Finally, [max_i X_i |^*] ≤[max_i X_i] = [_n:n], as ^* is an event which upper bounds X_i. 
Putting everything together: [max_i X_i |_1 ∩_2' ∩_3'∩^*] ·[_1 ∩_2' ∩_3'|^*] ≤[max_i X_i |_1 ∩_2 ∩_3∩^*] ·[_1 ∩_2 ∩_3|^*] ≤3/ln n·[_n:n] ≤^(<Ref>)3/ln n· (ln n + 1) [] ≤ 4 []. Next, we will upper bound the contribution of outcomes in _1 ∩_2' ∩_3' ∩^* to the expected reward of _γ. Note that in such outcomes, for every c_1 ≥θ^* and c_2 < θ^*, _c_1 does not choose a small noise box (<Ref>) and _c_2 chooses some large noise box (<Ref>). Hence, in such outcomes, _γ does not choose a small noise box. Therefore, in such an outcome, the reward of _γ is at most the reward of an optimal policy that knows , but is conditioned to not pick a small noise box. When selecting box i, such a policy has expected reward [X_i | Y_i = y_i, ^*, _1, _2', _3']. We first observe that [X_i | Y_i = y_i, ^*, _1, _2', _3'] = [X_i | Y_i = y_i, _2', _3'] as _1 regards ϵ_j of all small noise boxes j, which are never picked in this policy. Secondly, [X_i | Y_i = y_i, _2', _3'] = [X_i | Y_i = y_i, ^*] as X_i is independent of Y_j, for j ≠ i, and _2' ∩_3' have less information about Y_i than { Y_i = y_i }. Let R_i(y_i) = [ X_i | Y_i = y_i, ^* ]. The reward of an optimal policy which knows and is conditioned to not pick a small noise box is then _ [ max_i ∈{1}∪ [n - c_b+1,n] R_i(y_i) |_1 ∩_2' ∩_3' ∩^* ] ≤^(R_1(y_1) ≥ 0)_[ R_1(y_1) + max_i ∈ [n - c_b+1,n] R_i(y_i) |_1 ∩_2' ∩_3' ∩^* ] =^(σ_1 = 0)[ X_1 |_1 ∩_2' ∩_3' ∩^* ] + _[max_i ∈ [n - c_b+1,n] R_i(y_i) |_1 ∩_2' ∩_3' ∩^* ] = [X_1 |^*] + _[max_i ∈ [n - c_b+1,n] R_i(y_i) |_1 ∩_2' ∩_3' ∩^* ], where the last inequality holds since _1, _2, and _3 are events regarding small noise and large noise boxes, and hence is independent of X_1. Consider any small noise box i. Let X_i = X_i | X_i ≤α_n^1/10000^(_n - c_s:n - c_s). Then, conditioned on _1 ∩_2' ∩_3' ∩^*, for any realization of , we note that R_i(y_i) = [X_i | Y_i = y_i, ^*] = [X_i |X_i + (0, σ_i^2) = y_i]. Furthermore, as y_i is a realization conditioned on _1 ∩_2' ∩_3' ^*, we have y_i ≤ 18 α_n^1/10000^(_n - c_s:n - c_s)ln n. Using <Ref> with V = β_n^2^(_n:n) and σ = σ_b = 6α_n^1/10000^(_n - c_s:n - c_s)√(ln n), we have [X_i |X_i + (0, σ_i^2) = y_i] ≤ 2 [X_i] ≤ 2 [X_i] = 2 []. As this is true for any small noise box i on any realization of , we then have _ [ max_i ∈{1}∪ [n - c_b+1,n] R_i(y_i) |_1 ∩_2' ∩_3' ∩^* ] ≤[X_1 |^*] + _[max_i ∈ [n - c_b+1,n] R_i(y_i) |_1 ∩_2' ∩_3' ∩^* ] ≤[X_1 | X_1 ≤α_n^1/10000^(_n - c_s:n - c_s)] + _[2 []] ≤[X_1] + 2 [] = 3[]. Overall, conditioned on ^*, if _1 ∩_2' ∩_3' occurs, 's expected reward is at most 3 [], while otherwise, the contribution to the expected reward is at most 4[]. Therefore, the reward of conditioned on ^* is at most 7 []. With <Ref> at hand, we can prove <Ref>. We decompose ^* as ^*_1 ∩^*_2, where ^*_1 and ^*_2 are two independent events defined as follows. ^*_1 is the event that X_i ≤α_n^1/10000^(_c_s:c_s) for all small noise boxes i ∈ [2, c_s+1]. ^*_2 is the event that X_j ≤α_n^1/10000^(_n - c_s:n - c_s) for all remaining boxes j. Observe that [^*_1] = [max_i ∈ [2, c_s + 1] X_i > α_n^1/10000^(_c_s:c_s)] = [_c_s:c_s > α_n^1/10000^(_c_s:c_s)] = 1/n^1/10000. Similarly, [^*_2] = 1/n^1/10000. Therefore, [^*] = [^*_1∪^*_2] ≤[^*_1] + [^*_2] = 2/n^1/10000. Next, we upper bound the contribution of ^* to the overall reward of _γ. Overloading notation, let R__γ(,^* |^*) be the expected reward of _γ when ^*occurs. 
Then, we have R__γ(,^* |^*) ·[^*] ≤[max_i X_i |^*_1∪^*_2] ·[^*_1∪^*_2] ≤([max_i ∈ [2, c_s + 1] X_i |^*_1∪^*_2] + [max_i ∈ [1, n] ∖ [2, c_s + 1] X_i |^*_1∪^*_2]) ·[^*_1∪^*_2] = ([max_i ∈ [2, c_s + 1] X_i |^*_1] + [max_i ∈ [1, n] ∖ [2, c_s + 1] X_i |^*_2]) ·[^*_1∪^*_2] = ([_c_s:c_s|_c_s:c_s > α_n^1/10000^(_c_s:c_s)] + [_n - c_s:n - c_s|_n - c_s:n - c_s > α_n^1/10000^(_n - c_s:n - c_s)]) ·[^*_1∪^*_2] ≤ 2 ([_c_s:c_s|_c_s:c_s > α_n^1/10000^(_c_s:c_s)]/n^1/10000 + [_n - c_s:n - c_s|_n - c_s:n - c_s > α_n^1/10000^(_n - c_s:n - c_s)]/n^1/10000). Note that [_c_s:c_s|_c_s:c_s > α_n^1/10000^(_c_s:c_s)]/n^1/10000 = [_c_s:c_s|_c_s:c_s > α_n^1/10000^(_c_s:c_s)] ·[_c_s:c_s > α_n^1/10000^(_c_s:c_s)], and similarly for the second term. In the appendix, we show, stated as <Ref>, that for every MHR distribution , n ≥ 1 and m ≥ 2: [_n:n|_n:n > α_m^(_n:n)] ·[_n:n > α_m^(_n:n)] ≤15 (ln m + ln n + 1)[]/2m. Applied here (noting that _a:a is MHR for all a ≥ 1; see <Ref>), we have: [_c_s:c_s|_c_s:c_s > α_n^1/10000^(_c_s:c_s)]/n^1/10000≤15 (ln(n^1/10000) + ln(c_s) + 1)/2 n^1/10000[] ≤[]/4. Similarly, [_n - c_s:n - c_s|_n - c_s:n - c_s > α_n^1/10000^(_n - c_s:n - c_s)]/n^1/10000≤[]/4, for an overall bound of R__γ(,^* |^*) ·[^*] ≤ 2 ([]/4 + []/4) =[]. Putting everything together, we have R__γ(,^* ) = R__γ(,^* |^*) ·[^*] + R__γ(,^* |^*) ·[^*] ≤^(<Ref>) 7 [] + [] = 8 []. Combining <Ref> and <Ref> gives us the result. § A THRESHOLD ALGORITHM FOR SELECTING THE BEST BOX In this section, we propose a new policy, , and give sufficient conditions under which 's expected reward is at most a constant factor of the expected reward of a prophet who knows x_1, …, x_n. We will describe two versions of this policy. The first version works for all distributions; the second one is a slight modification that works for MHR distributions, under a weaker condition on the instance. Without loss of generality, we will assume that boxes are ordered in increasing σ_i, that is, σ_1 ≤σ_2 ≤…≤σ_n. * : Pick α∈ [0, 1] uniformly at random. Return _1 ≤ i ≤α n y_i. * : Pick α∈ [0, 1] uniformly at random. Return _1 ≤ i ≤ n^α y_i. In <Ref> we present our guarantee for arbitrary distributions. Intuitively, if there is a universal constant c, e.g. c=0.01, such that a c fraction of boxes have bounded noise (and specifically, σ_i at most [_cn:cn]/5 √(2 ln n)), then our policy gives a constant approximation to the reward of a prophet. For all c ∈ (0,1], for all distributions , all n ≥ 4, and all ∈n,c, we have R_(, ) ≥c^2/20·[_n:n] Consider = (σ_1, σ_2, …, σ_n) ∈n,c where, without loss of generality, we have σ_1 ≤σ_2 ≤…≤σ_n. As ∈n,c, we have σ_cn≤[_cn:cn]/5 √(2 ln n). Consider the event that |ϵ_i| ≤σ_i √(2 ln n) for all 1 ≤ i ≤ cn. For any such box i, we have [|ϵ_i| ≤σ_i √(2 ln n)] = [ |(0, σ_i^2)| ≤σ_i √(2 ln n)] = 2 Φ(√(2 ln n)) - 1 ≥^(<Ref>) 2 ( 1 - 1/√(2 π)1/√(2 ln n)exp( -1/2· 2 ln n ) ) - 1 = 1 - 1/n √(πln n), and therefore [|ϵ_i| ≤σ_i √(2 ln n), ∀ i ∈ [1, cn]] ≥(1 - 1/n √(πln n))^cn≥^(Bernoulli's inequality) 1 - c/√(πln n)≥1/2, where the last inequality holds for all n ≥ 4 ≥ e^4c^2/π. Observe that, since σ_i ≤[_cn:cn]/5√(2ln n) for all i ∈ [1, cn], we can conclude that [max_i ∈ [1, cn] |ϵ_i| ≤1/5·[_cn:cn]] ≥1/2. Conditioned on this event we have x_i - 1/5·[_cn:cn] ≤ y_i ≤ x_i + 1/5·[_cn:cn] for all i ∈ [1, cn]; therefore, for all k ≤ cn, we have max_i ∈ [1, k] y_i ≥max_i ∈ [1, k] x_i - 2/5·[_cn:cn] We analyze the performance of under this event. Recall that draws α∈ [0, 1] uniformly at random in its sampling step, and then outputs _i ∈ [1, α n] y_i. 
There are two cases for α: * If α > c, we will lower bound the expected reward of by 0. * If α≤ c, is going to pick the box with the largest y_i among the first α n boxes. By our observation, 's reward in this case is at least max_i ∈ [1, α n] x_i - 2/5·[_cn:cn], and therefore the expected reward of in this case is at least E[_α n : α n] - 2/5·[_cn:cn] ≥^(<Ref>)α/c[_cn:cn] - 2/5·[_cn:cn]. Therefore, conditioned on the event that max_i ∈ [1, cn] |ϵ_i| ≤1/5·[_cn:cn], 's expected reward is lower bounded by ∫_α = 0^cα/c[_cn:cn] - 2/5·[_cn:cn] d α = c/10·[_cn:cn]. When this event does not occur, we lower bound 's expected reward by 0. Combining everything together, 's expected reward is R_(, ) ≥1/2·c/10·[_cn:cn] ≥^(<Ref>)c^2/20·[_n:n]. In <Ref> we present an analog to <Ref> for MHR distributions. Here, our condition for getting a constant approximation is a lot weaker. Intuitively, if there is a universal constant c, such that n^c boxes have bounded noise (and specifically, σ_i at most [_cn:cn]/18 √(2c ln n)), then our policy gives a constant approximation to the reward of a prophet. For all c ∈ (0,1], for all MHR distributions , all n ≥ e^4/c π, and all ∈n,c, we have R_(, ) ≥c^2/576·[_n:n]. The proof of <Ref> follows a similar structure to the proof of <Ref> and is deferred to <Ref>. alpha § A TECHNICAL LEMMA The following technical lemma will be useful throughout this appendix. For a random variable Y = X + ϵ, where ϵ∼(0, σ^2), it holds that [X | Y = y] is monotone non-decreasing in y. Let A(y) = ∫_0^∞ x · f(x) · f_(y - x) dx and B(y) = ∫_0^∞ f(x) · f_(y - x) dx, then [X | Y = y] = A(y)/B(y). We first compute the derivative of f_(y - x): df_(y - x) /dy = d/dy(1/σ√(2 π)exp( -1/2·(y - x/σ)^2 ) ) = 1/σ√(2 π)exp(-1/2·(y - x/σ)^2) ·x - y/σ^2 = f_(y - x) ·x - y/σ^2. Let C(y) = ∫_0^∞ x^2 · f(x) · f_(y - x) dx. The derivative for A(y) is dA(y)/dy = d/dy(∫_0^∞ x · f(x) · f_(y - x) dx) = ∫_0^∞ x · f(x) · f_(y - x) ·x - y/σ^2 dx = 1/σ^2(C(y) - y · A(y)). The derivative for B(y) is dB(y)/dy = d/dy(∫_0^∞ f(x) · f_(y - x) dx) = ∫_0^∞ f(x) · f_(y - x) ·x - y/σ^2 dx = 1/σ^2(A(y) - y · B(y)) Finally, the derivative for [X | Y=y] is d/dy[X | Y = y] = d/dyA(y)/B(y) = dA(y)/dy· B(y) - dB(y)/dy· A(y)/B(y)^2 = ( 1/σ^2(C(y) - y A(y)) ) · B(y) - ( 1/σ^2(A(y) - y B(y)) ) · A(y)/B(y)^2 = B(y)C(y) - y A(y) B(y) - A(y)^2 + yA(y)B(y)/(σ B(y))^2 = B(y)C(y) - A(y)^2/(σ B(y))^2. Since (x · f(x) · f_(y - x))^2 = (f(x) · f_(y - x)) ·(x^2 · f(x) · f_(y - x)), the Cauchy-Schwarz inequality implies that B(y)C(y) ≥ A(y)^2. Therefore d/dy[X | Y = y] = B(y)C(y) - A(y)^2/(σ B(y))^2≥ 0. § PROOFS MISSING FROM SECTION <REF> It is sufficient to prove that [_ℓ:ℓ]/ℓ≥[_ℓ+1:ℓ+1]/ℓ+1 for all integers ℓ≥ 1. For all t ∈ [0, 1], we have ∑_i = 0^ℓ-1 t^i ≥ℓ t^ℓ (1-t)∑_i = 0^ℓ-1 t^i ≥ℓ (1-t) t^ℓ 1 - t^ℓ ≥ℓ(t^ℓ - t^ℓ+1) ℓ+1-(ℓ+1)t^ℓ ≥ℓ - ℓ t^ℓ+1 1 - t^ℓ/ℓ ≥1 - t^ℓ+1/ℓ+1 Substituting t = F(x) and taking integrals on both sides, we get ∫_0^∞ 1 - F(x)^ℓ/ℓ dx ≥∫_0^∞ 1 - F(x)^ℓ + 1/n + 1 dx, which proves our statement. Lemmas about MHR distributions We will heavily use the fact that order statistics of MHR distributions are also MHR (Theorem 5.5 on page 39 of <cit.>): For any MHR[<cit.> use the term IFR (increasing failure rate).] random variable X and any integers 1 ≤ k ≤ n, X_k:n is also MHR. Define ζ_p^() = inf{x | F(x) ≥ p} as the p-th quantile of . For the lower bound, we first observe that [_n:n≤α^()_n] = [≤α^()_n]^n = (1 - 1/n)^n, where with n ≥ 4 we get 81/256≤ (1 - 1/n)^n ≤1/e. Therefore, ζ_81/256^(_n:n)≤α_n ≤ζ_1/e^(_n:n). 
We use the following result from <cit.> (Theorem 4.6 on page 30): Assume X is MHR[1] with mean μ_1. If p ≤ 1 - 1/e, then -ln(1 - p) ·μ_1 ≤ζ_p^X≤ -ln(1 - p)/p·μ_1. From <Ref>, we know that _n:n is also MHR. Since 81/256≤ 1/e ≤ 1 - 1/e, we can invoke <Ref> on ζ^(_n:n)_81/256 and ζ^(_n:n)_1/e. For the lower bound we have α^()_n ≥ζ_81/256^(_n:n)≥ -ln(1 - 81/256) ·[_n:n] ≥1/3·[_n:n]. For the upper bound we have α^()_n ≤ζ_1/e^(_n:n)≤ -ln(1 - 1/e)/1/e·[_n:n] ≤5/4·[_n:n]. § PROOFS MISSING FROM SECTION <REF> [] = √(2/π) is a standard property to the half-normal distribution (and can also be confirmed by computing the mean of a folded-normal with parameter μ = 0 <cit.>). For the MHR property, it suffices to show that f_(x)/1 - F_(x) is an increasing function. Note that its derivative is f'_(x)(1 - F_(x)) + f_^2(x)/(1 - F_(x))^2, so we need the numerator to be non-negative. As f_(x) = √(2/π)exp(-x^2/2) = 2 ϕ(x) and F_(x) = (x/√(2)) = 2 Φ(x) - 1, the numerator is f'_(x)(1 - F_(x)) + f_^2(x) = -2xϕ(x) (2 - 2Φ(x)) + 4 ϕ^2(x) = 4ϕ(x) (ϕ(x) - x(1 - Φ(x))), where the last quantity is non-negative as ϕ(x) ≥ 0 and by <Ref>, proving our claim. Finally, since is MHR, we use results from <Ref> to bound [_n:n]. Observe that F_(√(ln n)) = 2 Φ(√(ln n) - 1) ≤^(<Ref>) 2(1 - 1/√(2 π)√(ln n)/1 + ln nexp(-1/2·ln n)) - 1 = 1 - √(%s/%s)2π·√(ln n)/n^1/2 (1 + ln n) ≤ 1 - 1/n, where the last inequality holds for all n ≥ 8. Therefore, α_n^()≥√(ln n), which implies [_n:n] ≥^(<Ref>)4/5√(ln n). Similarly, F_(√(2 ln n)) = 2 Φ(√(2 ln n) - 1) ≥^(<Ref>) 2(1 - 1/√(2 π)1/√(2 ln n)exp(-1/2· 2 ln n)) - 1 = 1 - √(%s/%s)2π·1/n√(2 ln n) ≥ 1 - 1/n. Therefore, α_n^()≤√(2 ln n), which means [_n:n] ≤^(<Ref>) 3√(2)√(ln n). We have [X_i | Y_i = y_i] = ∫_0^∞ x · f_(x) · f_(0, σ_i^2)(y_i - x) dx/∫_0^∞ f_(x) · f_(0, σ_i^2)(y_i - x) dx. We first transform the numerator. ∫_0^∞ f_(x) · f_(0, σ_i^2)(y_i - x) dx = ∫_0^∞√(2)/√(π)exp(-x^2/2) ·1/σ_i √(2 π)exp(-(y_i - x)^2/2σ_i^2) dx = 1/σ_i π∫_0^∞exp(-1/2(x^2 + (y_i-x/σ_i)^2)) dx Let's focus on x^2 + (y_i-x/σ_i)^2: x^2 + (y_i-x/σ_i)^2 = (xσ_i)^2 + y_i^2 - 2 y_i x + x^2/σ_i^2 = (x √(σ_i^2 + 1))^2 - 2 y_i x + y_i^2/σ_i^2 =^(let λ = √(σ_i^2 + 1))(λ x )^2 - 2 y_i/λ·λ x + (y_i/λ)^2 + y_i^2(1 - 1/λ^2)/σ_i^2 =^(let ρ = y_i^2(1 - 1/λ^2)/σ_i^2)(λ x - y/λ/σ_i)^2 + ρ Observe that λ and ρ only depends on σ_i and y_i. Therefore, coming back to the previous integral: ∫_0^∞ f_(x) · f_(0, σ_i^2)(y_i - x) dx = 1/σ_i π∫_0^∞exp(-1/2((λ x - y/λ/σ_i)^2 + ρ)) dx = e^-ρ / 2λ√(2)/√(π)∫_0^∞1/√(2 π)·λσ_iexp(-1/2(x - y_i/λ^2/λσ_i)^2) dx = e^-ρ / 2λ√(2)/√(π)∫_0^∞ f_(y_i/λ^2, (σ_i/λ)^2)(x) dx Calculated similarly, we have ∫_0^∞ x · f_(x) · f_(0, σ_i^2)(y_i - x) dx = e^-ρ / 2λ√(2)/√(π)∫_0^∞ x · f_(y_i/λ^2, (σ_i/λ)^2)(x) dx Therefore [X_i | Y_i = y_i] = ∫_0^∞ x · f_(x) · f_(0, σ_i^2)(y_i - x) dx/∫_0^∞ f_(x) · f_(0, σ_i^2)(y_i - x) dx = e^-ρ / 2λ√(2)/√(π)∫_0^∞ x · f_(y_i/λ^2, (σ_i/λ)^2)(x) dx/e^-ρ / 2λ√(2)/√(π)∫_0^∞ f_(y_i/λ^2, (σ_i/λ)^2)(x) dx = ∫_0^∞ x · f_(y_i/λ^2, (σ_i/λ)^2)(x) dx/∫_0^∞ f_(y_i/λ^2, (σ_i/λ)^2)(x) dx = [t | t ∼(y_i/λ^2, (σ_i/λ)^2) ∩ t ≥ 0]. This last quantity is the mean of the normal distribution (y_i/σ_i^2 + 1, (σ_i/√(σ_i^2 + 1))^2) truncated to [0, ∞) (as λ = √(σ_i^2 + 1)). We can conclude that [X_i | Y_i = y_i] = y_i/σ_i^2 + 1 + ϕ(-y_i/σ_i √(σ_i^2 + 1))/1 - Φ(-y_i/σ_i √(σ_i^2 + 1))·σ_i/√(σ_i^2 + 1). We follow the same proof structure as in <Ref>. Consider = |(0, 1^2)|. Consider = (σ_1, σ_2, …, σ_n) ∈n, c where, without loss of generality, we have σ_1 ≤σ_2 ≤…≤σ_n. This means that σ_cn > [_cn:cn] ·√(ln(n))/ln(cn). 
Note that the expected reward of the optimal policy is at most the expected reward of the optimal policy that picks 2 boxes u and v where u ∈ [1, cn - 1] and v ∈ [cn, n], and then enjoys the rewards of both boxes. The expected reward from choosing box u is at most [max_i ∈ [1, cn - 1] x_i] ≤[_cn:cn]. The expected reward from choosing box v is at most the expected reward of _ conditioned on it choosing boxes from cn to n, which in turn is at most max_i ∈ [cn, n][X_i | Y_i = y_i]. Therefore, the expected reward from box v is upper bounded by: _[max_i ∈ [cn, n][X_i | Y_i = y_i]] ≤^(<Ref>)_[max_i ∈ [cn, n] U_σ_i(y_i)] = [max_i ∈ [cn, n] U_σ_i(X_i + (0, σ_i^2))] ≤^(U_σ_i(y) is monotone)[max_i ∈ [cn, n] U_σ_i(X_i + |(0, σ_i^2)|)] = [max_i ∈ [cn, n]√(%s/%s)2π + (X_i + |(0, σ_i^2)|)/σ_i^2 + 1] ≤[√(%s/%s)2π + max_i ∈ [cn, n]X_i/σ^2_i + max_i ∈ [cn, n]|(0, σ_i^2)|/σ_i^2] ≤√(%s/%s)2π + [ |(0, 1)|_n:n]/σ^2_cn + [ max_i ∈ [cn, n]|(0, 1/σ_i^2)| ] ≤^(<Ref>)√(%s/%s)2π + 3√(2)·√(ln n)/σ_cn^2 + 1/σ_cn· 3√(2)·√(ln n) ≤[] + 6√(2)·√(ln n)/σ_cn ≤^( σ_cn > [_cn:cn] ·√(ln(n))/ln(cn))[] + 6√(2)·ln(cn)/[_cn:cn] ≤^(<Ref>)[] + 6√(2)·25/16 ([_cn:cn])^2/[_cn:cn] ≤^(c n ≥ 1) 15 [_cn:cn]. Combining, we get R__(, ) ≤ 16 [_cn:cn]. Noting that, by <Ref>, [_cn:cn] ≤ 3√(2)√(ln(cn)) = 3 √(π)√(ln(cn))[], we have R__(, ) ≤ 16 · 3 √(π)√(ln(cn)) E[] ≤ 86 √(ln(cn)) E[], as desired. § PROOFS MISSING FROM SECTION <REF> §.§ Proofs missing from Section <ref> Formally, this event is max_i ∈ [n - c_b + 1, n]ϵ_i > β_n^2^(_n:n). We have [max_i ∈ [n - c_b + 1, n]ϵ_i > β_n^2^(_n:n)] = 1 - [max_i ∈ [n - c_b + 1, n]ϵ_i ≤β_n^2^(_n:n)] = 1 - [(0, σ_b^2) ≤β_n^2^(_n:n)]^c_b = 1 - [(0, σ_b^2) ≤σ_b/6 √(ln n)]^c_b ≥ 1 - [(0, σ_b^2) ≤σ_b/6]^6 ln n Using the fact that [ (μ, σ^2) ≤ x ] = Φ(x-μ/σ), where Φ(x) = 1/√(2π)∫_-∞^x e^-t^2/2 dt is the CDF of the standard normal distribution, we have that [max_i ∈ [n - c_b + 1, n]ϵ_i > β_n^2^(_n:n)] ≥ 1 - Φ( 1/6) ^6 ln n. Since Φ( 1/6) < 0.6 we have [max_i ∈ [n - c_b + 1, n]ϵ_i > β_n^2^(_n:n)] ≥ 1 - ((0.6)^2)^3 ln n≥ 1 - ( 1/e)^3 ln n≥ 1 - 1/n^3. Note that as ϵ_i ∼(0, σ_b^2) and σ_b = 6 β^(_n:n)_n^2√(ln n) we have [ϵ_i ≤ 12 β_n^2^(_n:n)ln n] = [ϵ_i ≤ 2 √(ln n)·σ_b)] = Φ(2 √(ln n)) ≥^(<Ref>) 1 - 1/√(2 π)1/2 √(ln n)·exp(-2 ln n) = 1 - 1/2 √(2 π)1/n^2 √(ln n) ≥ 1 - 1/n^2. Slightly overloading notation, let f(x) be the PDF of Z. Let A(y) = ∫_0^V x · f(x) · f_(y - x) dx and B(y) = ∫_0^V f(x) · f_(y - x) dx, then [Z | Z + (0,σ^2) = y] = A(y)/B(y). From <Ref> we know that [Z | Z + (0,σ^2) = y] is monotone non-decreasing in y. Let r = σ/V. Consider y^* = σ^2/2V = σ·r/2. As σ > 2V or r > 2, we then have y^* > σ > V, which implies that f_(y^* - V) ≥ f_(y^* - x) for all x ∈ [0, V]. We then have the following bound on A(y^*): A(y^*) = ∫_0^V x · f(x) · f_(y^* - x) dx ≤∫_0^V x · f(x) · f_(y^* - V) dx = [Z] · f_(y^* - V) = [Z] ·1/σ√(2 π)exp(- 1/2( y^* - V/σ)^2 ). Recalling that y^* = σ·r/2 and that V = σ/r, we have: A(y^*) = 1/σ√(2 π)[Z] ·exp(-1/2(r/2 - 1/r)^2 ) = 1/σ√(2 π)[Z] ·exp(-r^2/8 + 1/2 - 1/2 r^2) ≤1/σ√(2 π)[Z] ·√(e)/exp(r^2/8). Meanwhile, for B(y^*), we have B(y^*) = ∫_0^V f(x) · f_(y^* - x) dx ≥^(y^* ≥ V)∫_0^V f(x) · f_(y^*) dx = f_(y^*) ·∫_0^V f(x) dx = f_(y^*) = 1/σ√(2 π)exp(- 1/2(y^*/σ)^2 ) = 1/σ√(2 π)·1/exp(r^2/8). Therefore A(y^*) ≤ 2 [Z] · B(y^*), and thus [Z | Z + (0,σ^2) = y^*] = A(y^*)/B(y^*) is at most 2 [Z]. Since [Z | Z + (0,σ^2)=y] is monotone non-decreasing in y (<Ref>), we can conclude that [Z | Z + (0,σ^2)=y] ≤ 2 [Z] for all y ≤ y^* = σ^2/2V. 
§.§ Proofs missing from Section <ref> Since n ≥ 4 we have that n^a ≥ 4 for all a ≥ 1. Therefore, [_n^a:n^a] ≤^(<Ref>) 3α^()_n^a ≤^(<Ref>) 3a ·α^()_n ≤^(<Ref>)15a/4·[_n:n] < 4a ·[_n:n]. Observe that ϵ_i are values drawn from (0, σ_s^2). We then have [ max_i ∈ [2, c_s + 1]ϵ_i ≤θ^* σ_s/37] = [ (0, σ_s^2) ≤θ^* σ_s/37]^c_s = Φ( θ^*/37)^n^1/5626 ≥^(<Ref>)(1 - 1/√(2 π)37√(2)/√(ln n)exp( -1/2·1/2738ln n ) )^n^1/5626 ≥^(Bernoulli's inequality) 1 -37/√(π)√(ln n) n^1/5626-1/5476 ≥ 1 - 1/ln n. Observe that ϵ_i are values drawn from (0, σ_b^2). We then have [ max_i ∈ [c_s + 2, n]ϵ_i - θ^* σ_b ≥σ_b ] = 1 - [ max_i ∈ [c_s + 2, n]ϵ_i ≤θ^* σ_b + σ_b ] = 1 - [ (0,σ_b^2) ≤θ^* σ_b + σ_b ]^n - c_s - 1 = 1 - (Φ(θ^* + 1))^n - c_s - 1 ≥ 1 - ( Φ(√(2)θ^*) )^n/2 ≥^(<Ref>) 1 - ( 1 - 1/√(2 π)√(2)θ^*/2 (θ^*)^2 + 1exp(-(θ^*)^2) )^n/2 ≥^(Bernoulli's inequality) 1 - 1/1 + n/21/√(2 π)√(ln n)/ln n + 1exp( - ln n/2) = 1 - 1/1 + √(n)/21/√(2 π)√(ln n)/ln n + 1 ≥ 1 - 1/ln n. The proof is similar to that of <Ref>. Note that as ϵ_i ∼(0, σ_b^2) and σ_b = 6 β^(_n:n)_n^2√(ln n) we have [ϵ_i ≤ 12 β_n^2^(_n:n)ln n] = [ϵ_i ≤ 2 √(ln n)·σ_b)] = Φ(2 √(ln n)) ≥^(<Ref>) 1 - 1/√(2 π)1/2 √(ln n)·exp(-2 ln n) = 1 - 1/2 √(2 π)1/n^2 √(ln n) ≥ 1 - 1/n^2. Consider any c ≥θ^*. Observe that Y_1 - c σ^*_1 = X_1 ≥ 0. We show that conditioned on _1 ∩^*, we have max_i ∈ [2, c_s + 1] Y_i ≤θ^* σ_s. We first note that from <Ref>, we have [_c_s:c_s < 2 [_c_s:c_s]] ≥ 1 - 1/c_s^3/5 = 1 - 1/n^1/5626 · 3/5 > 1 - 1/n^1/10000. Therefore, by <Ref>, 2 [_c_s:c_s] ≥α_n^1/10000^(_c_s:c_s). Then, conditioned on both _1 and ^*, we have that for any small noise box i: Y_i = X_i + ϵ_i <^(<Ref>)α_n^1/10000^(_c_s:c_s) + θ^* σ_s/37≤ 2 [_c_s:c_s] + θ^* σ_s/37 = 36 θ^* σ_s/37 + θ^* σ_s/37 = θ^* σ_s. Therefore, conditioned on _1 and ^*, we have max_i ∈ [2, c_s + 1] Y_i - c σ^*_i ≤θ^* σ_s - c σ_s < 0, i.e. Y_1 - c σ^*_1 > Y_i - max_i ∈ [2, c_s + 1] Y_i - c σ^*_i and hence _c does not choose any small box i. Consider any c ≥θ^*. Observe that conditioned on _2', max_i ∈ [c_s + 2, n] Y_i - c σ^*_i ≥^(Dfn <ref>)σ_b. From <Ref>, we have [_c_s:c_s < 2 [_c_s:c_s]] ≥ 1 - 1/c_s^3/5 = 1 - 1/n^1/5626 · 3/5 > 1 - 1/n^1/10000. Therefore, 2 [_c_s:c_s] ≥α_n^1/10000^(_c_s:c_s). Then, conditioned on ^* ∩_1, we have that for all i ∈ [2, c_s + 1]: Y_i = X_i + ϵ_i < α_n^1/10000^(_c_s:c_s) + θ^* σ_s/37≤ 2 [_c_s:c_s] + θ^* σ_s/37 = 36 θ^* σ_s/37 + θ^* σ_s/37 = θ^* σ_s. Therefore, max_i ∈ [1, c_s + 1] Y_i - c σ^*_i = max{Y_1, max_i ∈ [2, c_s + 1] Y_i - c σ_s} ≤max{X_1, max_i ∈ [2, c_s + 1] Y_i } ≤^(Dfn <ref>)max{α_n^1/10000^(_n - c_s:n - c_s), θ^* σ_s} < σ_b, where the last inequality follows from the facts that θ^* σ_s < σ_b (see <Ref> in the appendix) and that α_n^1/10000^(_n - c_s:n - c_s) < σ_b = 6 α_n^1/10000^(_n - c_s:n - c_s)√(ln n). Therefore, max_i ∈ [c_s + 2, n] Y_i - c σ^*_i > max_i ∈ [1, c_s + 1] Y_i - c σ^*_i, and so _c chooses a large noise box. For any MHR distribution supported on [0, ∞) and for all n ≥ 1, we have [_n:n] ≤ (ln n + 1) ·[]. The lemma is an immediate consequence of the following result from <cit.> (Corollary 4.10 on page 33): If X_i, i=1,…,n, are MHR[1] random variables with mean μ_i and cdf F_i(.), and G_i(x) = 1 - exp(-x/μ_i), then: ∫_0^∞ 1-∏_i=1^n F_i(x) dx ≤∫_0^∞ 1-∏_i=1^n G_i(x) dx. Applying this result for the case of F(x) = F_i(x) for all i, we have that [_n:n] = ∫_0^∞ 1-F^n(x) dx ≤∫_0^∞ 1-(1 - e^-x/[])^n dx = [] ∑_i=1^n1/i Using the fact that ∑_i=1^n1/i≤ln(n) + 1, we get the lemma. 
For any n ≥ 1 and m ≥ 2, we have [_n:n|_n:n > α_m^(_n:n)] ·[_n:n > α_m^(_n:n)] ≤15 (ln m + ln n + 1)[]/2m. We use the following result from <cit.> (Lemma 36): For any MHR distribution and any m ≥ 2, we have [|≥α_m^()] ·[≥α_m^()] ≤6 α^()_m/m. Since order statistics of MHR distributions are also MHR (<Ref>), _n:n and (_n:n)_m:m = _nm:nm are MHR. Then, by <Ref> we have that α_m^(_n:n)≤5/4·[_nm:nm]. Towards proving the lemma, we then get [_n:n|_n:n > α_m^(_n:n)] ·[_n:n > α_m^(_n:n)] ≤^(<Ref>)6 α_m^(_n:n)/m ≤^(<Ref>) 6 ·5/4·[_nm:nm]/m ≤^(<Ref>)15(ln(nm) + 1)/2m·[] = 15(ln(n) + ln(m) + 1)/2m·[]. σ_b > θ^* σ_s. From <Ref>, we know that _a:a is MHR for any a ≥ 1. Then, by <Ref> we have that α_n^1/10000^(_n-c_s:n-c_s)≥1/3·[( _n-c_s:n-c_s)_n^1/10000:n^1/10000] Towards proving <Ref>: σ_b = 6 α_n^1/10000^(_n-c_s:n-c_s)√(ln n) ≥^(<Ref>) 6 ·1/3[_(n - c_s) · n^1/10000:(n - c_s) · n^1/10000] √(ln n) > 5/2[_(n - c_s) · n^1/10000:(n - c_s) · n^1/10000] >^(c_s = n^1/5626)5/2[_c_s:c_s] = θ^* σ_s. § PROOFS MISSING FROM SECTION <REF> Consider = (σ_1, σ_2, …, σ_n) ∈n where, without loss of generality, we have σ_1 ≤σ_2 ≤…≤σ_n. As ∈n, there exists a constant c = c_(,n)∈ (0, 1] such that σ_n^c≤[_n^c:n^c]/18√(2 c ln n). Consider the event that |ϵ_i| ≤σ_i √(2c ln n) for all 1 ≤ i ≤ n^c. Following the same analysis as the proof of <Ref>, for any box i ∈ [1, n^c], we have [|ϵ_i| ≤σ_i √(2 c ln n)] = [|ϵ_i| ≤σ_i √(2 ln n^c)] = [ |(0, σ_i^2)| ≤σ_i √(2 ln n^c)] = 2 Φ(√(2 ln n^c)) - 1 ≥^(<Ref>) 2 ( 1 - 1/√(2 π)1/√(2 ln n^c)exp( -1/2· 2 ln n^c ) ) - 1 = 1 - 1/n^c √(c πln n), , and therefore [|ϵ_i| ≤σ_i √(2c ln n), ∀ i ∈ [1, n^c]] ≥(1 - 1/n^c √(c πln n))^n^c≥^(Bernoulli's inequality) 1 - n^c/n^c √(c πln n)≥1/2, where the last inequality holds for all n ≥ e^4/c π. Since σ_i ≤[_n^c:n^c]/18√(2c ln n) for all i ∈ [1, n^c], we can conclude that [max_i ∈ [1, n^c] |ϵ_i| ≤1/18·[_n^c:n^c]] ≥1/2. Conditioned on this event, for all i ∈ [1, n^c], we have x_i - 1/18·[_n^c:n^c] ≤ y_i ≤ x_i + 1/18·[_n^c:n^c]; therefore, for all k ≤ n^c, we have max_i ∈ [1, k] y_i ≥max_i ∈ [1, k] x_i - 1/9·[_n^c:n^c]. We analyze the performance of conditioned on this event. Recall that draws α∼ U[0, 1], and then outputs _i ∈ [1, n^α] y_i. We consider two cases for α: * If α > c, we will lower bound the expected reward of by 0. * If α≤ c, is going to pick the box with the largest y_i among the first n^α boxes. By our observation, 's reward in this case is at least max_i ∈ [1, n^α] x_i - 1/9·[_n^c:n^c], and therefore the expected reward of in this case is at least [_n^α : n^α] - 1/9·[_n^c:n^c]. By <Ref>, since c/α≤ 1, we have [_n^c:n^c] ≤4c/α·[_n^α : n^α]. Continuing our derivation, the expected reward of is at least [_n^α : n^α] - ·[_n^c:n^c] ≥α/4c·[_n^c:n^c] - 1/9·[_n^c:n^c]. Therefore, conditioned on the event that max_i ∈ [1, n^c |ϵ_i| ≤1/18·[_n^c:n^c], 's expected reward is lower bounded by ∫_α = 0^cα/4c·[_n^c:n^c] - 1/9·[_n^c:n^c] d α = 1/72·[_n^c:n^c]. In outcomes outside this event, we can lower bound 's expected reward by 0. Combining everything, 's expected reward is R_(, ) ≥1/2·1/72·[_n^c:n^c] ≥^(<Ref>)c^2/576·[_n:n].
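To complement the analysis, here is a small simulation sketch of the randomized threshold policy from Section 5, in both the αn-prefix and the n^α-prefix variants (boxes sorted by increasing σ_i). The reward distribution, noise levels, and trial counts are our own illustrative choices; the sketch only demonstrates that the policy's reward is a constant fraction of the prophet's in a small-noise environment, and it is not a verification of the stated constants.

```python
import numpy as np

rng = np.random.default_rng(0)

def threshold_policy(y, mhr_variant=False):
    """One draw of the randomized threshold policy (boxes assumed sorted by increasing sigma_i)."""
    n = len(y)
    alpha = rng.random()
    k = max(1, int(np.ceil(n**alpha if mhr_variant else alpha * n)))
    return int(np.argmax(y[:k]))

def simulate(n=500, frac_small=0.1, sigma_small=0.05, sigma_large=25.0,
             trials=20000, mhr_variant=False):
    sigma = np.full(n, sigma_large)
    sigma[: int(frac_small * n)] = sigma_small      # a c-fraction of low-noise boxes, listed first
    rew_policy, rew_prophet = 0.0, 0.0
    for _ in range(trials):
        x = np.abs(rng.normal(size=n))              # half-normal rewards, as in Section 3
        y = x + rng.normal(0.0, 1.0, n) * sigma
        rew_policy += x[threshold_policy(y, mhr_variant)]
        rew_prophet += x.max()
    return rew_policy / trials, rew_prophet / trials

if __name__ == "__main__":
    for variant, name in ((False, "prefix of size alpha*n"), (True, "prefix of size n^alpha")):
        pol, prophet = simulate(mhr_variant=variant)
        print(f"{name}: policy ~ {pol:.3f}   prophet ~ {prophet:.3f}   ratio ~ {pol / prophet:.2f}")
```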
Stability in Quasineutral Plasmas with Thermalized Electrons

Megan Griffin-Pickering, University College London, Department of Mathematics, 25 Gordon Street, London WC1H 0AY, United Kingdom; and Heilbronn Institute for Mathematical Research.
Mikaela Iacobelli, ETH Zürich, Department of Mathematics, Rämistrasse 101, 8092 Zürich, Switzerland.

August 12, 2023

In this paper, we establish the validity of the quasineutral limit for the ionic Vlasov-Poisson system for rough initial data that are exponentially small perturbations of analytic data. Exponential smallness is also required in the electron case, and it is essentially sharp due to the presence of instabilities if only polynomial smallness is assumed. The nonlinear Poisson coupling leads to several new challenges compared to the electron case. To overcome them, we enhance the existing theory of the growth of characteristics for Vlasov systems with nonlinear couplings, and we combine stability estimates in kinetic-Wasserstein distances with improved regularity bounds on the elliptic coupling.

§ GENERAL OVERVIEW

§.§ Vlasov-Poisson type systems.

The Vlasov-Poisson system is the classical kinetic model describing dilute, totally ionized, unmagnetized plasma. In its most common form, the unknown f is the distribution function of the electrons moving in a self-induced electrostatic field, while the ions are assumed to act as a fixed background. Here, instead, we consider solutions of the Vlasov-Poisson system for ions, also known as the Vlasov-Poisson system with massless or thermalized electrons (VPME):

(VPME)_ :=
_t f_+v·∇_x f_+ E_·∇_v f_=0,
E_=-∇ U_,
^2Δ U_=e^U_- ∫_^d f_ v=e^U_- ρ_,
f_|_t=0=f_0,≥0, ∫_^d ×^d f_0, x v=1.

Here, at each time t ≥0 the phase-space is assumed to be ^d×^d, and f = f(t,x,v) is the distribution function of ions with position x and velocity v. The parameter ε stands for the Debye length of the plasma, whose role in the plasma's stability will be clarified later. In the VPME system, electrons are thermalized and therefore distributed according to a Maxwell-Boltzmann law e^U_. Indeed, as the mass ratio between an electron and a proton is of order 10^-3, the disparity between the relative masses of an electron and an ion justifies the approximation that the electrons are in thermal equilibrium. The interested reader is directed to the survey <cit.> for a more detailed overview of the background to the VPME model, including a formal derivation in the massless electrons limit and a discussion of the progress on rigorous results in this direction, such as <cit.>; see also the more recent work <cit.>. In the physics literature, the VPME system (<ref>) has appeared in applications to, for example, the formation of ion-acoustic shocks <cit.> and the expansion of ion plasma into vacuum <cit.>. See Gurevich-Pitaevsky <cit.> for an introduction to the model (<ref>) from the point of view of astrophysics.
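To illustrate the nonlinear elliptic coupling in (VPME), the following sketch solves the one-dimensional analogue of the Poisson-Boltzmann equation ε²U'' = e^U − ρ on the torus by a Newton iteration, and checks that the solution approaches the formal quasineutral relation U = log ρ as ε → 0. The discretization, grid size, test density, and stopping rule are our own illustrative choices and are not taken from the paper.

```python
import numpy as np

def solve_poisson_boltzmann_1d(rho, eps, n_iter=50, tol=1e-10):
    """Newton iteration for eps^2 * U'' = exp(U) - rho on the 1-D torus (uniform grid).

    A minimal sketch of the nonlinear elliptic coupling in the VPME system.
    """
    m = len(rho)
    h = 1.0 / m
    # periodic second-difference matrix
    L = (-2.0 * np.eye(m) + np.eye(m, k=1) + np.eye(m, k=-1)) / h**2
    L[0, -1] = L[-1, 0] = 1.0 / h**2
    U = np.zeros(m)
    for _ in range(n_iter):
        F = eps**2 * (L @ U) - np.exp(U) + rho
        if np.max(np.abs(F)) < tol:
            break
        J = eps**2 * L - np.diag(np.exp(U))   # Jacobian of the residual
        U -= np.linalg.solve(J, F)
    return U

if __name__ == "__main__":
    m = 256
    x = np.linspace(0.0, 1.0, m, endpoint=False)
    rho = 1.0 + 0.5 * np.cos(2 * np.pi * x)      # a smooth ion density with unit mass
    for eps in (1.0, 0.1, 0.01):
        U = solve_poisson_boltzmann_1d(rho, eps)
        # as eps -> 0, e^U must balance rho, so U should approach log(rho)
        print(f"eps={eps:5.2f}  max|U - log(rho)| = {np.max(np.abs(U - np.log(rho))):.3e}")
```

The monotone exponential nonlinearity makes the Newton step well posed (the Jacobian is negative definite), and the shrinking discrepancy between U and log ρ as ε decreases is the discrete counterpart of the quasineutral limit discussed below.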
The Vlasov-Poisson system for ions has drawn interest from the mathematical community comparatively more recently than the more well-known Vlasov-Poisson system for electrons <cit.> The critical difference is that in the ion model (<ref>), the electrostatic potential U satisfies the nonlinear Poisson-Boltzmann equation ^2 Δ U = e^U - ρ, rather than the linear Poisson equation ^2 Δ U = 1 - ρ. The exponential nonlinearity introduces several mathematical difficulties in the ion case that are not present in the electron case. The effect of this can be seen, for example, in the respective development of the well-posedness theory for the electron and ion models. For the electron model, the existence of weak solutions was proved by Arsenev <cit.> in the 70s, while the global well-posedness of classical solutions was obtained in dimension d=2 in the 70s by Ukai and Okabe <cit.> and in ^3 around the 90s by Pfaffelmoser <cit.> and Lions-Perthame <cit.> using two different methods. Since then, these results have been extended to the three-dimensional torus ^3<cit.>, and many works have refined the assumptions and techniques, such as <cit.>—this list is non-exhaustive, see <cit.> for a more detailed discussion about the well-posedness of Vlasov type systems. In contrast, the solution theory for the ion model was developed only more recently: weak global solutions in ^3 were obtained in the 90s by Bouchut <cit.>, while global well-posedness theory for classical solutions in two and three dimensions was the subject of a series of recent works by the authors <cit.> (x ∈^3, ^3) and Cesbron and the second author <cit.> (x in bounded domains). The first step in understanding the dynamics of the ion model has been to study the equation with linearized Poisson coupling, which led to the study of the (partially) linearized VPME system where the Vlasov equation is coupled with - ^2Δ U_ + U_ = ρ_ - 1. This system is closely related to the screened Vlasov-Poisson system, up to a difference in the scaling with respect to the Debye length: in the screened Vlasov-Poisson system, the potential U satisfies - Δ U_ + ^-2 U_ = ρ_ - 1. For more information about linearized VPME and screened VP see also <cit.>. The Quasineutral Limit and Kinetic Euler Equations. Since plasmas are highly conductive, any developed charges are readily screened; thus, they can be treated as quasineutral. Conversely, quasineutrality is no longer verified at small spatial and time scales. The Debye length λ_D is the distance over which quasineutrality may break down, and it varies according to the physical characteristics of the plasma. The Debye length is usually considerably short compared to the typical observation scale. Therefore, we can define the parameter := λ_D/L and consider the limit as tends to zero. This procedure is known as quasineutral limit. For the ion model, the limit is formally identified by setting = 0 in system (<ref>). This results in the following equation, known as the kinetic isothermal Euler equation (KIsE): (KIsE) := ∂_t f + v ·∇_x f - ∇_x U ·∇_v f = 0, U = logρ_f, f |_t=0 = f_0, ∫_^d ×^d f_0 (x,v) x v = 1. Kinetic Euler systems can be thought of as a type of Vlasov equation with `very singular' potential. In the seminal paper <cit.>, Brenier considered the kinetic incompressible Euler system (KInE) as a kinetic formulation of the incompressible Euler equations. 
In this case, the force E=-∇_x U is implicitly defined through the incompressibility constraint ρ=1, and may be considered a Lagrange multiplier associated with this constraint. In particular, (KInE) also arises in the quasineutral limit of the electron Vlasov-Poisson system. Another example of a kinetic Euler-type system is the Vlasov-Dirac-Benney system (VDB) where the acceleration in the Liouville equation is U = ρ_f. Bardos named this system VDB <cit.> due to a connection with the Benney equations for water waves through a formulation by Zakharov <cit.>. The VDB equation demonstrates perhaps most clearly the interpretation of kinetic Euler systems as Vlasov equations with very singular potential since the potential U may formally be written as U = δ∗ρ_f<cit.>. The VDB system is formally obtained in the quasineutral limit from the linearized VPME system. In general, in performing the formal quasineutral limit, we pass from a transport system where the force field is given by a (possibly nonlinear) elliptic equation to a transport-type system coupled to a singular force field. Thus, it is clear that the Cauchy theory for the limit systems and the quasineutral limit are intimately related. For an exhaustive discussion about these models and their Cauchy theory, see also the survey <cit.> and the research papers <cit.>. On the other hand, the loss of derivatives in the limiting system is reflected in the presence of spectral instabilities in the linearized system and consequent ill-posedness of the complete system around any smooth linearly unstable profile <cit.>. Before concluding our digression on the limiting systems, let us mention that the VDB system also appears as the semiclassical limit of an infinite dimensional system of coupled nonlinear Schrödinger equations <cit.>. For a discussion about semiclassical limits involving the KIsE model, see <cit.>. See also <cit.> for combined semiclassical and quasineutral limits. Previous results on the quasineutral limit. The mathematical study of the quasineutral limit can be traced back to the first pioneering works of Brenier and Grenier <cit.> on the electron model, which used an approach based on defect measures and gave a mathematically rigorous description of the `plasma oscillations' which appear in the electron case. Grenier <cit.> then showed further that the limit holds in the sense of strong convergence, in one dimension, for smooth `single bump' type profiles. This structural assumption is critical in understanding the quasineutral limit. As observed by Grenier <cit.>, an instability mechanism inherent to the physics, known as the two-stream instability, presents an obstruction to the quasineutral limit. It is well-known in plasma physics <cit.> that velocity distributions with multiple sharp peaks, such as a beam injected into a bulk of lower energy plasma, are unstable profiles. Solutions evolving from initial data that are perturbations of this 'double bump' form exhibit phase-space vortices. This behaviour is observed in electron and ion <cit.> models. Mathematically this corresponds to the linearization of the Vlasov equation around this profile having an exponentially growing mode. 
The connection between these growing modes and the structure of the distribution was investigated for the electron model by Penrose <cit.>, who gave a stability criterion that shows in particular that profiles with a single maximum have no exponentially growing modes, while profiles with sufficiently sharp minima, such as certain double-bump profiles, do have exponentially growing modes. It is then reasonable to expect that these modes are an obstacle to the quasineutral limit due to the connection with a long-time limit. Indeed, Han-Kwan and Hauray <cit.> used these unstable modes to construct counterexamples to the quasineutral limit in arbitrarily high Sobolev regularity. Other positive results were obtained for the electron model in the `cold electron' case where the velocity distribution is a Dirac mass (a kind of `extreme single bump') by Brenier <cit.> and Masmoudi <cit.>. Han-Kwan <cit.> obtained the limit for VPME (<ref>) in the corresponding `cold ions' setting. Later, Han-Kwan and Rousset <cit.> proved the quasineutral limit from the linearized VPME system to Vlasov-Dirac-Benney in Sobolev regularity, under a Penrose-type structural condition. The quasineutral limit has also been studied in the context of magnetized plasmas <cit.>. For general data without structural conditions, a major result was obtained by Grenier <cit.>, showing that the quasineutral limit holds for initial data with uniformly analytic spatial regularity. More recently, building on Grenier's result <cit.>, a new line of research was begun looking at regimes of rough data. The quasineutral limit with rough initial data. The investigation of the quasineutral limit for rough data originated from the work of Han-Kwan and the second author in the one-dimensional case <cit.>. The underlying idea is to consider data f_0, as perturbations around a distribution g_0, that satisfies the quasineutral condition, such as uniformly analytic distributions where Grenier's result <cit.> can be applied. The perturbed data f_0, takes the form: f_0, = g_0, + h_0,, where h_0, represents an L^∞ perturbation. The magnitude of the perturbation is measured using Monge-Kantorovich distances, with the assumption that there exists a function η: ℝ_+ →ℝ_+ satisfying: W_p(f_0,, g_0,) ≤η(), typically with p chosen as 1 or 2. The objective is then to identify admissible functions η such that the assumption (<ref>) implies the validity of the quasineutral limit for solutions with initial data f_0,. In <cit.>, Han-Kwan and Hauray demonstrated that the quasineutral limit fails if η(ϵ)∼ϵ^N for some N>0. In other words, the quasineutral limit is not valid under polynomially small perturbations of analytic data (even more, the result is false for data f_0, such that f_0,-g_0, is polynomially small in an arbitrarily strong Sobolev space, see also Remark <ref> below). In contrast to this negative outcome, Han-Kwan and the second author <cit.> established the validity of the quasineutral limit for the one-dimensional electron Vlasov-Poisson system under the condition (which is essentially optimal, as discussed above): W_1(f_0,, g_0,) ≤exp(- C ^-1), and for the one-dimensional ionic Vlasov-Poisson system (<ref>) under the condition: W_1(f_0,, g_0,) ≤ [expexp (C ^-2) ]^-1. In the higher-dimensional setting, the electron model was examined in <cit.>, where it was shown that the quasineutral limit holds under the condition: W_2(f_0,, g_0,) ≤ [expexp (C ^-ζ) ]^-1, where ζ > 0 is an exponent that depends on the dimension. 
More recently, in <cit.>, the second author improved upon the previous result and achieved the validity of the quasineutral limit under the (almost optimal) condition: W_2(f_0,ε, g_0,ε) ≤ exp (-C ε^-ζ). Finally, concerning the ionic case, in <cit.>, we were able to establish the quasineutral limit in dimensions d=2,3 under restrictive assumptions on the smallness of the perturbation, expressed as: W_2(f_0,ε, g_0,ε) ≤ [expexpexpexp(C ε^-2) ]^-1. The objective of this paper is to obtain an essentially optimal result for the ionic model as well. For more details on the results discussed above, we refer to the survey <cit.>.

§.§ Main Result

In order to state our main result, we first recall the definition of the following analytic norm: for δ > 1, let g _B_δ : = ∑_k ∈Z^d |ĝ (k)| δ^|k| , where ĝ (k) denotes the Fourier coefficient of g with index k ∈Z^d. In the following, W_1 denotes the first-order Wasserstein distance (see Definition <ref> below).

Let d=2,3. Let { g_0,ε}_ε≤ 1 and { f_0,ε}_ε≤ 1 be non-negative functions satisfying the following hypotheses:
* (H1) Uniform spatial analyticity of { g_0,ε}_ε≤ 1: There exist k_0 > d, δ > 1, C_0 >0 and sufficiently small η > 0 such that sup_ε≤ 1 sup_v ∈^d (1 + |v|^k_0) g_0,ε(·, v) _B_δ ≤ C_0 , sup_ε≤ 1 ∫_^d g_0,ε(·, v) v - 1 _B_δ ≤ η .
* (H2) Uniform moment bounds: For all ε≤ 1, f_0,ε (1 + |v|)^k_0 _L^1 ∩ L^∞ ≤ C_0 .
* (H3) Convergence of the data: g_0,ε converges to a limit g_0 as ε tends to zero, in the sense of distributions.
Then there exists a time T_∗ > 0 and a constant C>0 such that, if
* (H4) W_1(f_0,ε , g_0,ε) ≤ exp(-C ε^-ζ), where ζ = 11 if d=2; ζ = 62 if d=3 and k_0 ≥ 13/4; and ζ = 14 + 12/(k_0 - 3) if d=3 and 3 < k_0 < 13/4,
then lim_ε→ 0 sup_t ≤ T_∗ W_1(f_ε(t), g(t)) = 0, where f_ε is the unique global bounded density solution of the (VPME)_ε system (<ref>) with initial datum f_0,ε, and g is a solution of KIsE (<ref>) on the time interval [0,T_∗] with initial datum g_0.

The main improvements achieved in Theorem <ref> compared to the most recent results on this problem can be summarized as follows:
* The most significant enhancement concerns assumption (H4) on the size of the perturbation. We are able to replace the previous requirement of a quadruple-exponential smallness condition (<ref>) with an almost optimal condition involving a single exponential (see Remark <ref> below).
* In the previous work <cit.>, we required that f_0,ε have uniformly bounded energy E_ε[f_ε] (defined in Equation <ref> below) and L^∞ norm, as well as compact support in velocity, with a bound on the rate of growth as ε tends to zero: for a certain function R(ε), f_0,ε(x,v) = 0 for |v| > R(ε) . In this work, these requirements have been replaced with assumption (H2), which is a uniform-in-ε version of the minimal assumptions currently known for the well-posedness of the VPME system <cit.>. Notably, the data no longer need to have compact support. Furthermore, assumption (H2) implies that the energy E_ε[f_ε] is uniformly bounded, eliminating the need for a separate assumption. It is also possible to formulate a statement involving a condition on the support similar to (<ref>), while retaining the single-exponential structure in (H4), although we omit it here.

Theorem <ref> brings the theory for the ion quasineutral limit into line with the electron case, where the best available result also requires an exponential condition on the smallness of the perturbation <cit.>. The single-exponential condition <ref> is `almost optimal' since no polynomial rate ε^N is admissible for any N > 0.
This is due to the existence of exponentially growing modes for the (full) linearization of the system around a kinetically unstable profile, see <cit.> for further discussion. As a consequence of the proof of Theorem <ref>, we are in fact able to improve the assumptions for the well-posedness result of <cit.> for the VPME system (<ref>). <cit.> states that the system (<ref>) has a unique global solution with spatial density bounded in L^∞(^d), locally uniformly in time, for any > 0 and any initial datum satisfying (1 + |v|^k_0) f_0,∈ L^∞(^d ×^d) and ( 1 + |v|^m_0) f_0,∈ L^1(^d ×^d) for k_0 > d and m_0 > d(d-1). An additional corollary of the techniques of Section <ref> is that we can relax the second assumption to require only m_0 > d as in hypotheses <ref>-<ref>. Thus, in particular, for f_0, satisfying our assumptions, the unique global bounded density solutions f_ referred to in the statement of Theorem <ref> exist. The next figure summarizes all the results discussed before. The long time behaviour of plasmas and the quasineutral limit. The quasineutral limit can be thought of as a form of long-time limit: as explained in <cit.> (see also <cit.>), by suitable scalings one sees a connection between the quasineutral limit and the study of the long-time behaviour. A particularly well-known phenomenon in this context is the Landau damping, see for example <cit.>. The paper is structured as follows: in Section <ref> we collect a series of preliminary results. In Section <ref>, we establish novel regularity estimates for the electric field and its stability concerning the spatial density. A notable improvement compared to prior findings is the derivation of constants that exhibit polynomial degeneracy in . Section <ref> combines the outcomes from Section <ref> with the employment of kinetic-Wasserstein distances, recently introduced by the second author. This combination leads to precise stability estimates for solutions of VPME with bounded density. To apply this result effectively in our context, Section <ref> presents new L^∞ bounds on the spatial density ρ_f for a solution f of the VPME system (<ref>). Finally, in Section <ref>, we provide the proof of our main theorem, Theorem <ref>. § PRELIMINARIES §.§ Representation of the torus ^d Throughout this work, ^d denotes the flat torus in d dimensions. For the purposes of defining integrals over the torus, we identify points in ^d with points in the unit box [ - 1/2, 1/2 )^d. This is equipped with the distance | · |_^d defined by | x |_^d : = inf_α∈^d |x + α| . In some arguments, it will necessary to keep track of the number of times a path z(t) : I →^d wraps around the torus. In this context, we will consider a lifted version of z(t) thought of as a path on ^d ×^d. In such cases, in order to evaluate quantities of the form f(z(t)), we identify functions on _+ × [ - 1/2, 1/2 )^d ×^d with their spatially periodic extensions on _+ ×^d ×^d in the natural way: f(t,x,v) = f(t,x+α,v), where α∈Z^d, x + α∈ [ - 1/2, 1/2 )^d. §.§ Wasserstein Distances We recall the definition of the Wasserstein distances W_p for measures on the phase space. Given two probability measures μ,ν on ^d ×^d, for any p ∈ [1, ∞), the Wasserstein distance of order p, denoted W_p, is defined by W_p^p(μ, ν) = inf_π∈Π(μ,ν)∫_(^d ×^d)^2(|x-y|^p_^d+|v-w|^p) π(x,v,y,w), where π∈P((^d ×^d)^2) belongs to the set of couplingsΠ(μ,ν), namely, for any Borel subsets A ⊂^d ×^d, π(A × (^d ×^d)) = μ(A) π((^d ×^d) × A) = ν(A). 
We note that W_p(μ, ν)<∞ for μ, ν∈P_p, where P_p denotes the set of probability measures γ for which ∫_^d ×^d |v|^p γ(x,v) < ∞. Our proof of Theorem <ref> relies on a new stability estimate for solutions of the VPME system (<ref>) in W_2 (Proposition <ref>). To prove this estimate, we make use of a new technique proposed in <cit.>, in which we consider a quantity related to the Wasserstein distance with a nonlinearly defined kinetic structure (see Section <ref>). In order to obtain our final result in W_1, we will need a couple of simple estimates between different powers of the Wasserstein distance. We consider only the cases p=1,2, since this is what is relevant for us. Let μ,ν be two probability densities on ^d ×^d such that ∫_^d ×^d |v|^k μ(x,v) ≤ C_k, ∫_^d ×^d |v|^k ν(x,v)≤ C_k for some C_k<∞ and k>2. Then W_1(μ,ν)≤√(2)W_2(μ,ν), W_2(μ,ν) ≤ 3(1+2C_k)^1/k-1W_1(μ,ν)^k-2/k-1. The first inequality is classical and follows from Hölder's inequality. Indeed, for any π∈Π(μ,ν), ∫_(^d ×^d)^2(|x-y|_^d+|v-w|)π(x,y,v,w) ≤(∫_(^d ×^d)^2(|x-y|_^d+|v-w|)^2π(x,y,v,w))^1/2 ≤(2∫_(^d ×^d)^2(|x-y|^2_^d+|v-w|^2)π(x,y,v,w))^1/2. Taking the infimum over all couplings, this proves the first inequality. For the second inequality we note that, again by Hölder's inequality, ∫_(^d ×^d)^2(|x-y|^2_^d+|v-w|^2)π(x,y,v,w) ≤∫_(^d ×^d)^2(|x-y|_^d+|v-w|)^2π(x,y,v,w) ≤(∫_(^d ×^d)^2(|x-y|_^d+|v-w|)π(x,y,v,w))^k-2/k-1· ·(∫_(^d ×^d)^2(|x-y|_^d+|v-w|)^kπ(x,y,v,w))^1/k-1. We now observe that, by the marginal constraint on π and the elementary inequality (a+b+c)^k ≤ 3^k-1(a^k+b^k+c^k) for a,b,c ≥ 0, we get ∫_(^d ×^d)^2(|x-y|_^d+|v-w|)^kπ(x,y,v,w) ≤∫_(^d ×^d)^2(1+|v|+|w|)^kπ(x,y,v,w) ≤ 3^k-1∫_(^d ×^d)^2(1+|v|^k+|w|^k)π(x,y,v,w) =3^k-1(1+ ∫_^d ×^d |v|^k μ(x,v)+∫_^d ×^d |w|^k ν(x,w))≤ 3^k-1(1+2C_k). Hence, ∫_(^d ×^d)^2(|x-y|^2_^d+|v-w|^2)π(x,y,v,w) ≤ 3(1+2C_k)^1/k-1(∫_(^d ×^d)^2(|x-y|_^d+|v-w|)π(x,y,v,w))^k-2/k-1. Taking the infimum over π, this proves the second inequality. §.§ Density Estimates Using Moments We recall the following well-known `interpolation' estimate (see for example <cit.>), which states that bounds on the velocity moments of f imply L^p bounds on the spatial density ρ. Let d ≥ 1. Let 0 ≤ f ∈ L^∞(^d ×^d) satisfy, for some k > 1, M_k : = ∫_^d ×^d |v|^k f(x,v) x v < + ∞ . Then the spatial density ρ(x) : = ∫_^d f(x,v) v belongs to L^1 + k/d with the estimate ρ_1 + k/d≤ C_k.d f ^k/d+k_L^∞ M_k^d/d+k . §.§ Energy Functional The energy of the VPME system (<ref>) is given by the functional E_[f_] := 1/2∫_^d ×^d |v|^2 f_ x v + ^2/2∫_^d |∇ U_|^2 x + ∫ U_ e^U_ x . This quantity is conserved by all sufficiently regular solutions of (<ref>), and in particular by the strong solutions constructed in <cit.> that we will use in the current work. Under hypothesis <ref>, the energy of the initial data f_0, is bounded uniformly in —we sketch the argument below in Lemma <ref>. Therefore, the energy of solutions to the VPME system (<ref>) starting from these data is bounded both uniformly in and uniformly for all time. Moreover, under hypothesis <ref>, the functions g_0, also satisfying <ref>, and thus the energy of solutions with initial data g_0, is bounded uniformly in both and time. Let f_0, satisfy <ref>. Then there exists a constant C_1 > 0 depending on C_0 only such that E_ [f_0,] ≤ C_1. Hypothesis <ref> implies that (1 + |v|^2) f_0,_L^1 + f_0,_L^∞≤ C_0, so that the kinetic energy term is uniformly bounded. Moreover, by Lemma <ref>, for some constant C_0 ' > 0 depending only on C_0, ρ_0,_L^(d+2)/d≤ C_0 ' . 
Since ^2 Δ_x U_ = e^U_ - ρ_0,, the remaining terms satisfy ^2/2∫_^d |∇ U_|^2 x + ∫_^d U_ e^U_ x = ∫_^d U_ρ_0, x ≤∫_^d (U_)_+ ρ_0, x . By Hölder's inequality, ∫_^d (U_)_+ ρ_0, x ≤ρ_0,_L^(d+2)/d (U_)_+ _L^(d+2)/2 ≤ρ_0,_L^(d+2)/d (U_)_+^d/2_L^(d+2)/d^2/d. Finally, note that there exists a constant c_d > 0 such that y^d/2≤ c_d e^y for all y ≥ 0. Hence, by Lemma <ref> below, ∫_^d (U_)_+ ρ_0, x ≤ c_d^2/dρ_0,_L^(d+2)/d e^U__L^(d+2)/d^2/d ≤ c_d^2/dρ_0,_L^(d+2)/d^(d+2)/d . Thus E_ [f_0,] ≤ C_1, where C_1 depends on C_0 only. We recall the following consequence. Since E[f_(t)] is uniformly bounded for all t and , f_(t) has uniformly bounded second velocity moment M_2(t). Since the transport equation also conserves the L^∞ norm of f_, the following uniform L^p-type bound on ρ[f_] can be deduced. Let f_ be a solution of the VPME system (<ref>) with initial datum f_0, that satisfies <ref> (f_ is then the global unique solution with bounded density). Then there exists a constant C_1 > 0 depending on C_0 only such that ρ[f_(t)] _L^d+2/d≤ C_1 for all t ≥ 0 . § ESTIMATES FOR THE ELECTRIC FIELD The main result of this section is the following proposition, concerning the regularity of the electric field and its stability with respect to the spatial density. The key improvement compared to previous results of this kind <cit.> is that we obtain constants that degenerate polynomially in . In fact, the dependence on is identical to that seen in the Vlasov-Poisson system for electrons <cit.>. Let d ≥ 1. (i) Let h ∈ L^∞ (^d). Then there exists a unique U ∈ W^1,2(^d) satisfying ^2Δ U =e^U-h . Moreover, ∇ U is a log-Lipschitz function satisfying |∇ U(x) - ∇ U(y)| ≤ C h _L^∞^-2 |x-y| ( 1+ (log|x-y|)_+ ) (ii) If, for i=1,2, 0 ≤ h_i ∈ L^∞, with U_i ∈ W^1,2 satisfying, ^2Δ U_i =e^U_i-h_i, and ∫_^d h_1 x = ∫_^d h_2 x, then ∇ U_1 - ∇ U_2 _L^2≤^-2max_i h_i _L^∞^1/2 W_2(h_1, h_2) The existence and uniqueness of U for h ∈ L^∞(^d) is obtained as in <cit.>. For the log-Lipschitz regularity, we first apply Lemma <ref> below so as to obtain the estimate e^U _L^∞≤ h _L^∞ . Thus Δ U _L^∞≤ 2 ^-2 h _L^∞ . The log-Lipschitz bound (<ref>) then follows from regularity estimates for solutions of the Poisson equation—see for example <cit.>. Below, in Lemma <ref>, we prove that ∇ U_1 - ∇ U_2 _L^2≤^-2∇Δ^-1 (h_1 - h_2) _L^2 We then control the H^-1 norm by applying the following estimate, due to Loeper <cit.>: ∇Δ^-1 (h_1 - h_2) _L^2≤max_i h_i _L^∞^1/2 W_2(h_1, h_2) . This concludes the proof of estimate (<ref>). Let d≥1 . Let h∈ L^∞(^d) and let U ∈ W^1,2(^d) be a solution of ^2Δ U =e^U-h . Then, for all p ∈ [1,+∞], e^U _L^p≤ h _L^p . We first consider the case p<∞. The proof follows from the following a priori estimate: formally testing the equation with the function e^(p-1)U and integrating by parts gives 0 ≤^2 (p-1) ∫_^d e^(p-1)U |∇ U|^2 x = ∫_^d e^(p-1)U h x - ∫_^d e^p U x . By rearranging terms and applying Hölder's inequality, we obtain e^U _L^p^p ≤ e^U _L^p^p-1 h _L^p, and thus e^U _L^p≤ h _L^p for all p<∞ . This argument can be made rigorous using a truncation procedure. Letting p→∞ in the bound above, we conclude the validity of our lemma also in the case p=∞. For i=1,2, let h_i ∈ L^∞(^d) and let U_i satisfy ^2Δ U_i =e^U_i-h_i . Then ∇ U_1 - ∇ U_2 _L^2≤^-2∇Δ^-1 (h_1 - h_2) _L^2 Subtracting the equations for U_1 and U_2 gives ^2Δ (U_1 - U_2) =(e^U_1-e^U_2)-(h_1 - h_2) . After testing with (U_1 - U_2) and integrating by parts, we obtain: ^2 ∫_^d |∇ U_1 - ∇ U_2|^2 x = ∫_^d (h_1 - h_2) (U_1 - U_2) x - ∫_^d (e^U_1-e^U_2) (U_1 - U_2) x . 
Since (e^x - e^y)(x-y) ≥ 0 for any x,y ∈, we have ^2 ∫_^d |∇ U_1 - ∇ U_2|^2 x ≤∫_^d (h_1 - h_2) (U_1 - U_2) x . By Parseval-Plancherel, ^2 ∫_^d |∇ U_1 - ∇ U_2|^2 x ≤∇ (U_1 - U_2) _L^2∇Δ^-1 (h_1 - h_2)_L^2 . By applying Young's inequality with a small parameter, we obtain ∇ U_1 - ∇ U_2 _L^2≤^-2∇Δ^-1 (h_1 - h_2) _L^2 as required. Finally, we present estimates demonstrating a gain of integrability for e^U compared to ρ. Although the constants that are not uniform in , as it is natural in our situation, it is crucial for our applications that the bounds degenerate only polynomially with respect to , and not exponentially as it was the case in all previous results in this setting. The key idea is to exploit the equation satisfied by e^U. Since Δ(e^U) = e^U Δ U + e^U |∇ U|^2, it follows that -^2 Δ(e^U) + ^2 |∇ U|^2 e^U = e^U(ρ - e^U) . Let d=3. Assume that ρ∈ L^q for some q > 3/2. Then, for all r ≥ q, there exists an exponent α = α(q,r), defined by α(q,r) : = q^-1 - r^-1/2/3 - q^-1 . and constants C_q,r, c_q,r>0 such that e^U_L^r≤ C_q,r (^-2)^αρ_L^q^α + 1 + c_q,r^2, If r=q the result follows directly from Lemma <ref>. For r > q, test equation (<ref>) with the function e^(r-2) U; after integrating by parts, we obtain the following equality: 4/r-1^2 ∫_^3 |∇ e^r-1/2U|^2 x + ∫_^3 e^rU x = ∫_^3 e^(r-1)Uρ x . By the Sobolev-Gagliardo-Nirenberg inequality on the torus <cit.>, there exists a constant C>0, independent of r, such that e^r-1/2U - ⟨ e^r-1/2U⟩_L^6≤ C ∇ e^r-1/2U_L^2 , where ⟨·⟩ denotes the average value: ⟨ e^r-1/2U⟩ : = 1/|^3|∫_^3 e^r-1/2U x . We therefore expect that estimate (<ref>) will imply that e^r-1/2U∈ L^2 r '∩ L^6, if we can control the right hand side. The right hand side of (<ref>) is ∫_^3 e^(r-1)Uρ x = ∫_^3( e^(r-1)U/2)^2 ρ x. Since r > q > 3/2, then 3 > q' > r'. Thus there exists θ∈ (0,1) such that 1/q' = 1 - 1/q =1-θ/3 + θ/r ' , i.e. θ = 2/3 - 1/q/2/3- 1/r . We now write ∫_^3( e^(r-1)U/2)^2 ρ x = ∫_^3( e^(r-1)U/2)^2θ( e^(r-1)U/2)^2(1-θ)ρ x . To handle the average, we observe that, since 1-θ < 1, for all a,b ≥0 we have the estimate (a + b)^2(1-θ)≤ 2^1-θ(a^2(1-θ) + b^2(1-θ)). By writing e^(r-1)U/2 = (e^(r-1)U/2 - ⟨ e^(r-1)U/2⟩ ) + ⟨ e^(r-1)U/2⟩ and applying (<ref>), we find that ∫_^3( e^(r-1)U/2)^2θ( e^(r-1)U/2)^2(1-θ)ρ x ≤ 2^1- θ∫_^3( e^(r-1)U/2)^2θ( e^(r-1)U/2 - ⟨ e^(r-1)U/2⟩)^2(1-θ)ρ x_=: I_1 + 2^1-θ⟨ e^(r-1)U/2⟩^2(1-θ)∫_^3( e^(r-1)U/2)^2θρ x_=: I_2 . To estimate I_1, we interpolate between L^2r' and L^6: by the choice of θ in (<ref>), I_1 = ∫_^3( e^(r-1)U/2)^2θ( e^(r-1)U/2 - ⟨ e^(r-1)U/2⟩)^2(1-θ)ρ x ≤ e^(r-1)U/2_L^2 r'^2 θ e^(r-1)U/2 - ⟨ e^(r-1)U/2⟩_L^6^2(1-θ)ρ_L^q . Then, by (<ref>), for some constant C_q,r>0 independent of (that may change from line to line), I_1 ≤ C_q,r e^rU_L^1^θ/r'∇ (e^(r-1)U/2 ) _L^2^2(1-θ)ρ_L^q . Hence 2^1-θ I_1 ≤ e^rU_L^1^θ/r' ( 4 ^2/(1-θ)(r-1)∇ (e^(r-1)U/2 ) _L^2^2 )^1-θ C_q,r^-2(1-θ)ρ_L^q . By Young's inequality with exponents r'/θ, 1/1-θ and r/θ, 2^1-θ I_1 ≤θ/r' e^rU_L^1 + 4 ^2/r-1∇ (e^(r-1)U/2 ) _L^2^2 + C_q,r ( ^-2(1-θ)ρ_L^q )^r/θ. For I_2, we use only that e^(r-1)U/2∈ L^2r ': by Hölder's inequality, ⟨ e^(r-1)U/2⟩ = 1/|^3| e^(r-1)U/2_L^1≤ C_r e^(r-1)U/2_L^2r'≤ C_r e^rU_L^1^1/2r' . Furthermore, by the choice of θ in (<ref>), ∫_^3( e^(r-1)U/2)^2θρ x ≤ C e^(r-1)U/2_L^2r'^2 θρ_L^q≤ C e^rU_L^1^θ/r'ρ_L^q . It follows that 2^1-θ I_2 = 2^1-θ⟨ e^(r-1)U/2⟩^2(1-θ)∫_^3( e^(r-1)U/2)^2θρ x ≤ ( 2 |^3|^- 1/r' e^rU_L^1^1/r')^1-θ |^3|^1-θ/3 e^rU_L^1^θ/r'ρ_L^q ≤ e^rU_L^1^1/r' C_r^1-θρ_L^q ≤ ( (1-θ) e^rU_L^1 )^1/r' (1-θ)^-1/r' C_r^1-θρ_L^q . 
By Young's inequality with exponents 1/r, 1/r', we find that 2^1-θ I_2 ≤(1-θ)/r' e^rU_L^1 + C_q,rρ_L^q^r . Altogether, by estimates (<ref>), (<ref>) and (<ref>) we have 4/r-1^2 ∫_^3 |∇ e^r-1/2U|^2 x + ∫_^3 e^rU x ≤1/r' e^rU_L^1 + 4 ^2/r-1∇ (e^(r-1)U/2 ) _L^2^2 + C_q,r ( ^-2(1-θ)ρ_L^q )^r/θ + C_q,rρ_L^q^r . We rearrange this to find e^U _L^r^r ≤ C_q,r ( ^-2(1-θ)ρ_L^q )^r/θ + C_q,rρ_L^q^r . The second term is lower order: by Young's inequality, ρ_L^q^r ≤θ ( ^-2(1-θ)ρ_L^q )^r/θ + (1-θ) ^2r Thus e^U _L^r^r ≤ C_q,r ( ( ^-2(1-θ)ρ_L^q )^r/θ + ^2r ). Finally, by taking the rth root we find that e^U _L^r≤ C_q,r ( ^-2(θ^-1 - 1)ρ_L^q^θ^-1+ ^2 ). Finally, we compute the exponent α = θ^-1 - 1 as a function of q and r: α(q,r) : = θ(q,r)^-1-1 = 2/3 - 1/r /2/3 - 1/q -1= q^-1 - r^-1/2/3 - q^-1 . § STABILITY Using the estimates of Proposition <ref>, we are able to prove the following stability estimate, the VPME equivalent of <cit.>. Let ≤ 1, and let f_1, f_2 be two weak solutions of the (VPME)_ system (<ref>), and set ρ_1:= ∫_^d f_1 dv, ρ_2= ∫_^d f_2 dv. Define the function A(t):=ρ_1(t)_L^∞(𝕋^d)+ρ_2(t)_L^∞(𝕋^d), and assume that A(t) ∈ L^1([0,T]) for some T>0. There exist a dimensional constant C_d>0 and a universal constant c_0>0 such that the following holds: if W_2(f_1(0),f_2(0)) is sufficiently small so that W_2(f_1(0),f_2(0))≤ c_0 and √(|log( ^-2W_2(f_1(0),f_2(0))^2 | log1/2^-2W_2(f_1(0),f_2(0))^2|)|)≥C_d/∫_0^TA(s) ds+√(|log(/e)|), then, for all t ∈ [0,T], W_2(f_1(t),f_2(t))^2 ≤ 2 e^-(√(|log{^-2W_2(f_1(0),f_2(0))^2 | log1/2^-2W_2(f_1(0),f_2(0))^2|}|) - C_d/∫_0^tA(s) ds)^2. Following <cit.>, we define the quantity D(t) through the following identity: for every t ∈ [0,T], D(t) is the unique number in [0,1) solving the equation D(t) = ^-2|log D(t)| 1/2∫_(^d×^d)^2 |X_1(t,x,v)-X_2(t,y,w)|^2π_0(x,v,y,w) + 1/2∫_(^d×^d)^2|V_1(t,x,v)-V_2(t,y,w)|^2 π_0(x,v,y,w). As shown in <cit.>, D(t) is well-defined and Lipschits (so, differentiable almost everywhere). To lighten the notation, we will write λ(t) : = ^-2log D(t). Then we compute, exactly as in <cit.>, D'(t) =1/2∫_(^d×^d)^2λ'(t)|X_1(t,x,v)-X_2(t,y,w)|^2 dπ_0(x,v,y,w) +∫_(^d×^d)^2 λ(t)(X_1(t,x,v)-X_2(t,y,w))·(V_1(t,x,v)-V_2(t,y,w) dπ_0(x,v,y,w) -∫_(^d×^d)^2 (V_1(t,x,v)-V_2(t,y,w)·(E_1(t, X_1(t,x,v))-E_2(t, X_2(t,y,w))) dπ_0(x,v,y,w). By Cauchy-Schwartz inequality and recalling the definition (<ref>) of D(t) we have: D'(t) ≤1/2λ'(t)∫_(^d×^d)^2|X_1(t,x,v)-X_2(t,y,w)|^2 dπ_0(x,v,y,w) +λ(t)∫_(^d×^d)^2|X_1(t,x,v)-X_2(t,y,w)|^2 dπ_0(x,v,y,w)^1/2· ·∫_(^d×^d)^2|V_1(t,x,v)-V_2(t,y,w)|^2 dπ_0(x,v,y,w)^1/2 +∫_(^d×^d)^2|V_1(t,x,v)-V_2(t,y,w)|^2 dπ_0(x,v,y,w)^1/2· ·∫_(^d×^d)^2|E_1(t, X_1(t,x,v))-E_2(t, X_2(t,y,w))|^2 dπ_0(x,v,y,w)^1/2 ≤1/2λ'(t)∫_(^d×^d)^2|X_1(t,x,v)-X_2(t,y,w)|^2 dπ_0(x,v,y,w) +2√(λ(t))D(t)+√( D(t)) E_1(t, X_1(t,x,v))-E_2(t, X_2(t,y,w))_L^2(dπ_0(x,v,y,w)). Adding and subtracting -E_2(t,X_1) in the last term, we obtain: D'(t) ≤1/2λ'(t)∫_(^d×^d)^2|X_1(t,x,v)-X_2(t,y,w)|^2 dπ_0(x,v,y,w) +2√(λ(t))D(t) +√(D(t))T_1+T_2, where T_1= E_2(t, X_1(t,x,v))-E_2(t, X_2(t,y,w))_L^2(dπ_0(x,v,y,w)), T_2= E_1(t, X_1(t,x,v))-E_2(t,X_1(t,x,v))_L^2(dπ_0(x,v,y,w)). Next, arguing as in <cit.> while using Proposition <ref> in place of <cit.>, as in <cit.> we find that T_2≤C/^2 A(t)√(Q(t)/λ(t)), and T_1≤C/^2A(t) √(ϕQ(t)/λ(t)) where we have ϕ(s)={[ slog^2(s) s∈(0,1/e]; s s>1/e. ]. 
Recalling that λ(t) : = ^-2log D(t) we obtain that, if ^2 D(t)/|log(D(t))|∈ (0,1/e) (which is the case of interest for our purposes) then D'(t) ≤( -1/2D'(t)/|log (D(t))|∫_(^d×^d)^2|X_1(t,x,v)-X_2(t,y,w)|^2 dπ_0(x,v,y,w)) +(2√(|log(D(t))|)/+C A(t)/√(|log(D(t))|))D(t) +C A(t)√(D(t))/√(D(t)/|log(D(t))|log^2 (^2 D(t)/|log(D(t))|)). Thus we arrive at the same estimate as was obtained in the proof of <cit.>. The remainder of the argument concludes exactly as in <cit.>. § GROWTH ESTIMATES ON Ρ_L^∞ The goal of this section is to obtain new L^∞ bounds on the spatial density ρ_f for a solution f of the VPME system (<ref>), so to control the quantity A(t) appearing in the statement of Proposition <ref>. We will prove the following proposition. Let d = 2,3 and f_0,∈ L^1 ∩ L^∞(^d ×^d) satisfy the assumptions <ref>. Let T>0 be fixed. Then there exists _∗ depending on T, k and d such that for all ≤_∗: If d=2, there exists a constant C>0 depending on C_0 such that ∫_0^T A(s) s ≤ C ^-4 (1+ T)^3 ( 1 + log (1 + ^-2T) ) If d=3, there exists a constant C>0 depending on C_0 and k such that ∫_0^T A(s) s ≤ C_k (T +1)^4 ^-30 The strategy will be to control the maximal possible growth in the velocity coordinate of a characteristic trajectory of the system. More precisely, we will use the following notation for the characteristic flow: let the pair X(t; s,x,v), V(t; s,x,v) denote the solution of the system of ODEs Ẋ(t; s,x,v) = V(t; s,x,v), V̇(t; s, x,v) = E(X(t; s,x,v)), X(s; s, x,v) = x, V(s ; s,x,v) = v. We will study the quantity Q_∗(t) : = sup_(x,v) ∈^d ×^d |V(t; 0,x,v) - v|. This has been shown in <cit.> to be finite in the case d=2,3 for all t ∈ [0,+∞), under the assumptions <ref> with k_0 > d(d-1). In the case d(d-1) ≥ k_0 > d we will apply our argument to a series of regularized solutions (see <cit.> for the procedure); the estimates we obtain will be uniform in the regularisation parameter and hence will pass to the limit. In particular, following this argument for any fixed >0 shows that, under assumption <ref>, the global weak solution f_ constructed in <cit.> in fact has bounded density ρ_f_∈ L^∞_loc([0, + ∞) ; L^∞(^d)), and thus by <cit.> is the unique solution in this class. We may therefore relax the condition m_0 > d(d-1) in <cit.> to m_0 > d. Our interest in the quantity Q_∗ is motivated by the following lemma: it gives us the control of ρ_f in L^∞ that we seek. Let d ≥ 1 and t ≥ 0. Assume that the assumptions <ref> hold and that quantity Q_∗(t), defined in equation (<ref>), is finite. Then ρ_f(t) _L^∞(^d ×^d)≤ C (1 + Q_∗(t)^d) . Using the representation of f in terms of the characteristic flow and the weighted L^∞ estimate <ref> on f_0,, we may obtain the estimate f(t, x,v) ≤C/1 + |V(t;0,x,v)|^k_0 for all (t,x,v) ∈ [0,+∞) ×^d ×^d . We note by the (reverse) triangle inequality that |V(t;0,x,v)| ≥ ( |v| - |V(t; 0,x,v) - v| )_+ ≥ ( |v| - sup_x',v' |V(t; 0,x',v') - v'| )_+ ≥ (|v| - Q_∗(t))_+ , where the last inequality follows directly from the definition of Q_∗. We then deduce from (<ref>) that f(t,x,v) ≤C/1 + ( |v| - Q_∗(t) )_+^k_0. Next, we integrate (<ref>) over all v ∈^d to obtain a bound on ρ_f: ρ_f(x) ≤ C ∫_^d1/1 + (|v|-Q_∗(t))_+^k_0 v. The integrand is radially symmetric in v. We therefore change to polar coordinates to find that ρ_f(x) ≤ C ∫_0^∞r^d-1/1 + (r-Q_∗(t))_+^k_0 r ≤ C ∫_0^Q_∗(t) r^d-1 r + C ∫_Q_∗(t)^∞r^d-1/1 + (r-Q_∗(t))_+^k_0 r ≤ C ∫_0^Q_∗(t) r^d-1 r + C ∫_0^∞(r + Q_∗(t))^d-1/1 + r^k_0 r . We observe that (r + Q_∗(t))^d-1≤ 2^d-1 ( r^d-1 + Q_∗(t)^d-1 ) . 
Next compute the integral: ∫_0^Q_∗(t) r^d-1 r = 1/d Q_∗(t)^d. Finally, since k_0 > d we may estimate ∫_0^∞r^d-1/1 + r^k_0 r + ∫_0^∞1/1 + r^k_0 r ≤ C_d, k_0 < + ∞. We conclude that ρ_f(x) ≤ C_d, k_0 (Q_∗(t)^d + Q_∗(t)^d-1 + 1 ) ≤ C_d, k_0(1 + Q_∗(t)^d) for all x ∈^d. The statement follows immediately. Our aim is therefore to obtain estimates on Q_∗, as these will entail estimates on A. As in previous works on this subject <cit.>, our method will differ depending on the dimension. §.§ Case d=2 Let d=2. Let (1 + |v|^k_0)f_0 ∈ L^1 ∩ L^∞(^2 ×^2) for some k_0 > 2. Let f denote the unique bounded density solution of (<ref>) with initial datum f_0. Then the spatial density ρ_f satisfies the estimate sup_[0,t]ρ_f(t, ·) _L^∞(^2)≤ C ( 1 + ^-4 t^2) (1 + log(1 + ^-2 t)) , for all t>0. We will need the following estimate for the electric field, which can be found in <cit.>. Let h ∈ L^1 ∩ L^∞(^2), and let U be the unique W^1,2(^2) solution of the Poisson equation ^2 Δ U = h . Then there exists a constant C depending only on h _L^2(^2) such that ∇ U _L^∞(^2)≤ C ^-2 ( 1 + |log h _L^∞(^2) |^1/2 ) . Under the assumptions of Proposition <ref>, Q_∗ satisfies the estimate Q_∗(t)^2 ≤ C ^-4 t^2 (1 + log(1 + ^-2 t)) , for some constant C>0 independent of t and . By Lemma <ref>, the electrostatic potential U satisfies the assumptions of Lemma <ref>, with h _L^∞(^2)≤ C(1 + Q_∗(t)^2) by Lemma <ref>. Hence the electric field is uniformly bounded: E(t) _L^∞(^2) = ∇ U (t) _L^∞(^2)≤ C ^-2 ( 1 + |log C(1+Q_∗(t)^2) |^1/2 ) . Next, observe that for any characteristic trajectory (X(t; 0,x,v), V(t; 0,x,v)), |V(t; 0,x,v) - v| ≤∫_0^t |E(X(τ; 0,x,v))| τ≤ t E _L^∞([0,t] ×^d) . By (<ref>), |V(t; 0,x,v) - v| ≤ C ^-2 t sup_s ∈ [0,t] ( 1 + |log C(1+Q_∗(s)^2) |^1/2 ) . Taking supremum over (x,v), we obtain that Q_∗(t) ≤ C ^-2 t sup_s ∈ [0,t] ( 1 + |log C(1+Q_∗(s)^2) |^1/2 ) ≤ C t ^-2 ( 1 + |log C(1+ Q_∗(t)^2) |^1/2 ) . Since √(x+y)≤√(x) + √(y)≤√(2(x+y)), we find that Q_∗(t) ≤ C ^-2 t ( 1 + log (1+ Q_∗(t)^2 ) )^1/2 , where C>0 is a larger constant. We rearrange this to obtain the inequality Q_∗(t)^2/ 1 + log (1+ Q_∗(t)^2 ) ≤ C ^-2 t . The function b : y ↦y/1 + log(1+y) is continuous and strictly increasing for y ≥ 0—see Lemma <ref> below—and therefore has a well-defined, continuous, strictly increasing inverse b^-1. We also show in Lemma <ref> that this inverse obeys the bound b^-1(u) ≤ 2 u (1 + log(1+u)) . We deduce that, for some C > 0, Q_∗(t)^2 ≤ C ^-4 t^2 (1 + log(1 + C ^-4 t^2)) . Since log(1+u) ≤log(1 + √(u))^2 ≤ 2 log(1 + √(u)), after possibly enlarging the constant C>0 we find that Q_∗(t)^2 ≤ C ^-4 t^2 (1 + log(1 + ^-2 t)) which completes the proof. Proposition <ref> then follows from Lemma <ref> and Lemma <ref>. The estimate (<ref>) then follows upon integrating over time. §.§ Case d=3 In order to control the L^∞ norm of the density in the three dimensional case, we will make use of techniques for estimating the growth of characteristic trajectories over time. These can be traced back to the development of the well-posedness theory for the 3D electron Vlasov-Poisson system <cit.> and results on the propagation of moments <cit.> on the torus ^3, where the approach of Lions-Perthame does not apply and techniques based on characteristic trajectories are used instead. Our method will be based in particular on the techniques of Chen and Chen <cit.> for the propagation of moments for the electron model. We therefore introduce the notation M_k(t) : = sup_s ∈[0,t]∫_^3 ×^3 (1 + |v|^k) f(s,x,v) x v for the velocity moment of order k. 
We will prove an estimate for small increments of the characteristic trajectories. For all t ∈[0,T] and δ∈ (0,t], we define Q(t,δ) by Q(t, δ) : = sup_(x,v) ∈^d ×^d∫_t-δ^t |E(X(s ; 0, x, v)) | s . Observe that Q_∗(t) ≤ Q(t,t), so that obtaining estimates on Q(t,δ) for all δ will suffice to control the density. The main new steps required compared to <cit.> are: * To handle the fact that the electric field depends on ρ through the nonlinear Poisson-Boltzmann equation rather than a linear Poisson equation—we will do this by using the splitting of the field E = E̅ + E; and * To quantify carefully the dependence of constants on in the quasineutral scaling. For this we will need to revisit the arguments of <cit.> in detail. First, we relate estimates on the moments to estimates on the electric field. The Coulomb kernel in the three-dimensional torus is the function K_^3 defined by K_^3 = - ∇ G_^3, - Δ G_^3 = δ_0 - 1 in ^3 . We note the following result for convolutions against K_^3. The case q = +∞ is proved in <cit.>; the general case can be proved using a similar interpolation argument. Let d=3, 1 ≤ p < 3 < q ≤ +∞ and let h ∈ L^p ∩ L^q. Then the Coulomb kernel K_^3∗ h _L^∞≤ C_p,q h _L^p^1-θ h _L^q^θ, where the exponent θ satisfies 1/3 = 1-θ/p + θ/q. There exists a constant depending only on f_0 _L^∞, E[f_0] such that, for all t ∈ [0, +∞), E__L^∞≤ C ^-7 (^-3∧ M_k^1/2(k-2)) . By applying Lemma <ref> with the choice p = 5/3, q = r, we deduce that for any r ∈ (3, +∞), E__L^∞ = ^-2 K_^3∗ e^U _L^∞≤ C_r ^-2 e^U _L^5/3^1-β e^U _L^r^β, where β = 4/3 (3 - 5/r )^-1. By Lemma <ref>, e^U_L^r≤ (C_r ^-2)^3(3 - 5/r)ρ_L^5/3^5(2 - 3/r) + c_r ^2, and thus E__L^∞≤ C_r ^-2 (C_r ^-2)^4 (1+ ρ_L^5/3)^10 . By (<ref>), ρ_L^5/3 is uniformly bounded and hence E__L^∞≤ C_r ^-10 . Finally, we fix some r > 3 and thereby deduce the result. Next, by the moment interpolation estimate (<ref>) we recall that ρ_L^1 + k/3≤ C M_k^3/k+3 . By Lemma <ref>, for any η, ϕ∈ [0,1] and r satisfying η + ϕ≤ 1 and 1/3 = ϕ3/k+3 + η/r + 3/5 (1-η-ϕ) , E__L^∞ = ^-2 K_^3∗ e^U _L^∞≤ C_r ^-2 e^U _L^5/3^1-η-ϕ e^U_L^1 + k/3^ϕ e^U _L^r^η, Choose ϕ = k+3/6(k-2), which implies that η (3/5 - 1/r ) = 1/6 . By (<ref>), Lemmas <ref> and <ref>, (<ref>) and (<ref>), E__L^∞≤ C_r ^-2 M_k^1/2(k-2)^-30(3/5 - 1/r)η≤ C_r ^-7 M_k^1/2(k-2) for any admissible choice of r. We are able to prove the following estimate on Q(t,δ)—this is analogous to the estimate <cit.>, but for the ion case and quantified in . Let k > 3 and assume that sup_s∈[0,t]M_k(s) is finite. Then, for all δ∈ [0,t], Q(t,δ)^3/2≤ C ^-2δ^1/2((δ Q(t,δ))^1/2 ( Q(t,δ)^4/3 + M_k(t)^1/2(k-2) + ^-5 (^-3∧ M_k^1/2(k-2)) ) + M_k(t)^1/2(k-2) ). We use the decomposition E = E̅ + E. To estimate E̅, we note (see e.g. <cit.>) that K_^3 may be written in the form K_^3(x) = C |x|_^3^-2 + K_0(x), if | x |_^3 < 1/4 K_1(x) otherwise, for some smooth functions K_0, K_1. We may then write Q(t, δ) ≤sup_(x,v) ∈^3 ×^3 C ^-2∫_t-δ^t ∫_^d∫_^df(s,x',v')/|x' - X(s; 0,x,v)|^21_B_1/4(x' - X(s; 0,x,v)) x' v' s + δ^-2 (K_0 + K_1) ∗ρ_f _L^∞((t-δ, t] ×^3) + δE_L^∞((t-δ, t] ×^3) . By Lemma <ref>, there exists a constant C>0 such that E__L^∞≤ C ^-7 (^-3∧ M_k^1/2(k-2)). Thus Q(t, δ) ≤ C ^-7 (^-3∧ M_k^1/2(k-2)) + sup_(x,v) ∈^3 ×^3 C ^-2∫_t-δ^t ∫_^3 ×^3f(s,x',v')/|x' - X(s; 0,x,v)|^21_B_1/4(x' - X(s; 0,x,v)) x' v' s . The second term of (<ref>) is estimated using methods based on <cit.>. 
First, fix a particular characteristic trajectory X_∗(s) : = X(s ; 0,x_∗ ,v_∗), V_∗(s) : = V(s ; 0,x_∗,v_∗) considered as a lifted trajectory in ^3 ×^3, as was done in <cit.>. Next, we consider the following decomposition of the set [t-δ, t] ×^3 ×^3: for some parameters R, γ > 0 to be determined, let Λ_γ : ^3 →_+ denote the function Λ_γ(v) = 1 + γ |v|^2 1_|v| ≤γ + γ^3-k |v|^k 1_|v| > γ , and let the sets Ω, Ω_G, Ω_B, Ω_U ⊂ [t-δ, t] ×^3 ×^3 be defined by Ω : = { (s,x,v) ∈ [t-δ, t] ×^3 ×^3 : |x - X_∗(s)| < 1/4} Ω_G : = { (s,x,v) ∈Ω : |v - V_∗(s)| ≤ 5 Q(t,δ) or |v| ≤ 5 Q(t,δ) } Ω_B : = { (s,x,v) ∈Ω : |x - X_∗(s)| ≤R/Λ_γ(v)}∖Ω_G Ω_U : = Ω∖ (Ω_G ∪Ω_B) . The decomposition (<ref>)-(<ref>)-(<ref>) is taken as in <cit.>, except that we replace the function Λ(v) = 1 + |v|^1+ in the definition (<ref>) of the set Ω_B with Λ_γ as defined above in (<ref>). The purpose of this is to allow us to obtain a sharp exponent in our eventual final estimate, with no `loss of an epsilon'. Observe that ∫_^d1/Λ_γ(v) v ≤γ^-1∫_|v| ≤γ |v|^-2 + γ^k-3∫_|v| > γ |v|^-k v ≤ C, for some constant C>0 independent of γ, and ∫_^d ×^dΛ_γ(v) f(t,x,v) x v ≤ 1 + γ M_2(t) + γ^3-k M_k(t) . From now on, we will set γ : = ( M_k(t) M_2(t)^-1 )^1/(k-2). This ensures that sup_s ≤ t∫_^d ×^dΛ_γ(v) f(s,x,v) x v ≤ 1 + M_2(t)^k-3/k-2 M_k(t)^1/k-2≤ C M_k(t)^1/k-2 , where the last inequality follows from M_k ≥1 and the conservation of energy: M_2(t) ≤ M_2(0). We also write Λ : = Λ_γ to lighten the notation. Region Ω_G: In this region, either |v| ≤ 5 Q(t,δ) or |v - V_∗(s)| ≤ 5 Q(t,δ). Hence 0 ≤ f|_Ω_G(s,x,v) ≤ f_0 _L^∞ ( 1_|v| ≤ 5 Q(t,δ) + 1_|v - V_∗(s) | ≤ 5 Q(t,δ) ) . Thus, for all (s,x) ∈ [t- δ, t], 0 ≤∫_^3 f (s,x, v) v ≤∫_^3 f |_Ω_G (s,x, v) v ≤ C f_0 _L^∞ Q(t,δ)^3 , and hence ∫_^3 f |_Ω_G (·, ·, v) v _L^∞_s,x≤ C f_0 _L^∞ Q(t,δ)^3 . It then follows by <cit.> (see Lemma <ref>, case q=+∞) that ∫_Ω_Gf(s,x,v)/|x - X_∗(s)|^2 x v s ≤ C δ Q(t,δ)^4/3 . Region Ω_B: In this region, |x - X_∗(s)| ≤ R Λ (v)^-1; hence ∫_Ω_Bf(s,x,v)/|x - X_∗(s)|^2 x v s ≤ f_0 _L^∞∫_t-δ^t ∫_^3∫_|y| ≤ R Λ(v)^-1 |y|^-2 y v s ≤ C f_0 _L^∞ R ∫_t-δ^t ∫_^3Λ (v)^-1 v . Thus, by (<ref>), ∫_Ω_Bf(s,x,v)/|x - X_∗(s)|^2 x v s ≤ C δ R. Region Ω_U: Here we follow the arguments of <cit.>, replacing the function 1 + |v|^1+ in <cit.> by Λ (v): we wish to estimate I_U : = ∫_Ω_Uf(s,x,v)/|x - X_∗(s)|^2 x v s . We perform the change of variables (x̃, ṽ) = ( X(t; s,x,v) , V(t; s,x,v) ). Since then f(s,x,v) = f(t,x̃, ṽ), we have I_U = ∫_t-δ^t ∫_^3 ×^3 f(t,x̃, ṽ) 1_U (s, Z(s ; t, x̃, ṽ) )/|X(s ; t, x̃, ṽ) - X_∗(s)|^2x̃ṽ s . Next, write the x domain as the union ^3 = ⋃_α∈^3α + [ - 1/2, 1/2 )^3. Then I_U = ∑_α∈^3∫_ [ - 1/2, 1/2 )^3∫_^3 f(t, x + α, v) ∫_t-δ^t 1_U (s, Z(s; t, x+α, v) )/| X(s; t, x+α, v) - X_∗(s)|^2 s x v By periodicity we note that f(t, x+α, v = f(t,x,v) and E(s,x+α) + E(s,x)for all α∈^3 and all s ≥ 0. It follows that the (lifted) flow commutes with shifts in the x variable: X(s; t, x+α, v) = α + X(s; t,x,v) , V(s; t, x+α, v) = V(s; t,x,v) . We introduce the shorthand Z̃(s) = (X̃(s), Ṽ(s) ) = ( X(s; t,x,v), V(s; t,x,v) ) for (x,v) ∈ [ - 1/2, 1/2 )^3 ×^3, and X̃_α(s) = α + X̃(s), Z̃_α = (X̃_α, Ṽ). Thus I_U = ∫_ [ - 1/2, 1/2 )^3∫_^3 f(t, x, v) ∑_α∈^3∫_t-δ^t 1_U (s, Z̃_α(s) )/| X̃_α (s) - X_∗(s)|^2 s x v . We need only include those α in the set A(x,v) : = {α∈^3 : ∃ s ∈ [t-δ, t], (s, X̃_α(s), Ṽ(s) ) ∈Ω_U } ; hence I_U = ∫_ [ - 1/2, 1/2 )^3∫_^3 f(t, x, v) ∑_α∈ A(x,v)∫_t-δ^t 1_U (s, Z̃_α(s) )/| X̃_α (s) - X_∗(s)|^2 s x v . 
We now seek a lower bound on | X̃_α (s) - X_∗(s)|, given that (s, X̃_α (s), Ṽ(s) ) ∈Ω_U. First, from the definition of Ω_U we have |X̃_α(s) - X_∗(s)| > R/Λ ( Ṽ(s) ) . A second estimate can be obtained by using the dynamics of solutions to the characteristic ODE. Arguing exactly as in <cit.>, we may show that for all τ∈ [t-δ, t], | X̃_α (τ) - X_∗(τ) | ≥ |τ - τ_∗| ( |Ṽ(τ_∗) - V_∗( τ_∗) | - Q(t,δ) ), where τ_∗ is such that |X̃_α (τ_∗) - X_∗( τ_∗) | = min_τ∈ [t-δ, t] | X̃(τ) - X_∗(τ) | . Indeed, letting ξ(τ) : = X̃_α (τ) - X_∗(τ), by the mean value theorem we have |ξ(τ) - ξ(τ_∗) - (τ - τ_∗) ξ̇(τ_∗)| ≤ |τ - τ_∗| sup_θ∈ [t-δ, t] |ξ̇(τ_∗) - ξ̇(θ)| ≤ |τ - τ_∗| Q(t,δ) . Since τ_∗ is a minimiser, (τ - τ_∗) ξ (τ_∗) ·ξ̇(t_∗) ≥ 0, and hence | ξ(τ_∗) + (τ - τ_∗) ξ̇(τ_∗) |^2 ≥ |τ - τ_∗|^2 |ξ̇(τ_∗)| . We conclude by the (reverse) triangle inequality. Next, we determine bounds for (<ref>) and (<ref>) depending on the values of the characteristic trajectories at the final time t. First recall that, by definition of Ω_U, | Ṽ(s) - V_∗(s)| > 5Q. Moreover, by definition of Q(t,δ), for any τ∈ [t-δ, t] we may estimate | ( Ṽ(τ) - V_∗(τ) ) - ( Ṽ(s) - V_∗(s) ) | ≤ 2 Q(t, δ) ≤2/5 | Ṽ(s) - V_∗(s) | . Thus, by the triangle inequality, 3/5 | Ṽ(s) - V_∗(s) | ≤ | Ṽ(τ) - V_∗(τ)| ≤7/5 | Ṽ(s) - V_∗(s) | . Choosing τ = t, we have 5/7 | v - V_∗(t)| ≤ | Ṽ(s) - V_∗(s) | ≤5/3 | v - V_∗(t)| . Hence we may rewrite the global bounds (<ref>) in terms of the value at time t: for all τ∈ [t-δ, t], 3/7 | v - V_∗(t)| ≤ | Ṽ(τ) - V_∗(τ)| ≤7/3 | Ṽ(s) - V_∗(s) | . Moreover, by (<ref>) once again, Q(t,δ) ≤1/5 | Ṽ(s) - V_∗(s) | ≤1/3 | v - V_∗(t)| . Then, by (<ref>), | X̃_α (τ) - X_∗(τ) | ≥2/21 |τ - τ_∗| | v - V_∗(t)| . Similarly, since |Ṽ(s) - v| ≤ Q(t,δ) ≤1/5 |Ṽ(s)|, then |v| ≥ |Ṽ(s)| - |Ṽ(s) - v| ≥4/5 |Ṽ(s)| . Since Λ is a radially increasing function, Λ(|Ṽ(s)|) ≤Λ ( 5/4 v ) ≤ ( 5/4 )^k Λ (v) . Hence, by (<ref>), |X̃_α(s) - X_∗(s)| > C_k R/Λ (v ) . By combining (<ref>) and (<ref>), we find that, for some τ_∗∈ [t-δ, t] |X̃_α(s) - X_∗(s)|^-2≤ C_k ( R^-2Λ(v)^2 ∧ |s - τ_∗|^-2 |v - V_∗(t)|^-2 ). Therefore I_U ≤ C_k ∫_ [ - 1/2, 1/2 )^3∫_^3 f(t, x, v) |A(x,v)| sup_τ_∗∫_t-δ^t ( R^-2Λ(v)^2 ∧ |s - τ_∗|^-2 |v - V_∗(t)|^-2 ) s x v . We calculate sup_τ_∗∫_t-δ^t ( R^-2Λ(v)^2 ∧ |s - τ_∗|^-2 |v - V_∗(t)|^-2 ) s ≤ 2 ∫_0^δ ( R^-2Λ(v)^2 ∧ |θ|^-2 |v - V_∗(t)|^-2 ) θ ≤ C Λ(v)/R |v - V_∗(t)|. Finally, by <cit.> and (<ref>), |A(x,v)| ≤ C ( 1 + ∫_t-δ^t |Ṽ(τ) - V_∗ (τ) | t ) ≤ C ( 1 + δ | v - V_∗ (t) | ) . Therefore I_U ≤ C_k R^-1∫_ [ - 1/2, 1/2 )^3∫_^3 f(t, x, v) Λ(v) ( δ + | v - V_∗ (t) |^-1 ) x v . By (<ref>), | v - V_∗ (t) |^-1≤1/3 Q(t,δ)^-1, and thus by (<ref>) we find that I_U ≤ C_k R^-1 ( δ + Q(t,δ)^-1 ) ∫_ [ - 1/2, 1/2 )^3∫_^3 f(t, x, v) Λ(v) x v ≤ C_k δ1/R (1 + (δ Q(t,δ))^-1 )M_k(t)^1/k-2 . Summing over Ω_G, Ω_B, Ω_U gives ∫_t-δ^t ∫_^d∫_^df(s,x,v)/|x - X_∗(s)|^2 x v s ≤ C δ Q(t,δ)^4/3 + C δ R + C δ1/R (1 + (δ Q(t,δ))^-1 )M_k(t)^1/k-2 . The optimal choice of R is R = (1 + (δ Q(t,δ))^-1 )^1/2 M_k(t)^1/2(k-2), giving the estimate ∫_t-δ^t ∫_^d∫_^df(s,x,v)/|x - X_∗(s)|^2 x v s ≤ C δ (Q(t,δ)^4/3 + (1 + (δ Q(t,δ))^-1 )^1/2M_k(t)^1/2(k-2) ) . Substituting this into inequality (<ref>) gives Q(t,δ) ≤ C ^-2δ (Q(t,δ)^4/3 + (δ Q(t,δ))^-1/2M_k(t)^1/2(k-2) + (M_k(t)^1/2(k-2) + ^-5 (^-3∧ M_k^1/2(k-2)) ) . Hence Q(t,δ)^3/2≤ C ^-2δ^1/2 ((δ Q(t,δ))^1/2( Q(t,δ)^4/3 + M_k(t)^1/2(k-2) +^-5 (^-3∧ M_k^1/2(k-2)) + M_k(t)^1/2(k-2) ) ; this completes the proof. 
The relation (<ref>) can then be resolved so as to obtain an estimate on Q(t,t), by using an extension of the method of <cit.>. We will explain the argument in detail in order to show how to keep proper track of the dependence on and handle the extra term ^-7 (^-3∧ M_k^1/2(k-2)) arising from the additional part E of the electric field. For all t ≥ 0 there exists δ_∗(t) such that for all δ≤δ_∗(t) Q(t,δ) ≤ C ^-4/3δ^1/3 M_k(t)^1/3(k-2) . Explicitly, we may take δ_∗(t) : = C M_k(t)^1/2(k-2) (M_k(t)^1/2(k-2) + ^-5 (^-3∧ M_k^1/2(k-2)) )^-3/2 . Since lim_δ→ 0Q(t,δ) = 0, for sufficiently small δ∈ (0,t] we have (δ Q(t,δ))^1/2 ( Q(t,δ)^4/3 + M_k(t)^1/2(k-2) + ^-5 (^-3∧ M_k^1/2(k-2)) ) ≤ 2 M_k(t)^1/2(k-2). As long as this holds, (<ref>) will imply an estimate of the form Q(t,δ)^3/2≤ C ^-2δ^1/2 M_k(t)^1/2(k-2) ; that is, Q(t,δ) ≤ C ^-4/3δ^1/3 M_k(t)^1/3(k-2) . We wish to find an explicit δ_∗(t) such that (<ref>) holds for all δ∈ (0, δ_∗(t)]. First, let δ̅: = sup{δ∈ (0,t] : δ^1/2 Q(t,δ)^11/6≤ M_k(t)^1/2(k-2), (δ Q(t,δ))^1/2 ( M_k(t)^1/2(k-2) + ^-5 (^-3∧ M_k^1/2(k-2)) ) ≤ M_k(t)^1/2(k-2)} . Then (<ref>) holds at least for all δ∈ (0, δ̅]. We now seek a lower bound on δ̅. If δ̅= t, then there is nothing to prove. Otherwise, one of the two inequalities defining the supremum (<ref>) is attained when δ = δ̅. We consider each of the cases separately. Case 1: (δ Q(t,δ))^1/2 ( M_k(t)^1/2(k-2) + ^-5 (^-3∧ M_k^1/2(k-2) )) = M_k(t)^1/2(k-2). Hence M_k(t)^1/2(k-2) ( M_k(t)^1/2(k-2) + ^-5 (^-3∧ M_k^1/2(k-2) ))^-1 = (δ̅Q(t,δ̅))^1/2≤ C ^-2/3δ̅^2/3 M_k(t)^1/6(k-2) . After rearranging, we obtain the lower bound δ̅≥ C M_k(t)^1/2(k-2) (M_k(t)^1/2(k-2) + ^-5 (^-3∧ M_k^1/2(k-2)))^-3/2 . Case 2: δ̅^1/2 Q(t, δ̅)^11/6 = M_k(t)^1/2(k-2). We will show that this case is excluded for all sufficiently small . By (<ref>), M_k(t)^1/(k-2) = δ̅Q(t,δ̅)^11/3≤ C ^-44/9 M_k(t)^11/9(k-2)δ̅^20/9. We rearrange this to find that C ^11/5 M_k(t)^-1/10(k-2)≤δ̅, and thus Q(t,δ̅) = (δ̅^-1/2 M_k(t)^1/2(k-2))^6/11≤ C ^-3/5 M_k(t)^3/10(k-2). However, in this case M_k(t)^1/2(k-2) + ^-5 (^-3∧ M_k^1/2(k-2)) ≤ Q(t, δ̅)^4/3. If M_k^1/2(k-2)≤^-3, then ^-5 M_k^1/2(k-2)≤ Q(t, δ̅)^4/3≤ C ^-4/5 M_k(t)^2/5(k-2). Rearranging gives M_k^1/2(k-2)≤ C ^21, which gives a contradiction for sufficiently small , since M_k(t) ≥ 1 by definition (<ref>). Otherwise, if M_k^1/2(k-2) > ^-3, then ^-8, M_k(t)^1/2(k-2)≤ Q(t, δ̅)^4/3. Hence ^-8/5 M_k(t)^2/5(k-2) = (^-8)^1/5 (M_k(t)^1/2(k-2) )^4/5≤ Q(t, δ)^4/3≤ C ^-4/5 M_k(t)^2/5(k-2) , from which it follows that ^-4/5≤ C, for some universal constant C >0. This is a contradiction for all ≤_∗ for some _∗ > 0 depending only on C. Finally, let δ_∗(t) : = C M_k(t)^1/2(k-2) (M_k(t)^1/2(k-2) + ^-5 (^-3∧ M_k^1/2(k-2)) )^-3/2 , which provides the desired lower bound by (<ref>). Next, we wish to prove an estimate for increments Q(t, t-s) where t-s > δ_∗(t). To do this, we use the approach from Chen-Chen <cit.>, in which the interval [s,t] is subdivided into subintervals each of which is small enough that the estimate of Lemma <ref> may be applied. We summarise this part of their estimates, quantified in , in the following lemma. Let 0≤ a < b. Then Q(b, b-a) ≤ C ^-4/3( b-a ) Δ(a,b)^-2/3 M_k(b)^1/3(k-2) , where Δ(a,b) : = inf_s∈[a,b]δ_∗(s) ∧ (b-a) Split the time interval [a, b] into subintervals of length no more than Δ(a,b) (see Figure <ref>): let n = n(a,b) : = ⌊b-a/Δ(a,b)⌋, such that [a, b] = [a, b - n Δ(a,b)) ∪⋃_j=1^n(a,b) (b - jΔ(a,b), b - (j-1)Δ(a,b)]. 
Then, by splitting the integral defining Q(b,b-a) according to the regions (<ref>), we find that Q(b, b-a) ≤ Q(b - n Δ(a,b), b - n Δ(a,b)) + ∑_j=1^n Q( b - (j-1) Δ(a,b), Δ(a,b)) . Now estimate each summand using (<ref>): Q(b, b-a) ≤ C ^-4/3Δ(a,b)^1/3∑_j=0^n M_k(b - j Δ(a,b) )^1/3(k-2) . Since M_k(s) is a non-decreasing function of s, M_k(b - j Δ(a,b)) ≤ M_k(b) for all j and hence Q(b, b-a) ≤ C (n(a,b) + 1) ^-4/3Δ(a,b)^1/3 M_k(b)^1/3(k-2) . Since n(a,b) ≤ (b-a)Δ(a,b)^-1, this completes the proof. Our next step is to apply the previous result in order to estimate Q(t,t). To do so we need to estimate the infimum of δ_∗. We begin by writing δ_∗(s) = h_(M_k(s)^1/2(k-2)) where h_(z) : = C (1 + ^-5)^-3/2z^-1/2 z ≤ ^-3 C z(z + ^-8)^-3/2 z > ^-3 . Observe (see Figure <ref>) that h_ is a decreasing function for z ∈ [0, ^-3∪ (2 ^-8, +∞) and increasing for z ∈ ( ^-3, 2 ^-8). Since M_k(s) is a non-decreasing function of s, we may identify corresponding time intervals of monotonicity for δ_∗(s). Let t_I := inf{ s ≥ 0 : M_k(s)^1/2(k-2) > ^-3} t_II := inf{ s ≥ 0 : M_k(s)^1/2(k-2) > 2 ^-8} . Then δ_∗(s) is a non-increasing function of s for s ∈ [0, t_I) ∪ (t_II, + ∞) and a non-decreasing function of s for s ∈ (t_I, t_II). We therefore see a difference between the non-increasing regions [0, t_I) and [t_II, + ∞) and the non-decreasing region [t_I, t_II): in the non-increasing regions, the infimum of δ_∗ of any subinterval is attained at the right hand endpoint of the interval, whereas in the non-decreasing region the infimum is attained at the left hand endpoint. Consequently, we will use different methods to estimate Q depending on whether we consider a non-increasing or non-decreasing region. At this point it is instructive to compare the corresponding function in the electron Vlasov-Poisson case, which is δ_∗(s) = C M_k(s)^-1/4(k-2). This is a non-increasing function of s for alls. In the ion case, we will follow the argument of <cit.> in the non-increasing regions (Lemma <ref>), as this is suited to the non-increasing case. In the non-decreasing region, however, we will develop a new argument (see Lemma <ref>). In Regions I and III we follow the method of <cit.> and deduce the following estimate by a direct application of Lemma <ref>. For all t ≤ t_I, Q(t , t ) ≤ C ^-7 t M_k(t )^1/2(k-2) . For all t > t_II, Q(t, t - t_II) ≤ C ^-2( t - t_II) M_k(t)^1/2(k-2) . Note that δ(s) is non-increasing for all s ∈ [0, t_I) and all s > t_II. Thus, if t ≤ t_I, the infimal value over [0,t] is realised at the upper endpoint of the interval, s = t, and Δ(0 , t ) = δ_∗(t ) ∧ t = h_(M_k(t )^1/2(k-2)) ∧ t . For t > t_II, an identical argument shows that Δ(t_II, t ) = δ_∗(t) ∧ (t - t_II). By definition of t_I, M_k(s)^1/2(k-2)≤^-3 for all s ≤ t_I; in particular, when t ≤ t_I we may substitute the definition of h_(<ref>) for z≤^-3 to rewrite (<ref>) as Δ(0, t) = C (1 + ^-5)^-3/2 M_k(t)^- 1/4(k-2)∧ t . Then, by Lemma <ref>, Q(t, t) ≤ C ^-2 (1 + ^-5) t M_k(t )^1/2(k-2)≤ C ^-7 t M_k(t )^1/2(k-2) . For t > t_II, by definition of t_II we have M_k(t)^1/2(k-2)≥ 2 ^-8. Then, substituting the definition of h_(<ref>) for z ≥ 2^-8 into (<ref>) gives Δ(t_II, t) = C M_k(t)^1/2(k-2)(M_k(t )^1/2(k-2) + ^-8)^-3/2≥ C M_k(t )^- 1/4(k-2) , and by Lemma <ref>, Q(t_II, t - t_II) ≤ C ^-2( t - t_II) M_k(t)^1/2(k-2) . In Region II, the argument in Lemma <ref> would give Δ(t_I, t ) = δ_∗(t_I) = h_(M_k(t_I)^1/2(k-2)) = h(^-3) = C (1 + ^5)^-3/2^10, t ∈ [t_I, t_II], and thus we would find the estimate Q(t , t - t_I) ≤ C ^-8 (t - t_I) M_k(t )^1/3(k-2) . 
At t=t_II this implies that Q(t_II , t_II - t_I) ≤ C ^-40/3 ( t_II - t_I) . In the following lemma, we show that we may in fact obtain an improved estimate of order ^-10, by using a further subdivision of the interval [t_I, t_II]. This is a key difference in our proof from the method of Chen-Chen <cit.> for the electron model. Let t ∈ (t_I, t_II]. Then Q(t , t - t_I) ≤ C ^-10 (t - t_I) + C ^3 M_k(t)^1/2(k-2) . Recall that M_k(t)^1/2(k-2)≤^-8 for t ≤ t_II, so that the second term is of lower order in ^-1: Q(t , t - t_I) ≤ C ^-10 (t - t_I) + C ^-5≤ C ^-10 (1 + t - t_I) . Consider the following subdivision of the interval [t_I, t ]: first, let τ_0 := t_I; then, for j ≥ 1, let τ_j := inf{ s > τ_j-1 : M_k(s)^1/2(k-2) > 2 M_k(τ_j-1)^1/2(k-2)} . Since M_k(s) is a continuous function of s, we have M_k(τ_j)^1/2(k-2) = 2 M_k(τ_j-1)^1/2(k-2) = 2^j M_k(τ_0)^1/2(k-2) for all j. Then let J : = inf{ j ≥ 0 : τ_j ≥ t } ; note that J is finite with J(t) ≤ 1 + logM_k(t)^1/2(k-2) - logM_k(t_I)^1/2(k-2)/log 2≤ 1 + log(^3 M_k(t)^1/2(k-2) ) /log 2 , since 2^J-1^-3 = 2^J-1 M_k( t_I )^1/2(k-2) = M_k( τ_J-1 )^1/2(k-2)≤ M_k(t)^1/2(k-2) . For convenience we then redefine τ_J = t. Thus [t_I, t] may be written as the following (almost disjoint) union of intervals (see Figure <ref>): [t_I, t ] = ⋃_j=1^J [τ_j-1, τ_j]. We will now apply Lemma <ref> on each subinterval [τ_j-1, τ_j]. Since δ_∗ (s) is increasing for s∈ (t_I, t_II), its minimal value is found at the left endpoint of any subinterval, i.e. Δ(τ_j-1, τ_j ) = δ_∗(τ_j-1) = C M_k(τ)^1/2(k-2) (M_k(τ)^1/2(k-2) + ^-8 )^-3/2∧ (τ_j - τ_j-1) ≥ C ^13 M_k(τ_j-1)^1/2(k-2)∧ (τ_j - τ_j-1) Then, by Lemma <ref>, Q(τ_j, τ_j - τ_j-1) ≤ C ^-10 (τ_j - τ_j-1) ( M_k(τ_j)/M_k(τ_j-1) )^1/3(k-2) if Δ(τ_j-1, τ_j ) = C ^13 M_k(τ_j-1)^1/2(k-2), or C ^-4/3 (τ_j - τ_j-1)^1/3 M_k(τ_j)^1/3(k-2) if Δ(τ_j-1, τ_j ) = τ_j - τ_j-1 . In the second case, τ_j - τ_j-1≤ C ^13 M_k(τ_j-1)^1/2(k-2), and so Q(τ_j, τ_j - τ_j-1) ≤ C ^3 M_k(τ_j-1)^1/2(k-2)( M_k(τ_j)/M_k(τ_j-1) )^1/3(k-2) . Now recall that, by definition of the τ_j(<ref>), M_k(τ_j)/M_k(τ_j-1) = 2. By summing the two cases of (<ref>) we have Q(τ_j, τ_j - τ_j-1) ≤ C 2^1/3(k-2)^-10 (τ_j - τ_j-1 ) + C 2^1/3(k-2)^3 M_k(τ_j-1)^1/2(k-2). Since M_k(τ_j-1) = 2^j-1 M_k(τ_0) for j ≤ J-1 and M_k(τ_J-1) ≤ M_k(t), Q(τ_j, τ_j - τ_j-1) ≤ C 2^1/3(k-2)^-10 (τ_j - τ_j-1 ) + C ^3 2^1/3(k-2) + j-1/2(k-2) M_k(τ_0)^1/2(k-2) j ≤ J-1, C 2^1/3(k-2)^-10 (τ_j - τ_j-1 ) + C ^3 M_k(t)^1/2(k-2) j = J . By summing over the subintervals, we obtain the estimate Q(t_I, t) ≤∑_j=1^J Q(τ_j, τ_j - τ_j-1) ≤ C 2^1/3(k-2)^-10 (t - t_I) + C ^3 2^1/3(k-2)2^J-1/2(k-2) - 1/2^1/2(k-2)-1 M_k(t_I)^1/2(k-2) + C ^3 M_k(t)^1/2(k-2). Then, using the fact that 2^J-1/2(k-2) M_k(t_I)^1/2(k-2) = M_k(t_J-1)^1/2(k-2)≤ M_k(t)^1/2(k-2), we find that Q(t_I, t) ≤ C 2^1/3(k-2) ( ^-10 (t - t_I) + C ^32^1/2(k-2)/2^1/2(k-2)-1 M_k(t)^1/2(k-2) ) , which completes the proof. We combine the previous results to obtain an estimate on Q(t,t). For all t ≥ 0, Q(t,t) ≤ C ^-2 (1 + t) ( M_k(t)^1/2(k-2)∨^-8 ) . In the case t ≤ t_I, we simply apply Lemma <ref> to obtain Q(t , t ) ≤ C ^-7 t M_k(t )^1/2(k-2)≤ C ^-10 t ≤ C ^-2 t ( M_k(t)^1/2(k-2)∨^-8 ), since M_k(t )^1/2(k-2)≤^-3. In the case t ∈ (t_I, t_II], Q(t,t) ≤ Q(t_I, t_I) + Q(t, t- t_I) . We bound the first term using Lemma <ref> and the fact that M_k(t_I )^1/2(k-2) = ^-3 by definition: Q(t_I, t_I) ≤ C ^-10 t_I . 
We combine this with the bound on the second term given by Lemma <ref> to obtain Q(t,t) ≤ C ^-10 t_I + C ^-10 (t - t_I) + C ^3 M_k(t)^1/2(k-2) ≤ C ^-10 t + C ^3 M_k(t)^1/2(k-2) ≤ C ^-10 (t + ^5) , since M_k(t)^1/2(k-2)≤ 2 ^-8. We conclude that Q(t,t) ≤ C ^-10(t + 1) ≤ C ^-2 (1 + t) ( M_k(t)^1/2(k-2)∨^-8 ) , for all t ∈ [0,t_II] . In the case t > t_II, we first write Q(t,t) ≤ Q(t_II, t_II) + Q(t, t- t_II) . By (<ref>) and Lemma <ref>, we have the estimate Q(t,t) ≤ C ^-10 (1 + t_II) + C ^-2( t - t_II) M_k(t)^1/2(k-2) ≤ C ^-2 ( ^-8 (1+ t_II ) + ( t - t_II) M_k(t)^1/2(k-2) ) . Then Q(t,t) ≤ C ^-2 ( M_k(t)^1/2(k-2)∨^-8 ) ( (1 + t_II ) + ( t - t_II) ) ≤ C ^-2 (1 + t) ( M_k(t)^1/2(k-2)∨^-8 ) . Combining this with (<ref>) yields the result. The next step is to resolve the relation between M_k and Q(t,t) so as to obtain an estimate on Q(t,t) that depends solely on t, , and the initial data. To do this, we will require the following estimate from <cit.>, which allows the moments to be controlled in terms of Q(t,t). There exists a constant C depending on k, M_2(0), M_k(0) such that M_k(t) ≤ C (1 + Q(t,t)^max{2, k-2}) Using this result, we may first obtain lower bounds on the time t_II. The time t_II satisfies the lower bound ^-2 (4 min{ 4, k } - 13)≤ C_k(1 + t_II) . By definition of t_II, M_k(t_II)^1/2(k-2) = 2 ^-8. Moreover, by Lemma <ref>, Q(t_II, t_II) ≤ C ^-10 (1+ t_II). We substitute these bounds into the estimate obtained from Lemma <ref>: ^-16(k-2) = C M_k(t_II) ≤ C (1 + Q(t_II,t_II)^max{2, (k-2) }) ≤ C ^-10 max{2, (k-2) } (1+ t_II)^max{2, (k-2) }. Upon rearranging this inequality, we obtain ^-2 (4 min{ 4, k } - 13)≤ C_k(1 + t_II) . Thus if k > 13/4 then t_II→ +∞ as → 0. It follows that for T_∗ > 0 fixed there exists _∗ > 0 such that T_∗ < t_II for all < _∗, and thus the interval of interest [0, T_∗] is entirely contained within the regions I and II. We deduce directly from Lemma <ref> that for all t ∈ [0,T_∗] and < _∗, Q(t,t) ≤ C ^-10 (t+1) . It remains only to complete the estimate for the case 3 < k ≤ 13/4. Let k ≤ 13/4. Then Q(t,t) ≤ C ^-2 ·k-2/k-3 (t+1)^k-2/k-3 . If t ≤ t_II, then the estimate (<ref>) holds. Otherwise, t > t_II and so M_k(t)^1/2(k-2)≥ 2 ^-8. Then, Lemmas <ref> and <ref> give Q(t,t) ≤ C ^-2 (t+1) M_k(t)^1/2(k-2)≤ C ^-2 (t+1) Q(t,t)^1/k-2 , since k ≤ 13/4 and so in particular k < 4 (we assume without loss of generality that Q(t,t) > 1 so as to absorb the additive constant). Thus Q(t,t) ≤ C ^-2k-2/k-3 (1+t)^k-2/k-3 . We conclude that, for each t ≥ 0, Q(t,t) is bounded by the maximum of the two bounds (<ref>) and (<ref>) Q(t,t) ≤ C max{^-10 (t+1), ^-2k-2/k-3 (1+t)^k-2/k-3}. Since k ≤ 13/4, we have 2 ·k-2/k-3≥ 10, and we conclude that Q(t,t)≤ C ^-2 ·k-2/k-3 (1+t)^k-2/k-3 . § PROOF OF THEOREM <REF> Thanks to all the results from the previous sections, we can now prove our main result. The proof is based on a perturbative argument: we consider solutions g_ of the VPME system (<ref>) with respective initial data g_0,. Such solutions exist since the functions { g_0,}_≤ 1 satisfy <ref>, and therefore <ref>. Thus by <cit.> a global strong solution g_ exists, and is unique in the class of bounded density solutions. By the triangle inequality for W_2 and the first inequality in Lemma <ref>, W_1(f_, g)≤√(2)W_2(f_, g) ≤√(2) W_2(f_, g_) + √(2)W_2(g_, g) . We will show that each of the terms on the right hand side converges to zero. First, we discuss the convergence g_→ g. Since { g_0,}_≤ 1 are uniformly analytic in x there exists a solution g of KIsE (<ref>) such that g_ converges to g. 
This can be shown using an adaptation to the ion model of the methods of Grenier <cit.> (see the discussion in <cit.>). In <cit.>, the author introduces a representation of the plasma as a superposition of a possibly uncountable collection of fluids (ρ_^θ, u_^θ)_θ∈Θ, and shows that the quasineutral limit holds when the initial data have uniformly analytic regularity with respect to x. The convergence can be stated as follows: For any δ ' < δ, there exists a time T_∗ > 0 and multi-fluids (ρ_^θ, u_^θ)_θ∈^d bounded in C([0,T_∗] ; B_δ ') such that g_(t,x,v) = ∫_^dρ^θ_(t,x) δ_0(v - u^θ_(t,x)) θ/1 + |θ|^k_0 , for all ∈ (0,1] and multi-fluids (ρ^θ, u^θ)_θ∈^d such that lim_→ 0sup_t ∈ [0,T_∗] ( ρ^θ_ - ρ^θ_H^s(^d) + u^θ_ - u^θ_H^s(^d) ) = 0 for all s ∈N and the function g(t,x,v) : = ∫_^dρ^θ(t,x) δ_0(v - u^θ(t,x)) θ/1 + |θ|^k_0 defines a solution to KIsE (<ref>). The multi-fluid H^s convergence (<ref>) then implies that lim_→ 0sup_t ∈ [0,T_∗] W_2 (g_, g) = 0 . It remains to show that lim_→ 0sup_[0,T] W_2(f_, g_) = 0 . For this, we apply Proposition <ref> to f_ and g_: if W_2(f_0,,g_0,)≤ c_0 and √(|log( ^-2W_2(f_0,, g_0,)^2 | log1/2^-2W_2(f_0,, g_0,)^2|)|)≥C_d/∫_0^T_∗ A(s) ds+√(|log(/e)|) , where A(t) = sup_s ≤ tρ[f_](s) _L^∞ + sup_s ≤ tρ[g_](s) _L^∞, then, for all t ∈ [0,T_∗ ], W_2(f_(t),g_(t))^2 ≤ 2 e^-(√(|log{^-2W_2(f_0,, g_0,)^2 | log1/2^-2W_2(f_0,, g_0,)^2|}|) - C_d/∫_0^tA(s) ds)^2≤2/e . By the second inequality in Lemma <ref> and (<ref>)-(<ref>), W_2(f_0,,g_0,)≤ 3(1 + 2 C_0)^1/k_0-1 W_1 (f_0,,g_0,)^k_0-2/k_0-1 . It is then clear from <ref> that W_2(f_0,,g_0,)≤ c_0 is satisfied for all sufficiently small. To complete the proof it therefore suffices to show that (<ref>) holds. Now suppose that W_2(f_0,,g_0,) ≤exp (- B ^-ζ(1 + 1_d=2 |log|^2) ), for an exponent ζ > 0 and constant B>0 to be determined. Note that, by estimate (<ref>), (<ref>) is implied by <ref> if C is large enough in terms of B, C_0 and k_0. Observe that the function x ↦ |log (x |log x|) | tends to +∞ as x tends to zero from above, and that it is strictly decreasing on the interval (0, x_0) for x_0 > 0 sufficiently small. For > 0 sufficiently small, we have ^-2W_2(f_0,, g_0,)^2 ≤^-2exp (- 2 B ^-ζ(1 + 1_d=2 |log|^2) ) ≤exp (- B ^-ζ(1 + 1_d=2 |log|^2) ). Hence, for small , |log( ^-2W_2(f_0,, g_0,)^2 | log1/2^-2W_2(f_0,, g_0,)^2|)| ≥ | log ( exp (- B ^-ζ(1 + 1_d=2 |log|^2) ) B ^-ζ(1 + 1_d=2 |log|^2) ) | ≥1/2 B ^-ζ(1 + 1_d=2 |log|^2) . It remains to show that B and ζ can be chosen such that for all sufficiently small , √(B)^-ζ/2(1 + 1_d=2 |log|) ≥2 C_d/∫_0^T_∗ A(s) ds+ 2 √(|log(/e)|) . The term 2 √(|log(/e)|) is clearly of lower order than ^-ζ for any ζ > 0. It therefore suffices to show that √(B)^-ζ/2(1 + 1_d=2 |log|) ≥4 C_d/∫_0^T_∗ A(s) ds . By <ref>, <ref>, Proposition <ref> and Lemma <ref>, A(t) ≤ C_2 (1 + T_∗)^3 ^-4 ( 1 + |log|) d=2 C_3 (T_∗+1)^3 + 12/1 - (13 - 4k_0)_+^-6 (1 + 4/1 - (13 - 4k_0)_+ ) d=3 , and hence 1/∫_0^T_∗ A(t) t ≤ C_2(T_∗) ^-5 ( 1 + |log|) d=2 C_3(T_∗, k_0) ^-7 + 24/1 - (13 - 4k_0)_+ d=3 . Therefore, by choosing ζ = 10 d=2 2 + 12 (1 + 4/1 - (13 - 4k_0)_+ ) d=3 , and B>0 large enough in terms of T_∗, k_0 and d, we can ensure that (<ref>) holds, which implies the convergence (<ref>). Thus there exists C>0 such that lim_→ 0sup_t ≤ T_∗ W_2(f_(t), g(t)) = 0 under the hypothesis <ref>-<ref>, which completes the proof. 
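As a sanity check on the bookkeeping of exponents (ours, not part of the argument), one can verify numerically that for d=3 the exponent chosen at the end of the proof, read as ζ = 2 + 12(1 + 4/(1 - (13 - 4k_0)_+)), coincides with the form stated in Theorem <ref>. The helper names below are introduced only for this check.

def zeta_proof(k0):
    # exponent chosen in the proof for d = 3
    plus = max(13.0 - 4.0 * k0, 0.0)
    return 2.0 + 12.0 * (1.0 + 4.0 / (1.0 - plus))

def zeta_theorem(k0):
    # exponent as stated in the main theorem for d = 3
    return 62.0 if k0 >= 13 / 4 else 14.0 + 12.0 / (k0 - 3.0)

for k0 in [3.05, 3.1, 3.2, 13 / 4, 3.5, 4.0, 10.0]:
    assert abs(zeta_proof(k0) - zeta_theorem(k0)) < 1e-6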
Acknowledgments

The first author gratefully acknowledges the support of the Heilbronn Institute for Mathematical Research, and thanks the Forschungsinstitut für Mathematik at ETH Zürich for hospitality during the preparation of this work. The authors are grateful to Mr. Florian Spicher for his assistance in the creation of the figures.

§ AUXILIARY LEMMA

Consider the function b: [0, + ∞) → [0, + ∞) defined by b(y) = y/(1 + log(1+y)) . Then b has a well-defined, continuous, strictly increasing inverse b^-1. Moreover, for all u ∈ [0,+∞), b^-1(u) ≤ 2 u (1 + log(1+u)) . Since b is a continuously differentiable function of y, we can check its monotonicity by direct computation of the derivative: b'(y) = (1 + (1+y)log(1+y))/((1+y)(1+log(1+y))^2) > 0 for all y ≥ 0. Hence, since b is continuous and strictly increasing, b^-1 is well-defined, continuous, and strictly increasing. To prove (<ref>), we will show that u ≤ b(2 u (1 + log(1+u))). Applying b^-1 will then imply the bound, since b^-1 is increasing. With this in mind, we compute b(2 u (1 + log(1+u))) = u · 2 (1 + log(1+u))/(1 + log (1 + 2 u (1 + log(1+u)))) . We observe that 1 + log(1+u) ≤ 1 + u, and hence 1 + log (1 + 2 u (1 + log(1+u))) ≤ 1 + log(1 + 2u(1+u)) . Since u ≥ 0, 1 + 2u(1 + u) ≤ 1 + 4u + 2u^2 ≤ 2(1+u)^2 . We substitute this inequality into (<ref>) to find 1 + log (1 + 2 u (1 + log(1+u))) ≤ 1 + log 2 + 2log(1+u) ≤ 2 (1 + log(1+u)). Using (<ref>) and (<ref>) we obtain (<ref>), which completes the proof.
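The inequality proved in this auxiliary lemma is elementary and can also be checked numerically. The following Python sketch (ours) verifies u ≤ b(2u(1 + log(1+u))) on a grid of values of u; since b is increasing, this is equivalent to the stated bound on b^-1.

import numpy as np

def b(y):
    return y / (1.0 + np.log1p(y))

# Check u <= b(2u(1 + log(1+u))) on a grid; applying the increasing inverse
# b^{-1} then gives b^{-1}(u) <= 2u(1 + log(1+u)).
u = np.linspace(0.0, 1.0e6, 200_001)
bound = 2.0 * u * (1.0 + np.log1p(u))
assert np.all(u <= b(bound) + 1e-9)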
http://arxiv.org/abs/2307.05638v1
20230711093752
A Comprehensive Survey of Deep Transfer Learning for Anomaly Detection in Industrial Time Series: Methods, Applications, and Directions
[ "Peng Yan", "Ahmed Abdulkadir", "Matthias Rosenthal", "Gerrit A. Schatte", "Benjamin F. Grewe", "Thilo Stadelmann" ]
cs.LG
[ "cs.LG", "cs.AI", "I.2.0" ]
A Comprehensive Survey of Deep Transfer Learning for Anomaly Detection in Industrial Time Series: Methods, Applications, and Directions ========================================================================== Automating the monitoring of industrial processes has the potential to enhance efficiency and optimize quality by promptly detecting abnormal events and thus facilitating timely interventions. Deep learning, with its capacity to discern non-trivial patterns within large datasets, plays a pivotal role in this process. Standard deep learning methods are suited to solving a specific task given a specific type of data. During training, the algorithms demand large volumes of labeled training data. However, due to the dynamic nature of processes and the environment, it is impractical to acquire the needed data for standard deep learning training for every slightly different case anew. Deep transfer learning offers a solution to this problem. By leveraging knowledge from related tasks and accounting for variations in data distributions, this learning framework solves new tasks even with little or no additional labeled data. The approach bypasses the need to retrain a model from scratch for every new setup and dramatically reduces the labeled data requirement. This survey provides an in-depth review of deep transfer learning, examining the problem settings of transfer learning and classifying the prevailing deep transfer learning methods. Moreover, we delve into applying deep transfer learning in the context of a broad spectrum of time series anomaly detection tasks prevalent in primary industrial domains, e.g., manufacturing process monitoring, predictive maintenance, energy management, and infrastructure facility monitoring. We conclude this survey by underlining the challenges and limitations of deep transfer learning in industrial contexts. We also provide practical directions for solution design and implementation for these tasks, leading to specific, actionable suggestions. § INTRODUCTION The fourth industrial revolution — Industry 4.0 <cit.>, which is characterized by increasing efficiency through the digitization of production, automation, and horizontal integration across companies <cit.>, and the advent of connected cyber-physical systems – referred to as the internet of things (IoT) – increases the need for autonomous and intelligent process monitoring. This can be exemplified by the use case of a smart factory in which industrial processes are transformed to be more flexible, intelligent, and dynamic <cit.>, or the use case of decentralized energy production with wind and solar <cit.>. In these examples, AI-powered anomaly detection integrates the analysis of time series data to detect unusual patterns in the recorded data. By identifying parameters that fall outside a window of normal operation, operators can trigger interventions and adjustments to ensure high product quality and safe operations. To achieve this, physical properties such as pressure or temperature are monitored and analyzed in real time. Changes in these variables capture drifting and abrupt faults caused by process failures or malfunctions <cit.>. The production process must adapt quickly to changes in production and the environment to meet the requirements for flexibility and dynamics. Further use cases exist in such diverse areas as manufacturing monitoring including automatic quality control, predictive maintenance of goods and services, infrastructure monitoring of e.g.
building energy systems or power plants, digital agriculture, petrochemical process optimization, computer network intrusion detection, or aircraft flight monitoring, to name a few. Artificial Intelligence, in particular deep learning, provides competent frameworks to automate intelligent monitoring in order to provide valuable assistance to operators and high-level control systems. Leveraging the power of deep learning, formative features of the data – technically referred to as representations <cit.> – can be captured in a machine-learned model and thereby enable a detailed understanding of variations in standard operations. However, in non-trivial and non-stationary conditions, the task or the underlying data may change. For example, the monitoring system of a milling machine may in one instance be tasked to detect a blunt tool based on the vibration and in another instance – using the same vibration measurements – to detect insufficient cooling lubricant. Knowledge acquired to solve one task in one setting with a given tool, machined part, and type of machine may be transferred to solve the same or similar task in a setting with a different tool, machined part, or type of machine. Slowly changing conditions (drifts), abrupt mode changes (for instance due to tool change), and new tasks (such as the detection of another failure mode) may require adjustments to the machine-learned model. In these cases, it is desirable to adjust the analysis model without retraining from scratch, as it is costly or impractical to acquire sufficient training data to learn the full manifold <cit.>. Transfer learning is a machine learning framework to achieve this <cit.>. As depicted in Figure <ref>, data and algorithms from a related task may be leveraged in a new one. By accounting for changes in data distributions and tasks, or leveraging existing models, knowledge learned from related tasks can be used to improve performance on new tasks instead of retraining a model for each individual application from scratch. This transfer-learning-boosted modeling forms the basis for identifying anomalies that deviate from established patterns in a non-trivial manner without full re-training. Deep transfer learning <cit.> extends the transfer learning paradigm by leveraging deep learning models. In industrial contexts, it ensures optimal production even as production conditions shift. This dynamic adaptability is key in maintaining the effectiveness of anomaly detection systems in the dynamic environment that characterizes industrial applications including the broad categories of manufacturing process monitoring, predictive maintenance, energy management, and infrastructure facility monitoring as detailed in <Ref>. In this survey, we review the foundations of deep transfer learning to equip the reader with working knowledge of the main principles and intuition for ideas. Further, we provide a comprehensive overview of the current state of the art of deep transfer learning approaches for time series anomaly detection for industrial applications. Our main contribution is a systematic review of research work on real-world industrial applications. For these, we discuss potential, challenges, and limitations and give directions for future work and potential. The paper is organized as follows: First, we introduce a taxonomy of transfer learning problem settings and further categorization of deep transfer learning approaches (Section <ref>). 
Then, we describe the task of anomaly detection in time series (Section <ref>) in selected industrial applications (Section <ref>). To conclude, we discuss current challenges, limitations, and future research directions (Sections <ref>–<ref>) in the field. § DEEP TRANSFER LEARNING Transfer learning in a deep learning setting aims to increase the efficiency, performance, and generalization of deep learning models by transferring knowledge from one data set and task to a new one. This eliminates the need to train a deep learning model from scratch, which in turn reduces the amount of data and compute required to solve a new task or adapt to a new data domain. In either case, knowledge is transferred from a source to a target domain, as defined below. The transfer learning problem settings can be categorized as inductive or transductive transfer depending on the data and task conditions, while we categorize deep learning-based transfer learning approaches into instance transfer, parameter transfer, mapping transfer and domain-adversarial transfer. We illustrate them by using two intuitive examples in Figure <ref>, with more details being elaborated in the following sections. §.§ Transfer learning problem definition A domain 𝒟 consists of the feature space 𝒳 and the marginal data distribution P(X), i.e., 𝒟 = {𝒳, P(X)}, where X is the domain data, X = {x_1, …, x_n}⊂𝒳. Similarly, a learning task is defined as 𝒯 = {𝒴, f_𝒯(·)}, where 𝒴 denotes the task space and usually represents the set of class labels. For anomaly detection tasks, 𝒴 is the set of the two classes “normal” and “abnormal”. The function f_𝒯(·) can be used to predict the corresponding label of a new instance x_i. The objective predictive function f_𝒯(·) can be learned from domain data and can be interpreted as a form of conditional probability. Thus, the learning task can be rewritten as 𝒯 = {𝒴, P(Y|X)}, where P(Y|X) is used as a likelihood measure to determine how well a given data set X fits with a corresponding class label set Y. We largely follow the definition of transfer learning by <cit.> and <cit.>. Given a source domain 𝒟_S and learning task 𝒯_S, as well as a target domain 𝒟_T and learning task 𝒯_T, transfer learning aims to improve the performance of the predictive function f_𝒯(·) in 𝒟_T by transferring knowledge from 𝒟_S and 𝒯_S, where 𝒟_S ≠𝒟_T and/or 𝒯_S ≠𝒯_T. Usually, the labeled target dataset is much smaller than the source dataset. This definition of transfer learning can be broadened, i.e., the target task can profit from multiple source domains. Transfer learning is thus the idea of making the best use of related source domains to solve new tasks. In contrast, traditional machine learning (ML) methods learn each task separately from scratch, and each respective model can only be applied to the corresponding task. We define a taxonomy of transfer learning problem settings as shown in Figure <ref> mainly depending on the label availability in the two domains to be easily applicable to the requirements of a case at hand (compare different definitions for other purposes in the literature <cit.>). We differentiate it into inductive and transductive transfer learning <cit.>[In this survey, we do not consider unsupervised learning scenarios since either source labels or target labels are provided for most industrial applications.]. Inductive transfer learning is applied when the target task is different from the source task, i.e., 𝒯_S ≠𝒯_T (meaning that {𝒴_S ≠𝒴_T} or {P(Y_S|X_S) ≠ P(Y_T|X_T)}).
The conditional probability distribution is induced with labeled training data in the target domain <cit.>. A corresponding example is illustrated as Scenario A in Figure <ref>, where the learning tasks are different and the goal of transfer learning is to recognize point anomalies by transferring knowledge from the collective anomaly task. Related areas of inductive transfer learning are multi-task learning <cit.> and sequential learning, depending on whether tasks are learned simultaneously or sequentially. Transductive transfer learning is applied when the source and target tasks are the same, while the source and target domains are different, i.e., 𝒯_S = 𝒯_T and 𝒟_S ≠𝒟_T (meaning that {𝒳_S ≠𝒳_T} or {P(X_S) ≠ P(X_T)}). A subcategory is domain adaptation <cit.>, when the feature spaces of source and target data are the same but the corresponding marginal distributions are different (i.e., {𝒳_S = 𝒳_T} and {P(X_S) ≠ P(X_T)}). Scenario B in Figure <ref> is an example of transductive transfer learning where the learning tasks are identical, and the goal of transfer learning is to recognize contextual anomalies in an unlabelled data set. Other learning paradigms closely related to transfer learning are listed below: Multi-task learning is a machine learning technique where a single model is trained on multiple tasks simultaneously. The idea is to improve the performance of the model by learning a shared representation that captures features common to all tasks. Continuous learning <cit.> is a learning process where the model continuously learns new tasks over time without forgetting how to solve previous tasks. To some extent, continuous learning can be seen as a sequential transfer learning process, with the constraint of preserving performance on previous tasks, which leads to an accumulation of knowledge over time. Few-shot learning <cit.> is a type of machine learning where a model can learn and perform well on a new task with only a limited number of labeled samples. In extreme cases, the model can learn with one label <cit.> or without any label <cit.>, while transfer learning usually involves reusing a model from related tasks and continuing training on the target dataset. Meta-learning <cit.> is a machine learning technique that focuses on the learning process itself; it is known as “learning to learn”. For meta-learning, models are trained on a set of tasks instead of a set of data as in the traditional machine learning setting. In this sense, meta-learning can be seen as a form of transfer learning because it involves transferring knowledge from task to task. Knowledge distillation <cit.> trains a small model to mimic the behavior of a larger, more complex model. The knowledge learned by the larger model can be transferred to the smaller model, which can then be used for the target task, e.g., on a less powerful edge device. Self-supervised learning <cit.> involves training a model to predict some aspect of the input data without any external supervision. The learned representations can be used for various downstream tasks, including those that involve transferring knowledge from one domain to another. §.§ Deep transfer learning approaches Since deep neural networks (DNNs) can learn useful feature representations from large amounts of data through back-propagation <cit.>, they have been widely adopted for tackling complex problems that involve large-scale and high-dimensional data, also in practice <cit.>.
Deep transfer learning methods implement transfer learning principles within DNNs and, among other things, enable deep learning-based analysis pipelines to be applied to new datasets. On a high level, deep transfer learning approaches can be divided into data-driven and model-driven ones. Data-driven approaches focus on transferring knowledge by transforming and adjusting the data instances. Model-driven approaches leverage DNNs to develop domain-invariant features by reducing the feature discrepancy between source and target domain data and then transferring generalized knowledge to new tasks. Following the taxonomy of <cit.>, we divide deep transfer learning approaches further into four categories: instance transfer, parameter transfer, mapping transfer, and domain-adversarial transfer, as illustrated in Table <ref>. Instance transfer and mapping transfer are data-driven approaches, parameter transfer is a model-driven approach, and domain-adversarial transfer is a combination of both. §.§.§ Instance transfer The intuition of instance transfer is that although source and target domains differ, it is still possible to transform and reuse source data together with a few labeled target samples. A typical approach is to re-create some labeled data from the source domain. <cit.> propose an instance-based deep transfer learning model with an attention mechanism to predict stock movement. They first create new samples from the source dataset that are similar to the target samples by using attention weights, and then train on the created samples and target training samples for prediction tasks. <cit.> introduce an innovative instance transfer method for domain adaptation. They propose an effective auto-encoder model with a pseudo-label classifier to reconstruct new data instances that capture general features across different datasets for medical image analysis. Taking another avenue, <cit.> exclude the source data that have a negative impact on training with the target data. Specifically, they choose a pre-trained model from a source domain, estimate the impact of all training samples in the target domain, and remove samples that lower the performance of the model. §.§.§ Parameter transfer Parameter transfer adapts the learned parameters of a pre-trained model to a new model. This assumes that DNNs learn similar feature representations from similar domains. Thus, by transferring parts of the DNN layers together with pre-trained parameters and/or hyperparameters, the pre-trained model is used as a base model that is further trained on target domain data to solve different learning tasks. Particularly, parameter transfer has gained popularity in computer vision and natural language processing, where large models are pre-trained on large datasets <cit.>. In natural language processing, BERT <cit.> and GPT-3 <cit.>, which are based on the transformer architecture <cit.>, can be fine-tuned for a variety of natural language processing tasks, including content generation <cit.>, language translation <cit.>, question answering <cit.>, and summarization <cit.>. <cit.> investigated the general transferability of DNNs. Experimental results show that transferring features from the source to the target domain leads to improved generalization in networks compared to those trained solely on the target dataset. Unlike the typical way of fine-tuning a pre-trained model, <cit.> propose the adaptive fine-tuning approach SpotTune to find the optimal fine-tuning strategy for the target task.
Specifically, a policy network is used to make routing decisions on whether to pass the target instance through the pre-trained model. The results show SpotTune is effective in most cases by using a hybrid of parameter and instance transfer. <cit.> propose an unsupervised domain adaptation for vertebrae detection in 3D CT volumes by transferring knowledge across domains during each batch of the training process. §.§.§ Mapping transfer Mapping transfer refers to learning a related feature representation for the target domain by feature transformation, which includes feature alignment, feature mapping, and feature encoding <cit.>. The goal is to reduce feature discrepancies between source and target domains by minimizing the distance between the distribution of mapped features in the latent space. There are various criteria to measure the distribution difference, including Wasserstein distance <cit.>, Kullback-Leibler Divergence <cit.>, etc. Among them, Maximum Mean Discrepancy (MMD) <cit.> is most frequently adopted in mapping transfer from the surveyed papers. The MMD is calculated as the difference between the mean embeddings of the samples in a reproducing kernel Hilbert space associated with a chosen kernel function. Added to the target loss function, it measures the difference between two probability distributions and serves as a powerful tool for comparing the similarity of complex, high-dimensional datasets using a wide variety of kernel functions. Some previous work focusing on transferred feature extraction/dimensionality reduction using MMD has been done. <cit.> base their Joint Adaptation Network on MMD, in which the joint distributions of multiple domain-specific layers across domains are aligned. In addition, an adversarial training version was adopted to make distributions of the source and target domains more distinguishable. Similarly, <cit.> adopted multi-layer adaptation and proposed Deep Adaptation Networks (DAN). In DAN models, the first three convolutional layers are used to extract general features. For the last three layers, multi-kernel MMD is used to bridge the cross-domain discrepancy and learn transferable features. <cit.> also based on MMD and proposed a Deep Transfer Network in which two types of layers are used to obtain domain invariant features across domains. The shared feature extraction layers learn a shared feature subspace between the source and the target samples, and the discrimination layer is then used to match conditional distributions by classifier transduction. <cit.> proposed Deep Adaptation Hash network, which is fine-tuned from the VGG-F <cit.> network. Multi-kernel MMD loss trains the Deep Adaptation Hash to learn feature representations that align the source and target domains. §.§.§ Domain-adversarial transfer Inspired by Generative Adversarial Networks (GANs) <cit.>, the goal here is to extract a transferable feature representation that is indiscriminative between source and target domain through adversarial training. Adversarial transfer mainly focuses on domain adaptation problems. <cit.> adopt a domain confusion loss across the source and target domains to learn a domain invariant representation. <cit.> propose a new domain adaptation architecture by adding a domain classifier after feature extraction layers. A gradient reversal layer is used to ensure the similarity of the feature distributions over source and target domains. 
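Both building blocks discussed in this and the surrounding subsections, an MMD penalty for mapping transfer and a gradient reversal layer for domain-adversarial transfer, are compact enough to sketch in code. The following PyTorch sketch is illustrative only: the function names, the single-bandwidth Gaussian kernel, and the way the terms would enter a task loss are simplifications of ours, not the implementation of any specific method cited above.

```python
import torch

def gaussian_mmd(source_feats: torch.Tensor, target_feats: torch.Tensor,
                 bandwidth: float = 1.0) -> torch.Tensor:
    """Biased estimate of the squared MMD between two feature batches
    under a Gaussian kernel. Shapes: (n_s, d) and (n_t, d). Added to the
    task loss, it penalizes the discrepancy between source and target
    feature distributions."""
    def kernel(a, b):
        dists = torch.cdist(a, b) ** 2
        return torch.exp(-dists / (2.0 * bandwidth ** 2))
    k_ss = kernel(source_feats, source_feats).mean()
    k_tt = kernel(target_feats, target_feats).mean()
    k_st = kernel(source_feats, target_feats).mean()
    return k_ss + k_tt - 2.0 * k_st

class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient sign in the backward
    pass, so the feature extractor is pushed to confuse a domain classifier."""
    @staticmethod
    def forward(ctx, x, lambd: float):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x: torch.Tensor, lambd: float = 1.0) -> torch.Tensor:
    return GradientReversal.apply(x, lambd)
```

In a training loop, one would typically minimize task_loss + λ · gaussian_mmd(source_feats, target_feats) for mapping transfer, or pass features through grad_reverse before a small domain classifier trained on domain labels for domain-adversarial transfer.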
<cit.> propose a domain adversarial DNN in which a domain regressor is applied to learn a domain invariant feature representation. <cit.> use an unsupervised domain adaptation method that combines adversarial learning with discriminative feature learning. § TIME SERIES ANALYSIS FOR INDUSTRIAL PROCESSES Time series analysis encompasses statistical techniques to analyze and interpret sequential temporal data. In the context of industrial processes, time series analysis plays a crucial role in automating the monitoring and control of the efficiency, quality, and performance of these processes. Specifically, the analysis of time series data can be used for anomaly detection, forecasting, process control, performance assessment, and maintenance scheduling to increase the efficiency of the process. Anomaly detection According to <cit.>, an outlier is an observation that deviates significantly from other observations in such a way that it is likely to have been generated by a different mechanism. In this survey, we focus on time series data collected from machine sensor readings in the context of industrial applications, either univariate (only one variable is recorded over time) or multivariate (several simultaneously recorded measurements). Time series anomalies might occur for various reasons, including internal factors (e.g., temporary sensor error, machinery malfunction) and external factors (e.g., human error, ambient temperature). They can be divided into three categories <cit.>: point anomalies, contextual anomalies, and collective anomalies. Point anomalies are isolated samples that deviate significantly from the normal behavior of that time series, which can be seen on the left of Fig. <ref>, e.g., a sudden spike in a pressure reading from a manufacturing machine sensor. These point anomalies can be caused by temporary sensor errors, human error, or abnormal machinery operations. Contextual anomalies represent data points that deviate from normal ones only in their current context, and an example can be seen in the middle of Fig. <ref>. Collective anomalies are a set of data points that in their entirety (but not individually) are abnormal with respect to the entire time series, as shown on the right of Fig. <ref>. Challenges regarding detecting time series anomalies persist due to two specific properties: First, the complexity of time series data. As the automation level of industrial processes and the complexity of industrial systems increase, univariate time series data become insufficient and inefficient in representing any industrial process in its entirety. Hence, more sensors are installed to monitor the whole process, making it necessary in turn to detect anomalies from multivariate time series, which poses particular challenges since it requires consideration of temporal dependencies and relationships between variables and modalities. Second, the dynamic variability of industrial processes can pose difficulties in detecting anomalies due to fluctuations in the process caused by varying input or environmental conditions such as material, temperature, pressure, and humidity, which lead to domain shifts. Process automation The tasks described above are combined to automate industrial processes. For example, after the detection of an anomaly, another model that captures the relationship between the time course and different failure modes or drifts may be exploited for predictive maintenance.
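As a concrete illustration of the point-anomaly category introduced above, a rolling z-score is one of the simplest detectors for a univariate sensor signal. The sketch below is a minimal example; the window length, the threshold, and the synthetic pressure-like signal are arbitrary placeholders rather than recommended settings.

```python
import numpy as np

def rolling_zscore_anomalies(signal: np.ndarray, window: int = 50,
                             threshold: float = 4.0) -> np.ndarray:
    """Flag samples whose deviation from the local mean exceeds
    `threshold` local standard deviations. Returns a boolean mask."""
    flags = np.zeros(len(signal), dtype=bool)
    for i in range(window, len(signal)):
        history = signal[i - window:i]
        mu, sigma = history.mean(), history.std()
        if sigma > 0 and abs(signal[i] - mu) > threshold * sigma:
            flags[i] = True
    return flags

# Example: a noisy pressure-like signal with one injected spike.
rng = np.random.default_rng(0)
x = rng.normal(loc=10.0, scale=0.1, size=1000)
x[700] += 2.0                                          # point anomaly
print(np.flatnonzero(rolling_zscore_anomalies(x)))     # typically prints [700]
```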
For example, in injection molding process monitoring, anomaly detection models are used to analyze recorded sensor data from injection molding machines to detect bad parts and identify the root cause of anomalies <cit.>. There are two basic ways to detect anomalies: for supervised anomaly detection, labels (normal/abnormal) are needed per time series to build a binary classifier <cit.>. For unsupervised anomaly detection, an anomaly score or confidence value that is conditioned purely on normal data can be used to differentiate abnormal from normal instances <cit.>. § INDUSTRIAL APPLICATIONS §.§ Overview Currently, deep transfer learning approaches are popular in the field of computer vision and natural language processing because of the large available datasets. This popularity is not as pronounced for industrial time series data, likely due to the lack of publicly available data and the domain-specific differences between such data, making the field a less easy target for general gains. Fortunately, in recent years a growing number of deep transfer learning approaches have been applied in industry to solve anomaly detection tasks, such as fault diagnosis <cit.>, quality management <cit.>, manufacturing process monitoring <cit.>, network/software security <cit.>, and infrastructure monitoring <cit.>. These can be mapped onto the core industrial domains of manufacturing process and infrastructure monitoring, predictive maintenance, and energy management. Table <ref> presents a compact comparison of the related works using deep transfer learning approaches to solve these tasks. Figure <ref> illustrates the quantity structure of the connections between industrial applications and the deep transfer learning approaches based on our literature survey. The Sankey diagram shows every path that connects the four dimensions of the methodology-problem-landscape within the surveyed literature. The broader the path is, the more papers are related to that element. The goal is to give an overview of how deep transfer learning is applied to industrial problems in the recent literature and specifically show with these four dimensions: (1) which deep transfer learning approaches are actually used in practice; (2) what the main industrial domains for time series anomaly detection are; (3) what deep transfer learning category these domains belong to; and (4) what labels are available in source and target domain. Key observations from Figure <ref> are: Regarding deep transfer learning approaches, parameter transfer is much more frequently used than any other deep transfer learning approach across all surveyed industrial applications since fine-tuning a pre-trained model on target data is more straightforward to implement by taking advantage of the pre-trained model on the source dataset and usually without fundamental modification on the model architecture. It is noteworthy that instance transfer and adversarial transfer do not appear in the diagram. Apparently, these two deep transfer learning approaches are not considered the optimal choice for respective time series anomaly detection tasks in the industry. The difficulty lies in implementing and training these scarcely researched approaches in the industrial field, as indicated by the findings. Regarding industrial applications, hybrid approaches of parameter and mapping transfer can be seen in predictive maintenance. 
Regarding deep transfer learning categories, most industrial applications use inductive TL, indicating they focus on leveraging labeled source and target data to solve the target task, i.e., use supervised learning. §.§ Manufacturing process monitoring Manufacturing process monitoring is crucially important to ensure high-quality products and low rejection rates. For example, in injection molding machines, sensors are installed to detect molding conditions in the cavity, such as cavity pressure and temperature. These signals are used to analyze in particular the mold filling and solidification process for each produced part. Such cyclic processing data can also be seen in metal machining (cutting force signal) or joining of parts (joining force signal). Currently, parameter transfer is predominantly used for manufacturing processes <cit.>. In injection molding, parameter transfer is applied to transfer the knowledge from one or more source domains to solve tasks in a target domain <cit.>. Similarly, <cit.> build a bridge between simulated data and real data using parameter transfer in injection molding. <cit.> compare different DNNs for anomaly detection tasks on metal forming datasets. Further, they propose a deep transfer learning framework aiming to transfer knowledge between tasks. However, the proposed architecture is not validated. Later, <cit.> apply continuous learning on the same dataset by transferring knowledge from several source tasks to a target task to train a deep learning algorithm capable of solving both source and target tasks. <cit.> apply parameter transfer to monitor operation status of manufacturing testbeds with vibration sensor data. <cit.> transfer knowledge across three chambers in a production line to detect anomalous time series data. Results show reduced training time and improved detection accuracy through transfer learning. §.§ Predictive Maintenance Predictive maintenance aims to predict the necessity of maintenance before production is negatively impacted by a failure. Tasks involve monitoring equipment to anticipate maintenance requirements (i.e., predict likely future failure) to optimize maintenance schedules <cit.>. Time series anomaly detection is often used in respective systems to identify abnormal patterns or behaviors in operation that may indicate the need for maintenance, such as increasing noise, vibrations, etc. <cit.> use mapping transfer with a Sparse Auto-Encoder (SAE) for motor vibration anomaly detection. A transformation from the source and target space to a common latent feature space is learned by MMD to make the feature distribution of two domains as identical as possible. Similarly, <cit.> also used mapping transfer with an SAE architecture for fault detection of rotation bearings, using an MMD regularizer to extract a common feature representation. Subsequently, they propose a new MU-Net architecture to deal with multivariate time series anomaly detection tasks <cit.>. First, they pre-train a U-Net <cit.> on a large time series dataset for an anomaly detection task. Then, they propose a new model MU-Net, built upon U-Net, wherein each channel they can use the pre-trained U-Net through fine-tuning to transfer knowledge for multivariate time series anomaly detection. In another application, parameter transfer is used to predict the remaining useful life for tools in manufacturing <cit.>. An SAE network is first trained to predict the remaining useful life of a cutting tool on retrospectively acquired data in an offline process. 
The trained network is then transferred to production with a new tool in operation for online remaining useful life prediction. A 2D CNN-LSTM <cit.> hybrid architecture for fault detection is presented. The model is trained on a fault dataset, and then parameter transfer is applied to target datasets with a different set of conditions. The result shows that transfer learning based hybrid deep learning significantly reduces the training time and is highly suitable for real-time industrial fault diagnosis in various environments. Similarly, parameter transfer is implemented to reduce the gap between different industrial environments <cit.>. <cit.> use a stacked SAE to extract general features from source data and a digital-twin-assisted fault diagnosis approach is presented to transfer knowledge from virtual space to physical space for real-time use. Here, a DNN model is first fully trained in virtual space and then migrated to the physical space using deep transfer learning for real-time use. The surveyed literature proves that deep transfer learning is a research field that could simplify the life cycle of predictive maintenance systems and facilitate DNN model reusability by reducing the required data and training time, helping adapt them to solve similar tasks. §.§ Energy management Energy management deals with systems that detect abnormal excessive consumption caused by end-users' unusual behavior or malfunction of faulty devices or systems <cit.>. The goal is to develop automatic, quick-responding, accurate, and reliable fault detection to save energy and build environmentally friendly systems. Energy anomaly detection systems monitor data during energy generation, transmission, and utilization, in order to ensure normal energy consumption. <cit.> design a cluster-based deep adaptation layer to improve a deep adaptation network, effectively reducing the mismatch in transfer learning of spinning power consumption anomaly detection. <cit.> successfully build an electricity consumption time series anomaly detection method in aluminum extrusion. Parameter transfer is applied to transfer domain knowledge from another data-sufficient domain. They also find it unnecessary when the target data is sufficient because transferring knowledge decreases prediction accuracy. §.§ Infrastructure facilities monitoring Infrastructure facilities monitoring refers to monitoring and maintaining the conditions of infrastructure facilities, such as bridges, buildings, and networks. This can include detecting potential issues or failures. The goal is to minimize the impact of failures on the public or the environment. <cit.> apply parameter transfer to make full use of the similarity of the anomalous patterns across different bridges and transfer the knowledge obtained by a CNN model to a small part of target data, achieving high accuracy anomaly detection across bridges. <cit.> present a parameter transfer approach towards building a network intrusion detection system based on CNN and LSTM. § DISCUSSION §.§ Challenges Label availability Deep transfer learning is built upon deep learning, which usually requires a large amount of labeled data, the more data a model has available for learning, the better it can generalize to new examples. In real-world industrial time series anomaly detection tasks, collecting data is probably easy, but collecting labels is much more expensive and time-consuming, sometimes prohibitively so, leading to the unavailability of sufficient labeled data. 
Self-supervised learning can be used to re-label a large amount of unlabeled data and thus facilitate transfer learning process. Thus, anomaly detection models usually need to learn in an unsupervised or semi-supervised mode <cit.>. Deep learning for imbalanced data Even if the labels can be collected, anomalies can be extremely rare by design, which poses the risk of training with extremely imbalanced data. A practical problem for anomaly detection in industry is the extremely imbalanced data distribution, in which normal samples dominate in data and abnormal samples only share a small percentage in the whole dataset. Prior research has proven that the effect of class imbalance on classification performance by using deep learning is detrimental <cit.>. However, most research studies still ignore such problems, which can result in poor performance regarding the minority class, i.e., abnormal data are misclassified as normal. Missing relevant data Another problem is missing relevant data, i.e, some information that has a significant effect on the process from case to case is not even recorded or is too complex to record (i.e. part geometry, machine geometry, or environmental conditions in injection molding processes). Domain shift Domain shifts lie at the heart of the deep transfer learning problem, but the dynamic changes in many industrial processes, up to an apparent dissimilarity of source and target data, make the transfer learning task particularly challenging. Effectiveness of deep transfer learning The general effectiveness of deep transfer learning is limited by the difficulty of determining which knowledge or to what extent the knowledge should be transferred from source to target task. Unlike natural language processing, pretraining a language model on a large corpus of text data can help the model learn the statistical patterns and semantic and syntactic representations of words and sentences, which can be used for new natural language processing tasks with a few data. For industrial time series, due to data privacy, large available public datasets usually do not exist, or they cannot be used even because of a large domain gap between different datasets and tasks. In this case, transferring all of the knowledge may not be beneficial, as it may be irrelevant. In the worst case, this can lead to negative transfer <cit.>, in which the extracted knowledge harms the new task-learning. This requires assessing how source and target tasks are related, carefully selecting the knowledge to be transferred, and selecting the proper means to implement this transfer. §.§ Directions for anomaly detection solution design Data preprocessing How data preprocessing should be conducted is an open question. For industrial applications, some researchers consider directly using time series data as input for training to be inefficient, thus suggesting deriving or selecting features from time series data by statistical methods or human experience to decrease the complexity of the dataset dramatically. On the other hand, this crops a lot of potentially useful information, e.g., the time series trend. Some researchers use machine parameters as features of the manufacturing process instead of using process data collected by sensors <cit.>. Others try different transformations of raw time series data, a common way being to transform 1D time series data to 2D image data <cit.> or transforming time domain signals otherwise into the frequency domain <cit.>. 
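The frequency-domain and time-to-image transformations mentioned above can be sketched in a few lines. The following Python example is illustrative only; the sampling rate, the window parameters, and the synthetic vibration-like signal are placeholders rather than settings taken from the cited works.

```python
import numpy as np
from scipy import signal as sps

def to_frequency_domain(x: np.ndarray, fs: float = 1000.0):
    """Magnitude spectrum of a 1D sensor signal via the FFT."""
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return freqs, spectrum

def to_spectrogram_image(x: np.ndarray, fs: float = 1000.0) -> np.ndarray:
    """2D time-frequency image, e.g., as input to an image-pretrained CNN."""
    _, _, sxx = sps.spectrogram(x, fs=fs, nperseg=128, noverlap=64)
    return np.log1p(sxx)          # log scaling compresses the dynamic range

# Example: a vibration-like signal whose dominant frequency drifts over time.
t = np.linspace(0, 2.0, 2000, endpoint=False)
vib = np.sin(2 * np.pi * (50 + 20 * t) * t) + 0.1 * np.random.randn(len(t))
freqs, spec = to_frequency_domain(vib)
image = to_spectrogram_image(vib)
print(spec.shape, image.shape)
```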
However, as large-scale computation power and storage become cheaper and more accessible, it is becoming increasingly common to use deep learning techniques to directly process time series data <cit.>. Data augmentation Data augmentation is useful for deep learning models because it can help to prevent overfitting. For deep transfer learning, when a model becomes too closely adapted to the specifics of the source domain, it may not be able to generalize well to some examples in the task domain. One important technique is to acquire effective synthetic data, e.g., using a simulation process or model to explore potential anomalous conditions by simulating industrial processes under parameters that cannot yet be experienced in the real-world. High fidelity and reliability of simulation data can provide training data at low cost and mitigate the problem of insufficient samples for deep transfer learning <cit.>. Another way to generate effective synthetic data is to use GANs. GANs are trained on normal data only to generate indistinguishable samples from which abnormal samples are distinguished during the testing stage of the overall anomaly detection system based on their deviating data distribution <cit.>. To increase the number of anomalous samples and thus the robustness of the anomaly detection model, the technique of adversarial perturbation known from computer vision <cit.> can be used. Data imbalance DNNs perform well when they are trained on balanced datasets. However, in practice, it is difficult to get sufficient anomalous data for anomaly detection tasks. For example, manufacturing process is usually in a healthy state due to the pre-designed and optimized operation. Several ways exist to address the imbalanced dataset for time series anomaly detection. To deal with data imbalance, one way is to oversample the minority class, i.e., to randomly replicate samples from the minority class to equalize the number of samples from each class in each batch. Synthetic Minority Over-sampling Technique is an advanced method that creates synthetic samples to force the decision region of the minority class to become more general <cit.>. This technique is widely used in anomaly detection tasks in industry <cit.>. Apart from oversampling, the resampling strategy is also frequently used to assign a higher probability to abnormal samples and evenly select the same amount of samples from both classes in each batch. Finally, a weighted loss can be used that balances the loss for the abnormal and normal class in supervised anomaly detection <cit.>. §.§ Directions for deep transfer learning implementation When shall deep transfer learning be used? (1) Limited data availability: If the amount of data available for a specific task is limited, pre-training on related source data can learn general features that can be transferred to the specific learning task in the target domain. (2) Similar domains: deep transfer learning is well suited for similar source and target domains. (3) Limited resources (time and compute): Using parameter transfer here is recommended if pre-trained models exist. When not to use deep transfer learning? (1) Different tasks: If the target learning task is vastly different from the source learning task, deep transfer learning may not be appropriate. For example, if one wants to train a model for natural language processing on a new dataset, using a pre-trained model that has been trained on image data will not be useful. 
(2) High domain shift: If there is a large difference between the source and the target domain, deep transfer learning may not be effective. This can happen when the data distributions, features, or labels are vastly different. (3) Abundance of labeled data: If there are enough data for the new task, it may be more effective to train a model from scratch <cit.>. What model architecture to choose? We suggest selecting the model architecture mainly depending on the data size and label availability, starting from a relatively small network and moving gradually to more complex DNNs. It is important to effectively capture the temporal dependencies and extract respective features of time series data. LSTMs are used heavily for detecting temporal dependencies in time series data <cit.>. CNNs are also effective in extracting time series features <cit.>. For semi-supervised settings, CNN-based auto-encoders are trained to reconstruct the original data <cit.>. Another aspect of deep transfer learning implementations is often the limited computing power of hardware platforms, such as embedded systems in industrial applications. Sensor data are typically acquired using resource-constrained edge processing devices that struggle with computationally intensive tasks, especially when updating a DNN model. One possible solution is federated learning, perhaps the most popular framework, mainly due to its feature of leveraging data while still preserving their privacy <cit.>. The technology enables a more collaborative approach to ML while preserving user privacy by storing data decentralized on distributed devices rather than on a central server. Combining deep transfer learning with federated learning is a promising and powerful combination in the abovementioned industrial applications. Beyond transfer learning Foundation models such as transformers <cit.>, diffusion models <cit.> and SAM <cit.> demonstrate emerging properties such as in-context learning <cit.> and complex cross-modality conditioning. This is achieved by training complex and often auto-regressive models with massive amounts of data, although the precise mechanisms that lead to this are not well understood. Some of those models generalize to new settings and tasks, without an explicit element of transfer learning. Thus, the application of foundational models in industrial time series analysis has the potential to reduce and eventually eliminate the need to explicitly account for changes in the domain in modeling, and instead, the foundational models will provide the transfer capability. To not only detect anomalies but also identify failure modes and elicit an appropriate intervention, AI systems must have some form of understanding, a world model, or, in other words, the AI has to implicitly or explicitly model causal relations. Counterfactual inference incorporates causal relations between observations and interventions which allows making predictions of outcomes that were never seen during training <cit.>. § CONCLUSIONS In this survey, we presented a comprehensive overview of (deep) transfer learning by defining transfer learning problem settings and categorizing the state-of-the-art deep transfer learning approaches. Equipped with this foundation, we selected representative examples of the landscape of fielded applications to provide practitioners with a guide to the field and possibilities of industrial time series anomaly detection. 
After carefully discussing open challenges, we gave practical directions for time series anomaly detection solution design and deep transfer learning implementation. We found that current applications focus on simple cases with simple datasets, neural network structures, and deep transfer learning schemes. Despite this, the survey suggests that deep transfer learning approaches have huge potential and promise for solving more complex and dynamic anomaly detection tasks in the industry. As the field is still in an early stage, more R&D is expected to fully realize the potential of deep transfer learning in increasingly complex settings. Future work should focus on developing robust transfer learning schemes and methods that can handle more complex and dynamic tasks. The following directions hold the greatest potential for future work: Automatic selection of transferable features <cit.> This refers to methods for selecting and transferring only the relevant knowledge from the base model. This could involve the use of techniques such as selective fine-tuning and distillation to identify the most important features and knowledge learned from source domains <cit.>. Investing in more complex deep transfer learning schemes and DNN architectures Most deep transfer learning approaches applied in industry focus on the parameter transfer approach as it is conceptually the simplest and readily applicable by interdisciplinary teams without ML research experience. It seems promising to invest in testing more appropriate deep transfer learning approaches according to different use cases, such as mapping transfer, adversarial transfer, etc. The same applies to testing diverse DNN architectures besides straightforward ones. Data-centric approach to real-time anomaly detection The data-centric approach focuses on improving ML models by ensuring high-quality labeled data <cit.> using techniques such as re-labeling, re-weighting, or data augmentation <cit.>. Currently, a human-in-the-loop solution is still needed; frameworks have been proposed to assist annotators with graph-based algorithms such as nearest neighbor graphs <cit.>, decision trees <cit.>, or factor graphs <cit.>. Although these methods have proven to be effective, a more automated process is a goal for future research. Integration with other ML methods To build robust AI approaches that solve time series anomaly detection in industry, focusing only on transfer learning will not be sufficient. Combinations with other ML approaches are needed in the future, such as continuous learning, meta-learning, and federated learning. § ACKNOWLEDGMENTS We would like to acknowledge Claudio Riginio for his constructive comments on an earlier draft of this paper and for his assistance with the illustrations. This work has been supported by Innosuisse grant 62174.1 IP-ENG “DISTRAL”.
http://arxiv.org/abs/2307.07343v2
20230714134635
MaxMin-L2-SVC-NCH: A Novel Approach for Support Vector Classifier Training and Parameter Selection
[ "Linkai Luo", "Qiaoling Yang", "Hong Peng", "Yiding Wang", "Ziyang Chen" ]
cs.LG
[ "cs.LG" ]
MaxMin-L2-SVC-NCH: A Novel Approach for Support Vector Classifier Training and Parameter Selection Linkai Luo, Qiaoling Yang, Hong Peng, Yiding Wang, Ziyang Chen This work was supported in part by the China Natural Science Foundation under Grant 62171391. (Corresponding author: Linkai Luo). Linkai Luo, Qiaoling Yang, Hong Peng, Yiding Wang and Ziyang Chen are with the Department of Automation, Xiamen University, Xiamen 361102, China, and also with the Xiamen Key Laboratory of Big Data Intelligent Analysis and Decision-Making, Xiamen, China (e-mail: [email protected]; [email protected]; [email protected]; [email protected]; [email protected];). ========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================= The selection of Gaussian kernel parameters plays an important role in the applications of support vector classification (SVC). A commonly used method is the k-fold cross validation with grid search (CV), which is extremely time-consuming because it needs to train a large number of SVC models. In this paper, a new approach is proposed to train SVC and optimize the selection of Gaussian kernel parameters. We first formulate the training and parameter selection of SVC as a minimax optimization problem named MaxMin-L2-SVC-NCH, in which the minimization problem is an optimization problem of finding the closest points between two normal convex hulls (L2-SVC-NCH) while the maximization problem is an optimization problem of finding the optimal Gaussian kernel parameters. A lower time complexity can be expected in MaxMin-L2-SVC-NCH because CV is not needed. We then propose a projected gradient algorithm (PGA) for training L2-SVC-NCH. The famous sequential minimal optimization (SMO) algorithm is a special case of the PGA. Thus, the PGA can provide more flexibility than the SMO. Furthermore, the solution of the maximization problem is done by a gradient ascent algorithm with a dynamic learning rate. The comparative experiments between MaxMin-L2-SVC-NCH and the previous best approaches on public datasets show that MaxMin-L2-SVC-NCH greatly reduces the number of models to be trained while maintaining competitive test accuracy. These findings indicate that MaxMin-L2-SVC-NCH is a better choice for SVC tasks. Support vector classification (SVC), the selection of kernel parameters, minimax optimization, closest points, normal convex hull, projected gradient algorithm. § INTRODUCTION The support vector classifier (SVC) <cit.> is one of the most successful machine learning methods. The SVC with a Gaussian radial basis function (GRBF) kernel performs well on many classification tasks <cit.>. Although SVC is a traditional method, there is still much recent research on improving SVC <cit.>. A major disadvantage of SVC is the high time cost of selecting the model's parameters. The model's parameters include the penalty parameter C for empirical risk and the Gaussian kernel parameter γ. They need to be tuned for specific tasks so that a good performance can be obtained.
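To make the cost of the CV baseline concrete, the grid-search cross-validation procedure can be written in a few lines with scikit-learn; the dataset and grid values below are placeholders, and every (C, γ) grid point costs k model fits:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC
import numpy as np

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

param_grid = {
    "C": np.logspace(-2, 3, 6),        # candidate penalty values
    "gamma": np.logspace(-3, 2, 6),    # candidate Gaussian kernel parameters
}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X, y)

# 6 x 6 grid points x 5 folds = 180 SVC models trained, plus a final refit.
print(search.best_params_, search.best_score_)
```

The approach proposed in this paper aims to avoid exactly this multiplicative growth in the number of trained models.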
The commonly used method to tune C and γ is the k-fold cross-validation based on grid search (CV), which is extremely time-consuming. In CV, the candidate values of C and γ are obtained with grid division on some value intervals and the dataset is randomly divided into k equal-sized subsets. The optimal (C,γ) is the one with the highest average cross-validation accuracy. A large number of SVC models needs to be trained to compute the average cross-validation accuracies at each grid-point (C,γ). Thus, the CV is extremely time-consuming. Many methods have been proposed to reduce the time cost of SVC. Wang et al. (2003) <cit.> applied a Fisher discriminant function calculated from the GRBF to choose γ and verified the effectiveness of the selected γ on a synthetic dataset. However, they do not provide comparative experiments on real datasets. Tsang et al. (2005) <cit.> proposed the core vector machine (CVM), which is faster than traditional SVC models with comparable accuracy. However, the CVM does not involve tuning the optimal (C,γ), i.e., the method of tuning (C,γ) is still CV and the time cost reduction is due to a lower training cost for the SVC model with given (C,γ). Sun et al. (2010) <cit.> proposed a novel method to tune γ by maximizing the distance between two classes (DBTC) in the feature space. They showed that the DBTC implicitly takes the between-class separation into account with a normalized kernel function and that the DBTC-based methods outperform the Span bound, the Radius/Margin bound, kernel Fisher discriminant models and the radial basis function network on most of the benchmark datasets. However, the comparison between the DBTC and the commonly used CV is not provided. Menezes et al. (2019) <cit.> applied a density estimation-based approach to tune γ by maximizing a dissimilarity function and obtained accuracy close to that of traditional SVC models. However, the dissimilarity function is very complicated and the maximization problem is not easy to solve. In addition, the tuning of γ is independent of training SVC, i.e., it belongs to the filter approach rather than the more efficient wrapper approach. Akram-Ali-Hammouri et al. (2021) <cit.> proposed the fast support vector classification (FSVC), which achieves a large improvement in training time, especially for large-scale problems. However, the accuracy (Kappa) on small datasets is inferior to that of the traditional SVC, which indicates that the principle of the FSVC still has shortcomings. Evolutionary computation is often used to choose the hyper-parameters of SVC. Friedrichs and Igel (2005) <cit.> applied the covariance matrix adaptation evolution strategy to choose the hyper-parameters of SVC, while Tharwat et al. (2017) <cit.> utilized the Bat algorithm to optimize the parameters of SVC. However, the values of the fitness functions are calculated by training SVC models, i.e., a large number of SVC models still need to be trained and the time cost is still huge. Although many methods for reducing the time cost of SVC exist, as described above, they have shortcomings in accuracy or efficiency. As far as we know, CV is still the most commonly used method for the selection of hyper-parameters. In this paper, we propose a new method to train SVC including the selection of hyper-parameters, so that the time cost can be significantly reduced and the accuracy is not inferior to the traditional method. Peng et al.
(2011) <cit.> proposed a soft-margin support vector model based on normal convex hulls (L2-SVC-NCH) for binary classification. The traditional soft-margin SVC, i.e., the SVC for the nonlinearly separable case, is derived by finding the closest points between two reduced convex hulls (RCHs) <cit.>. Compared to a normal convex hull (NCH), an RCH is hard to understand. In L2-SVC-NCH, however, the soft-margin SVC is modeled by finding the closest points between the positive NCH and the negative NCH, so that the incomprehensible RCH is removed. Therefore, L2-SVC-NCH has a good intuitive geometric interpretation. The distance of the closest points is just the distance between the positive NCH and the negative NCH. The larger the distance of the closest points, the better the generalization performance of the SVC. This indicates that tuning the hyper-parameters of SVC can be considered as maximizing the distance of the closest points of the two NCHs. Therefore, we model training SVC and the selection of hyper-parameters as a minimax optimization problem (MaxMin-L2-SVC-NCH), in which the minimization problem is to find the closest points between the two NCHs, i.e., L2-SVC-NCH, while the maximization problem is to find the hyper-parameters that maximize the distance of the closest points. A low time cost can be expected in MaxMin-L2-SVC-NCH because CV is not needed. To solve MaxMin-L2-SVC-NCH quickly, a projected gradient algorithm (PGA) is proposed for training L2-SVC-NCH, while a gradient ascent algorithm with a dynamic learning rate (GA-DLR) is used in the solution of the maximization problem. The major contributions of this paper are as follows. * The training and the selection of Gaussian kernel parameters of SVC are modeled as a minimax problem, MaxMin-L2-SVC-NCH, in which CV is not needed. * The PGA is proposed for training the minimization problem L2-SVC-NCH. It is revealed that the famous sequential minimal optimization (SMO) algorithm is a special case of the PGA and that the PGA can provide more flexibility. * The GA-DLR algorithm is proposed for the solution of the maximization problem so that the optimal Gaussian kernel parameters can be obtained by gradient-based methods. * MaxMin-L2-SVC-NCH greatly reduces the time complexity while maintaining a competitive accuracy compared to the previous best approaches, which indicates that it is a better choice for SVC tasks. The remainder of this paper is organized as follows. Section 2 provides a concise introduction to L2-SVC-NCH. In Section 3, we first propose MaxMin-L2-SVC-NCH as an innovative approach for training SVC and selecting kernel parameters. We then present the PGA and the GA-DLR algorithm for the solutions of the minimization problem and the maximization problem, respectively. A gradient-based algorithm is finally provided for the solution of MaxMin-L2-SVC-NCH by connecting the PGA and the GA-DLR in series. To illustrate that the SMO algorithm is a special case of the PGA, the Karush-Kuhn-Tucker (KKT) conditions and the SMO algorithm of L2-SVC-NCH, as well as the comparison between the SMO and the PGA, are also provided in Section 3. Experimental results and discussions are presented in Section 4. Finally, in Section 5, we conclude the paper and provide some potential directions for future research. § RELATED WORK Given a training set T = { (x_i,y_i) | x_i ∈𝐑^n, y_i ∈{+1, -1}, i = 1,...,l}, L2-SVC-NCH <cit.> is min_α 1/2α^T( G + I/C)α s.t. 
∑_i ∈ID^+α_i = ∑_i ∈ID^-α_i = 1, α_i≥ 0, i = 1,2,...,l where G=[ y_i y_j k(x_i,x_j)]_l× l, k(·,·) is a given kernel function, I is the unit matrix, C is the penalty parameter on the empirical risk, and ID^+ and ID^- are the index sets of positive samples and negative samples respectively, i.e., ID^+ = { i | y_i = 1, i = 1,2,...,l},  ID^- = { i | y_i = - 1, i = 1,2,...,l}. Suppose α^* is an optimal solution of (<ref>); then the decision function for the binary classification problem is y(x) = sign( ∑_i ∈ Sy_iα_i^*k(x,x_i) - ( p^* + q^*)/2) where p^* = ∑_i ∈ Sy_iα_i^*k( x_j,x_i) + y_jα_j^*/2 ( ∃ α_j^* > 0,y_j = 1), q^* = ∑_i ∈ Sy_iα_i^*k( x_j,x_i) + y_jα_j^*/2 ( ∃ α_j^* > 0,y_j = - 1 ), and S is the collection of support vectors, i.e., S = { i | α_i^* > 0, i = 1,2,...,l}<cit.>. G is often called the Gram matrix of the kernel function k(·,·). In fact, (G+I/C) can be viewed as the Gram matrix of a modified kernel k̃(·,·), where k̃( x_i,x_j) = k( x_i,x_j) + 1/Cδ_ij and δ_ij is the Kronecker delta kernel function, i.e., δ_ij = {1,   if i = j, 0,   if i ≠ j. L2-SVC-NCH has many advantages compared with traditional SVCs. Table <ref> summarizes the advantages of L2-SVC-NCH <cit.>. § PROPOSED METHOD In this section, we first formulate the training and parameter selection of L2-SVC-NCH as the minimax problem MaxMin-L2-SVC-NCH. Subsequently, we present the PGA for the solution of the minimization problem after the KKT conditions of L2-SVC-NCH are derived. Additionally, it is revealed by a rigorous comparison that the famous SMO algorithm is a special case of the PGA. Moreover, the GA-DLR algorithm is proposed for the solution of the maximization problem. By connecting the PGA and the GA-DLR in series, a gradient-based algorithm is finally provided for the solution of MaxMin-L2-SVC-NCH. §.§ The minimax problem In the application of L2-SVC-NCH, it is necessary to choose an appropriate kernel function and the trade-off parameter C between the ordinary kernel function and the Kronecker delta kernel function. Considering that the Gaussian kernel is the most commonly used kernel function, we choose it as the kernel function. In addition, the parameter C is set to a constant since it plays a small role once the Gaussian kernel is introduced. The reason why C plays a small role when the Gaussian kernel is introduced is as follows. First, we can always select a Gaussian kernel so that the two-class problem is linearly separable in the mapped space, while C is not needed for a linearly separable problem since the empirical risk can be reduced to zero by selecting a suitable hyperplane. Further, the distances in the mapped space for k̃( x_i,x_j) (the kernel function with C) and k( x_i,x_j) (the kernel function without C) are d̃^2( x_i,x_j) = 2 + 2/C - 2k( x_i,x_j) and d^2( x_i,x_j) = 2 - 2k( x_i,x_j), respectively. There is no essential difference between d̃^2( x_i,x_j) and d^2( x_i,x_j) because d̃^2( x_i,x_j) can be viewed as a translation of d^2( x_i,x_j) by 2/C. Finally, k̃( x_i,x_j) is a weighted sum of two Gaussian kernel functions from (<ref>), since δ_ij can be viewed as the Gaussian kernel with parameter γ = + ∞. Thus, it is conceivable that a single Gaussian kernel function could be found to replace k̃( x_i,x_j). Menezes et al. <cit.> also verify experimentally that C is unimportant when the Gaussian kernel is introduced. L2-SVC-NCH is interpreted as finding the closest points between two normal convex hulls. If the kernel function and C are given, L2-SVC-NCH is a minimization problem. 
However, the choice of the kernel function is a maximization problem since its goal is to find the closest points with the largest distance. Therefore, training L2-SVC-NCH with the choice of the Gaussian kernel can be modeled as a minimax problem, i.e., max_γ min_α 1/2α^T[ y_iy_je^-γ‖x_i - x_j‖^2 + 1/Cδ_ij]_l × lα s.t.  ∑_i ∈ID^+α_i = ∑_i ∈ID^-α_i = 1, 0 ≤α_i≤ 1,i = 1,2,⋯,l where γ is the parameter of the Gaussian kernel. For convenience, we abbreviate the model (<ref>) as MaxMin-L2-SVC-NCH. In the traditional SVC, k-fold cross-validation based on grid search is commonly used for the choice of model parameters, which is extremely time-consuming. Suppose the number of grid points is m; then m× k SVC models need to be trained to obtain suitable model parameters. However, the choice of model parameters in (<ref>) is modeled as a maximization problem, i.e., it can be solved by gradient-based algorithms. Let f( α,γ) = 1/2α^T[ y_iy_je^-γ‖x_i - x_j‖^2 + 1/Cδ_ij]_l × lα. A feasible point ( α^*,γ^*) is called a saddle point if it satisfies f( α^*,γ) ≤ f( α^*,γ^*) ≤ f( α,γ^*), i.e., α^* is the minimum point of f( α,γ^*) while γ^* is the maximum point of f( α^*,γ). From the definition of a saddle point, we see that a saddle point is a local optimum of (<ref>). An alternating gradient-based optimization between α and γ will be provided in Section 3.C to find a local optimal solution of (<ref>). §.§ The solution of the minimization problem 1) The KKT conditions L2-SVC-NCH is a strictly convex quadratic programming problem. Thus, the KKT condition is a necessary and sufficient condition for the optimal solution. The following Theorem 1 provides a KKT condition of the optimal solution for L2-SVC-NCH. Theorem 1 Let f(α) be the objective function of L2-SVC-NCH. A KKT condition of the optimal solution α for L2-SVC-NCH is [ ∀ 0 < α_i^+ < 1, - ∇_if(α) = p,; ∀ α_i^+ = 0, - ∇_if(α) ≤ p,; ∀ α_i^+ = 1, - ∇_if(α) ≥ p,; ∀ 0 < α_i^- < 1, - ∇_if( α) = q,; ∀ α_i^- = 0, -∇_if(α) ≤ q,; ∀ α_i^- = 1, -∇_if(α) ≥ q; ] where α_i^+ or α_i^- indicates that the i-th sample belongs to the positive or the negative class, ∇_if(α) is the i-th component of the gradient ∇f(α), and ∇ f(α) = (G + I/C)α. Proof The Lagrange function of L2-SVC-NCH is L(α,p,q,λ,β) = f(α) + p(( e^+)^Tα - 1) + q((e^-)^Tα - 1) - λ^Tα + β^T(α - e) where λ and β are the Lagrange multiplier vectors corresponding to the inequality constraints α≥ 0 and α≤ 1, p and q are the Lagrange multiplier variables corresponding to the equality constraints ( e^+)^Tα = 1 and ( e^-)^Tα = 1, e^+ is a column vector whose components corresponding to positive samples are one and all other components are zero, e^- is a column vector whose components corresponding to negative samples are one and all other components are zero, and e is a column vector with all components equal to one. L2-SVC-NCH is a convex quadratic programming problem. Thus, α is an optimal solution if and only if there are corresponding Lagrange multiplier variables p,q,λ,β satisfying the KKT condition {[ ∇ f(α) + pe^+ + qe^- - λ + β = 0,; λ^Tα = 0, β^T( α - e) = 0,λ≥ 0,β≥ 0,; ( e^+)^Tα = 1,( e^-)^Tα = 1,0 ≤α≤ 1.; ]. For positive samples, if 0 < α_i^+ < 1⇒λ_i^+ = β_i^+ = 0⇒∇_if( α ) + p = 0 ⇒- ∇_if( α ) = p, if α_i^+ =0⇒β_i^+ = 0⇒∇_if( α) + p = λ_i^+≥ 0 ⇒ - ∇_if( α) ≤ p, if α_i^+ = 1⇒λ_i^+ = 0⇒∇_if( α) + p = - β_i^+≤ 0 ⇒ - ∇_if( α) ≥ p. 
For negative samples, we have similar results: if 0 < α_i^- < 1⇒λ_i^- = β_i^- = 0⇒∇_if( α) + q = 0 ⇒- ∇_if( α) = q, if α_i^- =0⇒β_i^- = 0⇒∇_if( α) + q = λ_i^-≥ 0 ⇒ - ∇_if( α) ≤ q, if α_i^- = 1⇒λ_i^- = 0⇒∇_if( α) + q = - β_i^-≤ 0 ⇒ - ∇_if( α) ≥ q. Combining the results of positive samples and negative samples, we can obtain (<ref>). The proof is finished. Let I_up^+ ={ i | α_i < 1 and y_i = 1}, I_low^+ ={ i|α_i > 0 and y_i = 1}, I_up^- = { i | α_i < 1 and y_i = - 1}, I_low^- = { i | α_i > 0 and y_i = - 1}. From Theorem 1, we have - ∇_if(α) ≤ p  if  i∈I_up^+, - ∇_if(α) ≥ p  if  i∈I_low^+, - ∇_if(α) ≤ q  if  i∈I_up^-, - ∇_if(α) ≥ q  if  i∈I_low^-. ⇒ m^+(α) ≤ M^+(α) and m^-(α) ≤M^-(α) where m^+(α) = max_i ∈ I_𝑢𝑝^+- ∇_if(α),M^+(α) = min_i ∈ I_𝑙𝑜𝑤^+- ∇_if(α), m^-(α) = max_i ∈ I_𝑢𝑝^-- ∇_if(α),M^-(α) = min_i ∈ I_𝑙𝑜𝑤^-- ∇_if(α). Corollary 1 Another KKT condition of the optimal solution α for L2-SVC-NCH is m^+(α) ≤ M^+(α) and m^-(α) ≤ M^-(α). The KKT condition (<ref>) satisfying a given precision ε can be descripted as [ ∀ 0 < α_i^+ < 1, | - ∇_if( α) - μ^+| ≤ε,; ∀ α_i^+ = 0, - ∇_if( α) ≤μ^+ + ε,; ∀ α_i^+ = 1, - ∇_if( α) ≥μ^+ - ε,; . ∀ 0 < α_i^- < 1, | - ∇_if( α) - μ^-| ≤ε, .; ∀ α_i^- = 0, - ∇_if( α) ≤μ^- + ε,; ∀ α_i^- = 1, - ∇_if( α) ≥μ^- - ε ] where μ^+ = ∑_0 < α_i^+ < 1( - ∇._if(α))/-l_+, μ^- = ∑_0 < α_i^- < 1( - ∇._if(α))/-l_-, -l_+ and -l_- are the number of samples that satisfy 0 < α_i^+ < 1 and 0 < α_i^- < 1 respectively. The KKT condition (<ref>) can be used in the corresponding algorithm. Similarly, the KKT condition (<ref>) satisfying a given precision can be descripted as m^+(α) ≤ M^+(α) + ε  and m^-(α)≤M^-(α) + ε. The KKT condition (<ref>) can be used in the sequential minimal optimization (SMO) algorithm based on maximal violation pair. 2) A projected gradient algorithm The most commonly used algorithms to solve the minimization problem L2-SVC-NCH in (<ref>) are SMO algorithms <cit.>. L2-SVC-NCH is a strictly convex quadratic programming problem, which can also be solved by PGA <cit.>. The PGA is flexible due to its simplicity. In fact, the SMO is a special case of the PGA by the comparison in Section 3.B. Therefore, here we provide the PGA of L2-SVC-NCH. In PGA, a feasible descent direction is first obtained by the projection of negative gradient vector with a projection matrix. Then, one-dimensional search in the feasible descent direction is performed so that the objective function is decreased. The constraint 0 ≤α_i≤ 1 can be expressed as 0 ≤. ( e._i)^Tα≤ 1 where e_i is a column vector whose the i-th component is one and other components are zero. Set M = [ E; ( e ^+ )^T; ( e^- )^T; ]_(l_0 + 2) × l where l_0 is the number of the effective constraints {. ( e._i)^Tα = 0 | i = 1,2,⋯,l}, E = ( . ( e._i)^T)_l_0× l, ( e^+)^T and ( e^-)^T are the coefficient vectors corresponding to the equality constrains ( e ^+)^Tα = 1 and (e^- )^Tα = 1 respectively. The projection matrix P = I - M^T( MM^T)^- 1M. The projection of negative gradient vector with P is d = - P∇ f(α) where d_i = {0, if α_i = 0, - ∇_if(α) - μ^+, if α_i > 0 and y_i = 1,  - ∇_if(α) - μ^-, if α_i > 0 and y_i = - 1, i = 1,2,⋯,l. . The corresponding multiplier variables for the effective constraints and the equality constraints are μ = - ( MM^T)^- 1M∇ f(α) where μ_i = {∇_if(α) + μ^+ for . ( e._i)^Tα = 0 and y_i = 1, ∇_if(α) +μ^- for . ( e._i)^Tα = 0 and y_i = - 1, μ^+   for  . ( e.^+)^Tα = 1, μ^-   for  ( e^-)^Tα = 1, i = 1,2,⋯,l_0 + 2. . If d≠0, then d is a feasible descent direction. If d=0 and μ_i≥ 0 for . 
( e._i)^Tα = 0, then α is an optimal solution. If d=0 and there is μ_i < 0 for . ( e._i)^Tα = 0, then a non-zero feasible descent direction can be obtained by removing the corresponding row from M and re-computing the projection matrix P with the new M. After a non-zero feasible descent direction d is obtained, the one-dimensional search is modeled as η min 1/2( . α + ηd)^T(G + I/C)(α + ηd) . s.t.  0 ≤η≤η_max i.e., η min 1/2d^T(G + I/C)dη^2 + d^T(G + I/C)αη s.t.  0 ≤η≤η_max where η_max = min{η_max^+, η_max^-}, η_max^+ = min_i:α_i > 0,y_i = 1,∇_if(α) + μ^+ > 0α_i/( ∇_if( α) + μ^+), η_max^- = min_i:α_i > 0,y_i = - 1,∇_if(α) + μ^- > 0α_i/( ∇_if( α) + μ^-). Set g(η) = 1/2d^T(G + I/C)dη^2 + d^T(G + I/C)αη. Let g'(η) = 0, we obtain -η = - d^T(G + I/C)α/d^T(G + I/C)d. Thus, the optimal solution of (<ref>) is η^* = {-η, if 0 ≤-η≤η_max,    0,   if -η < 0,     η_max, if -η > η_max. . and α is updated according to α = α + η^*d. A projected gradient algorithm for L2-SVC-NCH is provided in the following Algorithm 1. 3) A SMO algorithm To carry out the comparison of the PGA and SMO algorithms, we also provide a SMO algorithm of L2-SVC-NCH. SMO algorithms generally consist of two main steps: finding two variables by some heuristic rules and solving the optimization problem formed by the two variables. For L2-SVC-NCH, the two variables can be obtained with maximal violating pair. A maximal violating pair refers to a pair of samples with the most violation to KKT condition (<ref>). If the KKT condition (<ref>) is not satisfied, and i^+ = arg max_i ∈ I_𝑢𝑝^+- ∇_if(α),j^+ = argmin_i ∈ I_𝑙𝑜𝑤^+- ∇_if(α), i^- = arg max_i ∈ I_𝑢𝑝^-- ∇_if(α),j^- = argmin_i ∈ I_𝑙𝑜𝑤^-- ∇_if(α), then the most violating pair is ( i,j) = argmax_(i^+,j^+),(i^-,j^-){ m^+( α_i^+) - M^+( α_j^+), . m^-( α_i^-) - M^-( α_j^-)}. After the most violation pair ( i,j) is obtained, the optimization problem formed by the two variables is max_α_i,α_j 1/2( k( x_i,x_i) + C)α_i^2 + 1/2( k( x_j,x_j) + C)α_j^2 + y_iy_jk( x_i,x_j)α_iα_j + v_iα_i + v_jα_j s.t.  α_i + α_j = α_i^old + α_j^old = Const, 0 ≤α_i,α_j≤ 1 where v_i = y_i∑_k ≠ i,jy_kk( x_i,x_k)α_k,v_j = y_j∑_k ≠ i,jy_kk( x_j,x_k)α_k. Suppose ( α_i^uc,α_j^uc) is the optimal solution of (<ref>) without considering the constraint 0 ≤α_i,α_j≤ 1, then α_i^uc = ( k( x_j,x_j) + C)( α_i^old + α_j^old) - y_iy_jk( x_i,x_j)( α_i^old + α_j^old) - v_i + v_j/k( x_i,x_i) + k( x_j,x_j) + 2C - 2y_iy_jk( x_i,x_j) and α_j^uc = α_i^old + α_j^old - α_i^uc. The optimal solution of (<ref>) with the constraint 0 ≤α_i,α_j≤ 1 is . ( α._i^*,α_j^*) = {. ( α._i^uc,α_j^uc), if 0 ≤α_i^uc≤α_i^old + α_j^old,   ( 0, α_i^old + α_j^old),     if α_i^uc < 0,           ( α_i^old + α_j^old,0 ), if α_i^uc > α_i^old + α_j^old.   . A SMO algorithm for L2-SVC-NCH is provided in the following Algorithm 2. 4) A comparison of the PGA and the SMO algorithm The space complexity of the PGA and the SMO algorithm for solving L2-SVC-NCH are the same order, and their time complexity are also close. The variables in the PGA and the SMO that need to be stored are G,α,∇ f(α), which are the same. The number of multiplication in one iteration of the PGA is O( 2l ^2 + 2l ) while it is O( l ^2 + 4l ) for the SMO. Thus, the number of multiplication of the PGA and the SMO algorithm are O( ( 2l^2 + 2l) × m_p) and ( ( l^2 + 4l) × m_s) where m_p and m_s are the number of iteration for the PGA and the SMO. Considering that ( 2l ^2 + 2l ) > ( l ^2 + 4l ) and m_p≪ m_s, their time complexity are close. 
Further, the SMO algorithm can be regarded as a special case of the PGA. In the original PGA, almost all components of α (except those with α_i = 0) are updated in one iteration. However, the PGA can also update only two components, as the SMO does. From (<ref>), if we need to select two components among the positive samples to be updated, we can select the two components with the largest standard deviation, i.e., ( i^+,j^+) = arg max_y_i = y_j = 1std{- ∇_if(α),- ∇_jf(α) }. If (i^+,j^+ ) satisfies (<ref>), then d_i^+ and d_j^+ are large. Thus α_i^+ and α_j^+ can be updated with large values. The ( i^+,j^+) that satisfies (<ref>) can also be obtained by i^+ = arg max_y_i = 1- ∇_if(α),  j^+ = arg min_y_j = 1- ∇_jf(α). If we select two components among the negative samples to be updated, then i^- = arg max_y_i = - 1- ∇_if(α),  j^- = arg min_y_j = - 1- ∇_jf(α). By comparing (<ref>) with (<ref>) and (<ref>), we see that the two variables selected by the PGA and the SMO are the same. In addition, both the solution obtained from (<ref>) and the one obtained from (<ref>) are optimal solutions of the two-variable optimization problem. Thus, the solutions of the PGA and the SMO are the same since the optimal solution is unique. Hence, the SMO is a special case of the PGA in which only two components of the projected gradient vector are non-zero. In fact, the number of components updated in each iteration by the PGA can vary from two to all. Hence, the PGA is more flexible than the SMO. §.§ The solution of the maximization problem Since the PGA is more flexible and the SMO algorithm is a special case of it, we apply the PGA to solve L2-SVC-NCH. Thus, a gradient-based algorithm based on an alternating optimization between α and γ is proposed to solve the minimax problem (<ref>), where the minimization problem L2-SVC-NCH is solved by the PGA and the maximization problem is solved by a gradient ascent method. The gradient for L2-SVC-NCH with a given Gaussian kernel parameter γ is ∇ f(α) = [ y_iy_je^-γ‖x_i - x_j‖^2 + 1/Cδ_ij]_l × lα. The maximization problem in (<ref>) is a univariate optimization problem. Suppose α^* is an optimal solution of L2-SVC-NCH; then the derivative for the maximization problem with the given α^* is f^'(γ) = 1/2( α^*)^T[ y_iy_j( - ‖x_i - x_j‖^2)e^-γ‖x_i - x_j‖^2]_l × lα^*. A gradient-based rule for updating γ is γ = γ + η f^'(γ) where η is the learning rate. We provide a dynamic-learning-rate strategy for the selection of η so that the iteration of γ can be sped up. The principle of the strategy is as follows: η takes a relatively large value if the latest update for γ is large; otherwise, η takes a small value because γ may be near the extreme point. The specific strategy is η = | γ^𝐧𝐞𝐰 - γ^𝐨𝐥𝐝| and the initial value of η is set to 1. In addition, η needs to ensure that f(γ) is ascending. If f(γ) is not ascending, η is halved until f(γ) is ascending. §.§ The solution of the minimax problem By connecting the PGA and the GA-DLR in series, a gradient-based (GB) algorithm is obtained for the solution of MaxMin-L2-SVC-NCH. The following Algorithm 3 provides the GB algorithm, where α and γ are optimized alternately. If the gradient-based stopping condition with the given precision is satisfied in Algorithm 3, ( α^*,γ^*) is an approximately locally optimal solution of (<ref>). The source code for MaxMin-L2-SVC-NCH can be found at https://github.com/visitauto/MaxMin-L2-SVC-NCH. 
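As an illustration of the γ-update used inside Algorithm 3, the GA-DLR step might look as follows. This is a sketch under our own naming: the squared distances ‖x_i - x_j‖^2 are assumed to be precomputed, α is the current (approximate) minimizer returned by the PGA, and the box constraint on γ as well as the stopping tests of Algorithm 3 are omitted.

```python
import numpy as np

def ga_dlr_step(alpha, y, sqdist, gamma, eta, C=1.0):
    """One gradient-ascent step for gamma with a dynamic learning rate (GA-DLR)."""
    yy = y[:, None] * y[None, :]

    def f(g):                                    # objective f(alpha, gamma) for fixed alpha
        Q = yy * np.exp(-g * sqdist) + np.eye(len(y)) / C
        return 0.5 * alpha @ Q @ alpha

    # derivative of f with respect to gamma (the delta_ij / C term does not depend on gamma)
    fprime = 0.5 * alpha @ (yy * (-sqdist) * np.exp(-gamma * sqdist)) @ alpha

    # halve eta until the step actually increases the objective
    gamma_new = gamma + eta * fprime
    while f(gamma_new) < f(gamma) and eta > 1e-12:
        eta *= 0.5
        gamma_new = gamma + eta * fprime

    eta_next = abs(gamma_new - gamma)            # dynamic learning rate for the next step
    return gamma_new, eta_next
```

With the initial learning rate η = 1, repeated calls of this step, alternated with the PGA updates of α, give the gradient-based (GB) algorithm described above.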
§ EXPERIMENTS AND DISCUSSIONS In order to explore the effectiveness of the MaxMin-L2-SVC-NCH model, this section compares the performance of MaxMin-L2-SVC-NCH with some previous best approaches on a series of public datasets. In addition, the training effectiveness of MaxMin-L2-SVC-NCH is tested on some representative datasets. §.§ The datasets The commonly used datasets are Parkinsons, Sonar, Spectf, Heart, Ionosphere, Breast, Australian, German, Mushrooms and Phishing, which are downloaded from LIBSVM datasets <cit.> and UCI datasets <cit.>. All datasets belong to two-classes problem. Table <ref> lists the basic information of the datasets. §.§ The experimental scheme All models run in PyCharm 2021.2.3 on a DELL PC with i5-11400 2.60 GHz processors, 16.00 GB memory, and Windows 10.0 operating system. We compare some previous best approaches with our MaxMin-L2-SVC-NCH. The previous best approaches include the Bat-SVC, the commonly used CV-SVC, the Fisher-SVC and the LD-SVC. The Bat-SVC model, proposed by Yang et al., employs the bat algorithm to search for the optimal values of the Gaussian kernel parameter γ and penalty parameter C<cit.>. The parameter setting of Bat-SVC refers to Yang's paper. The CV-SVC model uses the classic grid search approach to identify the best values for γ and C. The Gaussian kernel parameter γ and the penalty parameter C are selected by 5-fold cross-validations in CV-SVC. The model is implemented by Sklearn Toolkit based on LIBSVM<cit.>. Fisher-SVC is a SVC model that employs fisher discriminant function to search for the optimal kernel parameters<cit.>. The parameter C is obtained through grid search. Similarly, LD-SVC uses kernel density estimation to calculate in likelihood space, searching for the best parameters within a given range of Gaussian kernel parameters<cit.>. The penalty parameter C is also obtained through grid search. MaxMin-L2-SVC-NCH is implemented by our GB algorithm. All datasets are standardized so that the mean value of each feature is 0 and the variance is 1 before the experiment. All datasets are randomly divided into training set and test set according to the ratio of 8:2. To eliminate the random effect of data division as much as possible, the random division is repeated thirty times. The mean and standard deviation (STD) of the classification accuracies over the thirty randomized trials are reported. Due to the large size of the last two datasets, MUS and PHI, only five repeated experiments were conducted for these two datasets. In all previous best approaches, the candidate sets are γ∈{2^-15, 2^-13, ⋯, 2^3} and C∈{2^-5, 2^-3, ⋯, 2^15}, which are referenced from <cit.>. In MaxMin-L2-SVC-NCH, γ is selected by the gradient-based rule from [2^-15, ⋯, 2^3] and the initial value is set by 0.004 that is the middle value between 2^-15 and 2^3. Choosing the middle value as the initial value is more conducive to find the best parameter. Unlike other baseline models, MaxMin-L2-SVC-NCH does not search for penalty parameters C. The parameter C in MaxMin-L2-SVC-NCH is fixed as 1 <cit.> since it plays a small role. The other parameters ε_1, ε_2, epoch_γ, epoch_α in GB algorithm are set as 10^-6, 10^-3, 500 and 2000, respectively. §.§ The experimental results and discussions 1) Accuracy According to the experimental settings described above, Table <ref> presents the average accuracy and standard deviation of the tests through a series of comparative experiments of various models. 
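Before turning to the results, the repeated random-split protocol described above can be summarized by a short sketch (assuming scikit-learn; the classifier constructed by make_model is a stand-in for any of the compared models, and dataset loading is omitted):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def repeated_accuracy(X, y, n_trials=30, make_model=lambda: SVC(kernel="rbf")):
    """Mean and standard deviation of the test accuracy over repeated random 8:2 splits."""
    X = StandardScaler().fit_transform(X)        # zero mean and unit variance per feature
    accs = []
    for seed in range(n_trials):
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=seed)
        clf = make_model().fit(X_tr, y_tr)
        accs.append(clf.score(X_te, y_te))
    return float(np.mean(accs)), float(np.std(accs))
```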
For the convenience of display, MaxMin-L2-SVC-NCH is shortened to MaxMin-SVC in the tables. Compared with the other models, MaxMin-L2-SVC-NCH performs better overall and is more stable on the ten datasets, while the other four baseline models show mixed performance. Bat-SVC is a metaheuristic search method that automatically searches for optimal parameters in the global space. Although Bat-SVC performed well on the SPE dataset with an accuracy rate of 79.63, it showed poor performance on other datasets and exhibited overall instability. This is due to the fact that the Bat-SVC algorithm searches for the best parameters globally, making it difficult to converge and more susceptible to getting stuck in a local solution. On the other hand, CV-SVC performs well on most datasets and shows the best performance on the HEA dataset with an accuracy of 81.91. As a classic algorithm, CV-SVC has good experimental results but needs to train a large number of models. Its performance is usually relatively stable, and a large number of cross-validations can effectively reduce the generalization error. Fisher-SVC performs best on the SPE, GER, and MUS datasets, shows average performance on other datasets, and performs poorly on the SON and PHI datasets. Fisher-SVC uses the discriminator to optimize the intra-class and inter-class distances, which requires extensive calculations. Selecting parameters by optimizing distances is also relatively unstable. Fisher-SVC is highly dependent on the distances of the training samples distributed in different kernel spaces, which makes it prone to overfitting the training data and harms generalization. Moreover, it is susceptible to noisy data. LD-SVC performs best on the ION dataset and exhibits relatively stable performance overall. LD-SVC uses the kernel density estimation method to estimate the sample distribution in the likelihood space, which increases the computational complexity since the likelihood function is not easy to evaluate. MaxMin-L2-SVC-NCH uses gradient information for training and shows excellent performance on most datasets, with relatively small standard deviations. Compared to the other models, MaxMin-L2-SVC-NCH exhibits better overall performance and stability on the ten datasets. In summary, Bat-SVC uses the bat algorithm to search for parameters globally, which makes it difficult to converge and results in unstable performance. On the other hand, the classic CV-SVC algorithm shows stable performance but requires a significant amount of time to search for optimal parameters. Fisher-SVC optimizes the intra-class and inter-class distances, which requires a large amount of computation and exhibits unstable performance. The LD-SVC model uses the kernel density estimation method, which has high computational complexity. Finally, MaxMin-L2-SVC-NCH utilizes gradient information and demonstrates excellent and stable performance. Additionally, the dynamic learning rate can be used to update the Gaussian kernel parameter quickly based on gradient information. This approach is applicable to different kinds of datasets. 2) Training cost Regarding model training efficiency, CV-SVC is the only model that directly selects parameters through model training, while the other models use different parameter search methods. Table <ref> provides a comparison of the number of models trained. Grid search with 5-fold cross-validation in CV-SVC results in a total of 550 trained models, which is time-consuming. 
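The count of 550 models for CV-SVC follows directly from the candidate grids given above: 10 values of γ, 11 values of C, and 5 folds. A minimal sketch with scikit-learn, whose GridSearchCV performs exactly these 10 × 11 × 5 = 550 fits (plus one final refit on the full training set), is:

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

param_grid = {
    "gamma": [2.0 ** k for k in range(-15, 4, 2)],   # 2^-15, 2^-13, ..., 2^3   (10 values)
    "C":     [2.0 ** k for k in range(-5, 16, 2)],   # 2^-5,  2^-3,  ..., 2^15  (11 values)
}
cv_svc = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
# cv_svc.fit(X_train, y_train)   # X_train, y_train as in the protocol above; 550 SVC models are trained
```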
The Bat-SVC algorithm employs 20 bats to select parameters through the bat algorithm and iterates 20 times, thus training a total of 400 SVC models. Fisher-SVC pre-searches the Gaussian kernel parameter through the discriminator and calculates the intra-class distance and inter-class distance of the samples by grid-searching the parameter γ. The training of the model is not involved here. The penalty parameter C is selected by grid search, so 11 models need to be trained in Fisher-SVC. The approach of Fisher-SVC is to maximize the inter-class distance and minimize the intra-class distance, which involves significant computation time and memory usage. The parameter-selection method of LD-SVC is similar to that of Fisher-SVC; the difference is that the LD-SVC algorithm replaces the discriminator with a kernel density estimator. The LD-SVC algorithm uses kernel density estimation to grid-search the parameter γ, and its parameter C is also selected by grid search, so that the method only needs to train 11 models. The kernel density estimator selects parameters by computations in the likelihood space of the samples, which also requires additional computing time. The MaxMin-L2-SVC-NCH model selects the Gaussian kernel parameter directly via gradient information in the training algorithm. The number of trained models depends on the number of algorithm update iterations, and MaxMin-L2-SVC-NCH stops automatically when the model satisfies the KKT conditions (<ref>). In the experiments, the GB algorithm meets the exit conditions before reaching the maximum number of iterations. This demonstrates the validity of MaxMin-L2-SVC-NCH. The average number of iterations is only 8.2 in MaxMin-L2-SVC-NCH. Our model does not require a heuristic search of the Gaussian kernel parameter in advance; it only needs to compute the gradient information of the Gaussian kernel during model training and uses the dynamic learning rate to iterate quickly while directly updating the Gaussian kernel parameter. Another reason for the efficient training of MaxMin-L2-SVC-NCH is that it does not require a grid search for the parameter C. In MaxMin-L2-SVC-NCH, we evaluate different Gaussian kernel parameters with a fixed C from (<ref>). This is fair because the weights of the two terms in the objective function are the same for different Gaussian kernel parameters. However, CV-SVC, Bat-SVC, Fisher-SVC, and LD-SVC cannot do this in the way MaxMin-L2-SVC-NCH does. 3) The effectiveness of training MaxMin-L2-SVC-NCH To analyze the training process of MaxMin-L2-SVC-NCH, we provide the changes of f'(γ) and of the ratio of the inter-class distance to the intra-class variance on four representative datasets: Parkinsons, Heart, German, and Mushrooms. Parkinsons, Heart, and Mushrooms are natural data, while German is economic data. The sizes of Parkinsons, Heart, and German are small, while the size of Mushrooms is relatively large. Fig.1 shows the changes of f'(γ) during the training of MaxMin-L2-SVC-NCH on the representative datasets. From Fig.1, we see that all f'(γ) quickly converge to zero with the required accuracy after a small number of iterations. This indicates that the training process of MaxMin-L2-SVC-NCH is efficient. Fig.2 provides the ratio of the inter-class distance to the intra-class variance, i.e., the Fisher discriminant function <cit.>. 
The inter-class distance is defined by D_Inter = ‖m^+- m^-‖^2 =( 1/l^+)^2∑_i = 1^l^+∑_j = 1^l^+k( x_i,x_j) + ( 1/l^-)^2∑_i = 1^l^-∑_j = 1^l^-k( x_i,x_j) - 2/l^+l^-∑_i = 1^l^+∑_j = 1^l^-k( x_i,x_j) where m^+ = 1/l^+∑_i = 1^l^+Φ( x_i) and m^- = 1/l^-∑_i = 1^l^-Φ( x_i). The intra-class variance is defined by D_Inner = 1/l^+∑_i = 1^l^+‖m^+ - Φ( x_i)‖^2 + 1/l^-∑_i = 1^l^-‖m^- - Φ( x_i)‖^2. From Fig.2, we see that the ratios of the inter-class distance to the intra-class variance all converge to relatively large values. This indicates that our algorithm also achieves an effect similar to maximizing the ratio of the inter-class distance to the intra-class variance. § CONCLUSIONS A novel method, MaxMin-L2-SVC-NCH, is proposed for training SVC and selecting Gaussian kernel parameters. MaxMin-L2-SVC-NCH is a minimax problem, where the minimization problem is L2-SVC-NCH and the maximization problem aims to select the optimal Gaussian kernel parameters. A lower time complexity is expected in MaxMin-L2-SVC-NCH because the time-consuming CV is not needed. To solve L2-SVC-NCH quickly and efficiently, the PGA is proposed. The PGA provides more flexibility than the famous SMO algorithm since the SMO can be viewed as a special case of the PGA. For the solution of the maximization problem, the GA-DLR algorithm is proposed. A gradient-based algorithm is subsequently provided for the solution of MaxMin-L2-SVC-NCH by connecting the PGA and the GA-DLR algorithm in series. Experimental results on public datasets reveal that MaxMin-L2-SVC-NCH significantly reduces the number of trained models while maintaining competitive testing accuracy compared to the previous best approaches. These findings reveal an enhanced performance of MaxMin-L2-SVC-NCH, which indicates that it may be a better choice for SVC tasks. Future work includes optimizing the implementation of MaxMin-L2-SVC-NCH so that it is suitable for very large datasets. Additionally, investigating the impact of the initial value of the kernel parameter γ on MaxMin-L2-SVC-NCH is worthy of further exploration. Furthermore, extending MaxMin-L2-SVC-NCH to encompass multi-class and regression problems is also an interesting task. 
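As a side note to the evaluation in Fig.2, the kernel-space quantities D_Inter and D_Inner defined above can be computed directly from the kernel (Gram) matrix; a small sketch (with our own names, K being the kernel matrix and y the labels in {+1, -1}):

```python
import numpy as np

def fisher_ratio(K, y):
    """Ratio D_Inter / D_Inner evaluated from the kernel matrix K (K[i, j] = k(x_i, x_j))."""
    pos, neg = y == 1, y == -1
    lp, ln = pos.sum(), neg.sum()
    Kpp, Knn, Kpn = K[np.ix_(pos, pos)], K[np.ix_(neg, neg)], K[np.ix_(pos, neg)]
    # squared distance between the two class means in the mapped space
    d_inter = Kpp.sum() / lp ** 2 + Knn.sum() / ln ** 2 - 2.0 * Kpn.sum() / (lp * ln)
    # average squared distance of the samples to their own class mean
    d_inner = (np.trace(Kpp) / lp - Kpp.sum() / lp ** 2) + (np.trace(Knn) / ln - Knn.sum() / ln ** 2)
    return d_inter / d_inner
```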
http://arxiv.org/abs/2307.04022v1
20230708175848
Explicit a posteriori error representation for variational problems and application to TV-minimization
[ "Sören Bartels", "Alex Kaltenbach" ]
math.NA
[ "math.NA", "cs.NA", "math.OC", "35Q68, 49M25, 49M29, 65N30, 65N50" ]
Sören Bartels (Department of Applied Mathematics, University of Freiburg, Hermann–Herder–Straße 10, 79104 Freiburg) and Alex Kaltenbach (Institute of Mathematics, Technical University of Berlin, Straße des 17. Juni 136, 10623 Berlin) Explicit a posteriori error representation for variational problems and application to TV-minimization August 12, 2023 ======================================================================================================== In this paper, we propose a general approach for explicit a posteriori error representation for convex minimization problems using basic convex duality relations. Exploiting discrete orthogonality relations in the space of element-wise constant vector fields as well as a discrete integration-by-parts formula between the Crouzeix–Raviart and the Raviart–Thomas element, all convex duality relations are transferred to a discrete level, making the explicit a posteriori error representation –initially based on continuous arguments only– practicable from a numerical point of view. In addition, we provide a generalized Marini formula for the primal solution that determines a discrete primal solution in terms of a given discrete dual solution. We benchmark all these concepts via the Rudin–Osher–Fatemi model. This leads to an adaptive algorithm that yields a (quasi-optimal) linear convergence rate. 35Q68; 49M25; 49M29; 65N30; 65N50 § INTRODUCTION The numerical analysis of the approximation of variational problems is challenging when these are non-differentiable, degenerate, or involve constraints. In particular, following established concepts for linear elliptic partial differential equations often leads to sub-optimal results only. The framework of convex duality provides an attractive concept to reveal hidden information and structures to obtain quasi-optimal error representation formulas under meaningful regularity conditions. Similar to <cit.>, we first exploit this idea to derive explicit computable a posteriori error estimates for a natural error measure. Then, this general result is transferred to a non-differentiable model problem with discontinuous solutions. As a whole, our results, similar to <cit.>, show that the question of developing asymptotically exact a posteriori error estimators is rather a question of identifying optimal error quantities. However, different from <cit.>, we also propose a general approach for making our results practicable from a numerical point of view. Given a domain Ω⊆ℝ^d, d∈ℕ, a convex energy density ϕℝ→ℝ∪{+∞}, a (Lebesgue) measurable energy density ψΩ×ℝ→ℝ∪{+∞} that is convex with respect to the second argument, and a Banach space X consisting of functions defined in Ω, we denote by the minimization of the energy functional I X→ℝ∪{+∞}, for every v∈ X defined by I(v) ∫_Ωϕ(∇ v) dx + ∫_Ωψ(·, v) dx , the primal problem. Its (Fenchel) dual problem consists in the maximization of the functional D Y→ℝ∪{-∞}, where Y is a Banach space consisting of vector fields defined in Ω, which for every y∈ Y is defined by D(y) -∫_Ωϕ^*(y) dx - ∫_Ωψ^*(·, div y) dx . Here, ϕ^*ℝ^d→ℝ∪{+∞} and ψ^*Ω×ℝ→ℝ∪{+∞} (with respect to the second argument) denote the (Fenchel) conjugates of ϕℝ→ℝ∪{+∞} and ψΩ×ℝ→ℝ∪{+∞}, respectively. Under rather general conditions, cf. 
<cit.>, we have the well-posedness of the primal problem and the dual problem, i.e., the existence of a minimizer u∈ X of (<ref>), i.e., a primal solution, and of a maximizer z∈ Y of (<ref>), i.e., a dual solution, and the strong duality relation min_v∈ X I(v) = I(u)= D(z) = max_y∈ Y D(y) . Since u∈X and z∈ Y are optimal for (<ref>) and (<ref>), respectively, it holds 0∈∂ I(u) and 0∈∂ D(z). In particular, for every v∈ X and y∈ Y, the quantities ρ_I^2(v,u) I(v) - I(u) , ρ_-D^2(y,z) D(z) - D(y) , are non-negative. They define distances, if (<ref>) and (<ref>), respectively, are strictly convex, and are called coercivity functionals or optimal convexity measures. For accessible and admissible approximations v∈ X and y∈ Y of the solutions u ∈ X and z ∈ Y, given the definitions (<ref>) and (<ref>), the strong duality relation (<ref>) implies the error identity ρ_I^2(v,u) + ρ_-D^2(y,z) = I(v) - D(z) η^2(v,y) . Hence, the fully computable error estimator η^2 X× Y→ℝ∪{+∞}, cf. (<ref>), exactly represents the sum of the primal and dual approximation errors, i.e., of (<ref>) and (<ref>). The error representation (<ref>) can be seen as a generalization of the Prager–Synge result, cf. <cit.>, which states that for the Poisson problem, i.e., ϕ1/2|·|^2∈ C^1(ℝ^d), ψ ((t,x)^⊤↦ -f(x)t) Ω×ℝ→ℝ∪{+∞}, where f∈ L^2(Ω), X W^1,2_D(Ω), and Y W^2_N(;Ω), for every v∈ W^1,2_D(Ω) and y∈ W^2_N(;Ω) with - y=f a.e. in Ω, we have that 12 ∇ v -∇ u_L^2(Ω;ℝ^d)^2 + 12 y - z _L^2(Ω;ℝ^d)^2 = 12 ∇ v-y ^2_L^2(Ω;ℝ^d) . The equation (<ref>) has been used by various authors to define error estimators; for a comprehensive list of references, we refer the reader to <cit.>. Often, local procedures are devised to construct an ad-missible vector field y∈ W^2_N(;Ω) with - y=f a.e. in Ω from a given function v∈ W^1,2_D(Ω). While this leads to efficient procedures to obtain accurate error estimators, the arguments cannot be expected to transfer to non-linear problems. Another alternative to computing approximations for the primal and dual problems consists in using finite element methods for which reconstruction formulas are available, e.g., using the discontinuous Crouzeix–Raviart finite element method and the Marini formula in the case of the Poisson problem, cf. <cit.>.7mm It has recently been found (cf. <cit.>) that the discontinuous Crouzeix–Raviart finite element method leads to quasi-optimal a priori error estimates for non-linear and non-differentiable problems, while continuous finite element methods provide only a sub-optimal convergence behavior. In the derivation of those results, a general discrete convex duality theory with Raviart–Thomas vector fields has emerged that also leads to reconstruction formulas in rather general settings. As a consequence, given an approximation v∈ X or y∈ Y, respectively, the missing one can be obtained via a simple post-processing procedure. Then, the pair leads to the error representation formula (<ref>). It should also be noted that neither v∈ X nor y∈ Y needs to be optimal in a subspace of X or Y. By introducing appropriate residuals, any pair of admissible approximations of u∈ X and z∈ Y can be used. This is particularly important for non-linear problems, i.e., non-quadratic functionals, where an exact solution of discrete problems is neither possible nor rational. A difficulty in the application of the explicit a posteriori error representation formula (<ref>) arises from the condition that v∈ X and y∈ Y need to be admissible for the functionals (<ref>) and (<ref>). 
In the case of the Poisson problem, this arises, e.g., via element-wise constant approximations of f∈ L^2(Ω) that are the images of Raviart–Thomas vector fields under the divergence operator. While data terms can be controlled by introducing appropriate data oscillation terms, structural peculiarities of the energy densities ϕℝ^d→ℝ∪{+∞} and ψΩ×ℝ→ℝ∪{+∞} and their (Fenchel) conjugates ϕ^*ℝ^d→ℝ∪{+∞} and ψ^*Ω×ℝ→ℝ∪{+∞} are often more challenging. We illustrate this by analyzing a non-differentiable problem which leads to a new error analysis and an adaptive refinement procedure for the computationally challenging  problem. With ϕ = |·|∈ C^0(ℝ^d) and ψ=((x,t)^⊤↦α/2(t-g(x))^2)Ω×ℝ→ℝ for a given function g∈ L^2(Ω), i.e., the noisy image, and a given parameter α>0, i.e., the fidelity parameter, the Rudin–Osher–Fatemi (ROF) model, cf. <cit.>, seeks a minimizing function u∈ BV(Ω)∩ L^2(Ω), i.e., the de-noised image, where BV(Ω) denotes the space of functions with bounded variation, for the functional I BV(Ω)∩ L^2(Ω)→ℝ, for every v∈ BV(Ω)∩ L^2(Ω) defined by I(v) |Dv|(Ω) + α2v-g_L^2(Ω)^2 , where |D(·)|(Ω)BV(Ω)→ [0,+∞] denotes the total variation functional. The (Fenchel)  problem to the minimization of the functional (<ref>) consists in the maximization of the functional D W_N^2(;Ω)∩ L^∞(Ω;ℝ^d)→ℝ∪{-∞}, for every y∈ W_N^2(;Ω)∩ L^∞(Ω;ℝ^d) defined by D(y) -I_K_1(0)(y)-12αdiv y+α g_L^2(Ω)^2+α2 g_L^2(Ω)^2 , where I_K_1(0)(y) 0 if | y|≤ 1 a.e. in Ω and I_K_1(0)(y) +∞ else. The primal solution u∈ BV(Ω) ∩ L^2(Ω), i.e., the unique minimizer of (<ref>), and a dual solution z∈ W_N^2(;Ω)∩ L^∞(Ω;ℝ^d), i.e., a (possibly non-unique) maximizer of (<ref>), are (formally) related via, cf. <cit.>, z ∈.{∇ u/|∇ u|} if |∇ u|>0 K_1(0) if |∇ u|=0 } a.e. in Ω , z = α (u-g) a.e. in Ω . The relations (<ref>) determine z∈ W_N^2(;Ω)∩ L^∞(Ω;ℝ^d) via u∈ BV(Ω)∩ L^2(Ω) and vice versa. A Crouzeix–Raviart finite element approximation of (<ref>) is given by the minimization of the regularized, discrete functional I_h,ε^cr𝒮^1,cr(𝒯_h)→ℝ, h,ε>0, for every v_h∈𝒮^1,cr(𝒯_h) defined by I_h,ε^cr(v_h) f_ε(|∇_h v_h| )_L^1(Ω) + α2Π_h(v_h-g)_L^2(Ω)^2 . Here, ∇_h is the element-wise application of the gradient operator and f_ε∈C^1(ℝ) is a regularization of the modulus |·|, and Π_h denotes the (local) L^2-projection onto element-wise constant functions. A quasi-optimal dual Raviart–Thomas vector field z_h,ε^rt∈ℛT^0_N(𝒯_h) can be associated with a minimizing function u_h,ε^cr∈𝒮^1,cr(𝒯_h) of I_h,ε^cr𝒮^1,cr(𝒯_h)→ℝ via the reconstruction formula z_h,ε^rt = f_ε'(|∇_h u_h,ε^cr|) |∇_h u_h,ε^cr|∇_h u_h,ε^cr + αΠ_h (u_h,ε^cr -g)d( id_ℝ^d- Π_h id_ℝ^d) in ℛT^0_N(𝒯_h) . For canonical choices of f_ε∈ C^1(ℝ), e.g., f_ε =|·|_ε= ((·)^2+ε^2)^1/2, it holds |Π_h z_h,ε^rt|≤ 1 a.e. in Ω, but not |z_h,ε^rt|≤ 1 a.e. in Ω. Thus, we employ f_ε = (1-ε) |·|_ε, so that |f_ε'(t)|≤ 1-ε for all t∈ℝ. The choice ε∼ h^2 in (<ref>) and an additional projection step onto K_1(0) lead to an accurate approximation z_h,ε^rt∈ℛT^0_N(𝒯_h) of z∈ W_N^2(;Ω)∩ L^∞(Ω;ℝ^d), which satisfies |z_h,ε^rt|≤ 1 a.e. in Ω and, thus, represents an admissible test function that leads to the definition of an error estimator. The resulting adaptive mesh-refinement procedure leads to significantly improved experimental convergence rates compared to recent related contributions, cf. <cit.>. 
More precisely, we report quasi-optimal linear convergence rates which have been obtained only for meshes with quadratic grading towards a sufficiently simple jump set of a  regular g in <cit.>.10mm This article is organized as follows: In Section <ref>, we introduce the employed notation and the relevant finite element spaces. In Section <ref>, we propose a general approach for explicit a posteriori error representation for convex minimization problems based on (discrete) convex duality relations. In Section <ref>, we transfer the concepts of Section <ref> to the Rudin–Osher–Fatemi model and propose a regularization scheme. In Section <ref>, we review our theoretical findings via numerical experiments. § PRELIMINARIES §.§ Convex analysis For a (real) Banach space X, which is equipped with the norm ·_X X→ℝ_≥ 0, we denote its corresponding (continuous) dual space by X^* equipped with the dual norm ·_X^* X^*→ℝ_≥ 0, defined by x^*_X^*sup_x_X≤ 1⟨ x^*,x⟩_X for every x^*∈ X^*, where ⟨·,·⟩_X X^*× X→ℝ, defined by ⟨ x^*,x⟩_X x^*(x) for every x^*∈ X^* and x∈ X, denotes the duality pairing. A functional F X→ℝ∪{+∞} is called sub-differentiable in x∈ X, if F(x)<∞ and if there exists x^*∈ X^*, called sub-gradient, such that for every y∈ X, it holds ⟨ x^*,y-x⟩_X≤ F(y)-F(x) . The sub-differential ∂ F X→ 2^X^* of a functional F X→ℝ∪{+∞} for every x∈ X is defined by (∂ F)(x){x^*∈ X^*|(<ref>) holds for x^*} if F(x)<∞ and (∂ F)(x)∅ else. For a given functional F X→ℝ∪{±∞}, we denote its corresponding (Fenchel) conjugate by F^* X^*→ℝ∪{±∞}, which for every x^*∈ X^* is defined by F^*(x^*)sup_x∈ X⟨ x^*,x⟩_X-F(x) . If F X→ℝ∪{+∞} is a proper, convex, and lower semi-continuous functional, then also its (Fen-chel) conjugate F^* X^*→ℝ∪{+∞} is a proper, convex, and lower semi-continuous functional, cf. <cit.>. Furthermore, for every x^*∈ X^* and x∈ X such that F^*(x^*)+F(x) is well-defined, i.e., the critical case ∞-∞ does not occur, the Fenchel–Young inequality ⟨ x^*,x⟩_X≤ F^*(x^*)+F(x) applies. In particular, for every x^*∈ X^* and x∈ X, it holds the Fenchel–Young identity x^*∈ (∂ F)(x) ⇔ ⟨ x^*,x⟩_X= F^*(x^*)+F(x) . The following convexity measures for functionals play an important role in the derivation of an explicit a posteriori error representation for convex minimization problems in Section <ref>; for further information, please refer to <cit.>. Let X be a (real) Banach space and F X→ℝ∪{+∞} proper, i.e., D(F){x∈ X| F(x)<∞}≠∅. (i) The σ^2_F D(F)× X→ [0,+∞] for every x∈ D(F) and y∈ X is defined by σ^2_F(y,x) F(y)-F(x)-sup_x^*∈ (∂ F)(x)⟨ x^*,y-x⟩_X , where we use the convention sup(∅)-∞. (ii) The σ^2_F D(F)^2→ [0,+∞] for every x,y∈ D(F) is defined by σ_F,s^2(y,x)σ_F^2(y,x)+σ_F^2(x,y)=inf_x^*∈ (∂ F)(x);y^*∈ (∂ F)(y)⟨ x^*-y^*,x-y⟩_X , where we use the convention inf(∅) +∞. Let X be a (real) Banach space and F X→ℝ∪{+∞} proper. Moreover, let x∈ X be minimal for F X→ℝ∪{+∞}. Then, the ρ^2_F X^2→ [0,+∞] x∈ X for every y∈ X is defined by ρ^2_F(y,x) F(y)-F(x)≥ 0 . Let X be a (real) Banach space and F X→ℝ∪{+∞} proper. Moreover, let x∈ X be minimal for F X→ℝ∪{+∞}. Then, due to 0∈ (∂ F)(x), for every y∈ X, it holds σ^2_F(y,x)≤ρ^2_F(y,x) . §.§ Function spaces Throughout the article, we denote by Ω⊆ℝ^d, d ∈ℕ, a bounded polyhedral Lipschitz domain, whose (topological) boundary is disjointly divided into a closed Dirichlet part Γ_D and an open Neumann part Γ_N, i.e., ∂Ω = Γ_D∪Γ_N and ∅ = Γ_D∩Γ_N. 3mm For p∈[1,∞] and l∈ℕ, we employ the standard notations[Here, W^-1/p,p(Γ_N) (W^1-1/p',p'(Γ_N))^* and W^-1/p,p(∂Ω) (W^1-1/p',p'(∂Ω))^*.] 
W^1,p_D(Ω;ℝ^l) {v∈ L^p(Ω;ℝ^l) |∇ v∈ L^p(Ω;ℝ^l× d), v=0 in L^p(Γ_D;ℝ^l)} , W^p_N(;Ω) {y∈ L^p(Ω;ℝ^d) | y∈ L^p(Ω), _n y=0 in W^-1/p,p(Γ_N)} , W^1,p(Ω;ℝ^l) W^1,p_D(Ω;ℝ^l) if Γ_D=∅, and W^p(;Ω) W^p_N(;Ω) if Γ_N=∅, where we  by W^1,p(Ω;ℝ^l)→L^p(∂Ω;ℝ^l) and by _n(·)W^p(;Ω)→W^-1/p,p(∂Ω), the trace and  trace operator, respectively. In particular, we always omit (·) and _n(·). In addition, we employ the abbreviations L^p(Ω) L^p(Ω;ℝ^1), W^1,p(Ω) W^1,p(Ω;ℝ^1), and W^1,p_D(Ω) W^1,p_D(Ω;ℝ^1). For (Lebesgue) measurable functions u,vΩ→ℝ and a (Lebesgue) measurable set M⊆Ω, we write (u,v)_M∫_Mu v dx , whenever the right-hand side is well-defined. Analogously, for (Lebesgue) measurable vector fields z,yΩ→ℝ^d and a (Lebesgue) measurable set M⊆Ω, we write (z,y)_M∫_Mz· y dx. Moreover, let |(·)|(Ω) L^1_(Ω) →ℝ∪{+∞}, for every v∈ L^1_(Ω) defined by[Here, C_c^∞(Ω;ℝ^d) denotes the space of smooth and in Ω compactly supported vector fields.] |v|(Ω)sup{-(v, ϕ)_Ω|ϕ∈ C_c^∞(Ω;ℝ^d); ϕ_L^∞(Ω;ℝ^d)≤ 1} , denote the total variation functional. Then, the space of functions with bounded variation is defined by BV(Ω){v∈ L^1(Ω)||v|(Ω)<∞} . §.§ Triangulations Throughout the entire paper, we denote by {𝒯_h}_h>0, a family of regular, i.e., uniformly shape regular and conforming, triangulations of Ω⊆ℝ^d, d∈ℕ, cf. <cit.>. Here, h>0 refers to the average mesh-size, i.e., if we set h_T(T) for all T∈𝒯_h, then, we have that h = 1/(𝒯_h)∑_T∈𝒯_hh_T. For every element T ∈𝒯_h, we denote by ρ_T>0, the supremum of diameters of inscribed balls. We assume that there exists a constant ω_0>0, independent of h>0, such that max_T∈𝒯_hh_Tρ_T^-1≤ω_0. The smallest such constant is called the chunkiness of {𝒯_h}_h>0. The sets 𝒮_h, 𝒮_h^i, 𝒮_h^∂, and 𝒩_h contain the sides, interior sides, boundary sides, and vertices, respectively, of the elements of 𝒯_h. We have the following relation between the average mesh-size and the number of vertices: h∼(𝒩_h)^-1/d . For k∈ℕ∪{0} and T∈𝒯_h, let 𝒫_k(T) denote the set of polynomials of maximal degree k on T. Then, for k∈ℕ∪{0} and l∈ℕ, the sets of continuous and  polynomial functions or vector fields, respectively, are defined by ℒ^k(𝒯_h)^l {v_h∈ L^∞(Ω;ℝ^l)| v_h|_T∈𝒫_k(T)^l for all T∈𝒯_h} , 𝒮^k(𝒯_h)^l ℒ^k(𝒯_h)^l∩ C^0(Ω;ℝ^l) . For every T∈𝒯_h and S∈𝒮_h, let x_T1/d+1∑_z∈𝒩_h∩ Tz∈ T and x_S1/d∑_z∈𝒩_h∩ Sz∈ S denote the barycenters of T and S, respectively. The (local) L^2-projection operator Π_h L^1(Ω;ℝ^l)→ℒ^0(𝒯_h)^l onto element-wise constant functions or vector fields, respectively, for every v∈ L^1(Ω), is defined by Π_h v|_T_Tv dx for all T∈𝒯_h. The element-wise gradient ∇_hℒ^1(𝒯_h)^l→ℒ^0(𝒯_h)^l× d, for every v_h∈ℒ^1(𝒯_h)^l, is defined by ∇_hv_h|_T∇(v_h|_T) for all T∈𝒯_h. §.§.§ Crouzeix–Raviart element 11mm The Crouzeix–Raviart finite element space, cf. <cit.>, consists of affine functions that are continuous at the barycenters of inner element sides, i.e.,[Here, for every inner side S∈𝒮_h^i, v_h_S v_h|_T_+-v_h|_T_- on S, where T_+, T_-∈𝒯_h satisfy ∂ T_+∩∂ T_-=S, and for every boundary S∈𝒮_h^∂, v_h_S v_h|_T on S, where T∈𝒯_h satisfies S⊆∂ T.] 𝒮^1,cr(𝒯_h){v_h∈ℒ^1(𝒯_h)|v_h_S(x_S)=0 for all S∈𝒮_h^i} . Note that 𝒮^1,cr(𝒯_h)⊆ BV(Ω). More precisely, for every v_h∈𝒮^1,cr(𝒯_h), cf. <cit.>, we have that Dv_h=∇_ hv_h⊗dx+v_h⊗ds|_𝒮_h with ∇_ hv_h⊗dx⊥v_h⊗ds|_𝒮_h, so that, cf. <cit.>, |Dv_h|(Ω)= ∇_ hv_h_L^1(Ω;ℝ^d)+v_h_L^1(𝒮_h) . The Crouzeix–Raviart finite element space with homogeneous Dirichlet boundary condition on Γ_D is defined by 𝒮^1,cr_D(𝒯_h){v_h∈𝒮^1,cr(𝒯_h)| v_h(x_S)=0 for all S∈𝒮_h∩Γ_D} . 
A basis for 𝒮^1,cr(𝒯_h) is given by functions φ_S∈𝒮^1,cr(𝒯_h), S∈𝒮_h, satisfying the   φ_S(x_S')=δ_S,S' for all S,S'∈𝒮_h. A basis for 𝒮^1,cr_D(𝒯_h) is given by φ_S∈𝒮^1,cr_D(𝒯_h), S∈𝒮_h∖Γ_D. §.§.§ Raviart–Thomas element The Raviart–Thomas finite element space, cf. <cit.>, consists of element-wise affine vector fields that have continuous constant normal components on inner element sides, i.e.,[Here, for every inner side S∈𝒮_h^i, y_h· n_Sy_h|_T_+· n_T_++y_h|_T_-· n_T_- on S, where T_+, T_-∈𝒯_h satisfy ∂ T_+∩∂ T_-=S and for every T∈𝒯_h, n_T∂ T→𝕊^d-1 denotes the outward unit normal vector field to T, and for every boundary side S∈𝒮_h^∂, y_h· n_Sy_h|_T· n on S, where T∈𝒯_h satisfies S⊆∂ T and n∂Ω→𝕊^d-1 denotes the outward unit normal vector field to Ω.] ℛT^0(𝒯_h){y_h∈ℒ^1(𝒯_h)^d| y_h|_T· n_T= on ∂ T for all T∈𝒯_h , y_h· n_S=0 on S for all S∈𝒮_h^i} . Note that ℛT^0_N(𝒯_h)⊆ W^∞_N(;Ω). The Raviart–Thomas finite element space with homogeneous normal component boundary condition on Γ_N is defined by ℛT^0_N(𝒯_h){y_h∈ℛT^0(𝒯_h)| y_h· n=0 on Γ_N} . A basis for ℛT^0(𝒯_h) is given by vector fields ψ_S∈ℛT^0(𝒯_h), S∈𝒮_h, satisfying   ψ_S|_S'· n_S'=δ_S,S' on S' for all S'∈𝒮_h, where n_S is the unit normal vector on S pointing from T_- to T_+ if T_+∩ T_-=S∈𝒮_h. A basis for ℛT^0_N(𝒯_h) is given by ψ_S∈ℛT^0_N(𝒯_h), S∈𝒮_h∖Γ_N. §.§.§ Discrete integration-by-parts formula For every v_h∈𝒮^1,cr_D(𝒯_h) and y_h∈ℛT^0_N(𝒯_h), it holds the discrete integration-by-parts formula (∇_hv_h,Π_h y_h)_Ω=-(Π_h v_h, y_h)_Ω . In addition, cf. <cit.>, if a vector field y_h∈ℒ^0(𝒯_h)^d satisfies for every v_h∈𝒮^1,cr_D(𝒯_h) (y_h,∇_h v_h)_Ω=0 , then, choosing v_h=φ_S∈𝒮^1,cr_D(𝒯_h) for all S∈𝒮_h∖Γ_D, one finds that y_h∈ℛT^0_N(𝒯_h). Similarly, if a function v_h∈ℒ^0(𝒯_h) satisfies for every y_h∈ℛT^0_N(𝒯_h) (v_h, y_h)_Ω=0 , then, choosing y_h=ψ_S∈ℛT^0_N(𝒯_h) for all S∈𝒮_h∖Γ_N, one finds that v_h∈𝒮^1,cr_D(𝒯_h). In other words, we have the orthogonal (with respect to the inner product (·,·)_Ω) decompositions ℒ^0(𝒯_h)^d =(|_ℛT^0_N(𝒯_h))⊕∇_h(𝒮^1,cr_D(𝒯_h)) , ℒ^0(𝒯_h) =(∇_h|_𝒮^1,cr_D(𝒯_h))⊕ (ℛT^0_N(𝒯_h)) . § EXACT A POSTERIORI ERROR ESTIMATION FOR CONVEX MINIMIZATION PROBLEMS §.§ Continuous convex minimization problem and continuous convex duality Let ϕℝ^d→ℝ∪{+∞} be a proper, convex, and lower semi-continuous function and let ψΩ×ℝ→ℝ∪{+∞} be a (Lebesgue) measurable function such that for a.e. x∈Ω, the function ψ(x,·)Ω×ℝ→ℝ∪{+∞} is proper, convex, and lower semi-continuous. We examine the convex minimization problem that seeks for a function u∈ W^1,p_D(Ω), p∈ (1,∞), that is minimal for the functional I W^1,p_D(Ω)→ℝ∪{+∞}, for every v∈W^1,p_D(Ω) defined by I(v)∫_Ωϕ(∇ v) x+∫_Ωψ(·,v) x . In what follows, we refer to the minimization of I W^1,p_D(Ω) →ℝ∪{+∞} as the primal problem. A (Fenchel) dual problem to the minimization of (<ref>) consists in the maximization of the functional DL^p'(Ω;ℝ^d)→ℝ∪{ -∞}, for every y∈ L^p'(Ω;ℝ^d) defined by D(y) -∫_Ωϕ^*( y) x-F^*( y) , where the distributional divergence L^p'(Ω;ℝ^d)→ (W^1,p_D(Ω))^* for every y∈L^p'(Ω;ℝ^d) and v∈W^1,p_D(Ω) is defined by ⟨ y,v⟩_W^1,p_D(Ω) -(y,∇ v)_Ω and F^*L^p'(Ω)→ℝ∪{±∞} denotes the Fenchel conjugate to F L^p(Ω)→ℝ∪{+∞}, defined by F(v)∫_Ωψ(·,v) x for all v∈ L^p(Ω). Note that for every y∈W^p'_N(;Ω), we have that ⟨ y,v⟩_W^1,p_D(Ω)=( y, v)_Ω for all v∈ W^1,p_D(Ω) and, thus, the representation D(y)=-∫_Ωϕ^*( y) x-∫_Ωψ^*(·, y) x . A weak duality relation applies, cf. <cit.>, i.e., inf_v∈ W^1,p_D(Ω)I(v)≥sup_y∈ L^p'(Ω;ℝ^d)D(y) . 
In what follows, we always assume that ϕℝ^d→ℝ∪{+∞} and ψΩ×ℝ→ℝ∪{+∞} are such that (<ref>) admits at least one minimizer u∈ W^1,p_D(Ω), called the primal solution, (<ref>) at least one maximizer z∈ L^p'(Ω;ℝ^d), called the dual solution, and that a strong duality relation applies, i.e., I(u)= D(z) . By the Fenchel–Young inequality (cf. (<ref>)), (<ref>) is equivalent to the convex optimality relations z·∇ u =ϕ^*(z)+ϕ(∇ u) Ω , z ∈∂ F(u) . If z∈W^p'_N(;Ω), then the convex optimality relation (<ref>) is equivalent to z u=ψ^*(·, z)+ψ(·, u) Ω . If ϕ∈ C^1(ℝ^d), then, by the Fenchel–Young identity (cf. (<ref>)), (<ref>) is equivalent to z= Dϕ(∇ u) L^p'(Ω;ℝ^d) . Similarly, if z∈W^p'_N(;Ω) and ψ(x,·)∈ C^1(ℝ) for a.e. x∈Ω, then (<ref>) is equivalent to z=Dψ(·, u) L^p'(Ω) . The convex duality relations (<ref>)–(<ref>) motivate introducing the primal-dual error estimator η^2 W^1,p_D(Ω)× L^p'(Ω;ℝ^d)→ [0,+∞], for every v∈ W^1,p_D(Ω) and y∈ L^p'(Ω;ℝ^d) defined by 5mm η^2(v,y) I(v)-D(y) . Note that the sign of the estimator (<ref>) is a consequence of the weak duality relation (<ref>). Together with the optimal convexity measures (cf. Definition <ref>) ρ_I^2 W^1,p_D(Ω)^2→ [0,+∞] of (<ref>) at a primal solution u∈ W^1,p_D(Ω) and ρ_-D^2L^p'(Ω;ℝ^d)→ [0,+∞] of the negative of (<ref>) at a dual solution z∈L^p'(Ω;ℝ^d), we arrive at the following explicit a posteriori error representation.3mm The following statements apply: (i) For every v∈ W^1,p_D(Ω) and y∈L^p'(Ω;ℝ^d), we have that ρ^2_I(v,u)+ρ^2_-D(y,z)=η^2(v,y) . (ii) For every v∈ W^1,p_D(Ω) and y∈W^p'_N(;Ω), we have that η^2(v,y) = ∫_Ωϕ(∇ v)-∇ v· y+ϕ^*(y) dx+∫_Ωψ(·, v)- v div y+ψ^*(·,div y) dx . (i) By the Fenchel–Young inequality (<ref>), the integrands in the representation (<ref>), are non-negative and, thus, suitable as local refinement indicators. (ii) Appealing to Remark <ref>, from Theorem <ref> (i), for every v∈ W^1,p_D(Ω) and y∈ L^p'(Ω;ℝ^d), it follows that σ_I^2(v,u)+σ_-D^2(y,z)≤η^2(v,y). ad (i). Due to I(u)=D(z), cf. (<ref>), Definition <ref>, and (<ref>), for every v∈ W^1,p_D(Ω) and y∈ L^p'(Ω;ℝ^d), we have that ρ^2_I(v,u)+ρ^2_-D(y,z)=I(v)-I(u)+D(z)-D(y)=η^2(v,y) . ad (ii). Using (<ref>), (<ref>), and integration-by-parts, we conclude that (<ref>) applies. (i) In the , cf. <cit.>, i.e., ϕ1/p|·|^p∈ C^1(ℝ), p∈ (1,∞), and ψ ((t,x)^⊤↦ -f(x)t)Ω×ℝ→ℝ, where f∈ L^p'(Ω), cf. <cit.>, we have that ρ^2_I(v,u)∼F(∇ v)-F(∇ u)_L^2(Ω;ℝ^d)^2 , ρ^2_-D(y,z)∼F^*(y)-F^*(z)_L^2(Ω;ℝ^d)^2 , where F,F^*ℝ^d→ℝ^d for every a∈ℝ^d are defined by F(a)| a|^p-2/2a and F^*(a)| a|^p'-2/2a. (ii) In the , cf. <cit.>, i.e., ϕ1/2|·|^2∈ C^1(ℝ) and ψ ((t,x)^⊤↦ -f(x)t+I_χ(x)(t))Ω×ℝ→ℝ∪{+∞}, where f∈ L^2(Ω) and χ∈ W^1,2(Ω) with χ≤ 0 on Γ_D, cf. <cit.>, where I_χ(x)(t) 0 if t≥ 0 and I_χ(x)(t) +∞ else, we have that ρ^2_I(v,u)= 12∇ v-∇ u_L^2(Ω;ℝ^d)^2+⟨ -Λ,v-u⟩_W^1,2_D(Ω) , ρ^2_-D(y,z)≥12y-z_L^2(Ω;ℝ^d)^2 , where Λ∈ (W^1,2_D(Ω))^* is defined by ⟨Λ,v⟩_W^1,2_D(Ω) (f,v)_Ω-(∇ u,∇ v)_Ω for all v∈ W^1,2_D(Ω). (iii) In an , cf. <cit.>, i.e., ϕζ∘|·|∈ C^1(ℝ), where ζ(0) 0, ζ'(t)μ_2 t if t∈ [0,t_1], ζ'(t)μ_2 t_1 if t∈ [t_1,t_2], and ζ'(t)μ_1 t if t∈ [t_2,+∞) for some 0<t_1<t_2 and 0<μ_1<μ_2 with t_1μ_2=t_2μ_1, and ψ ((t,x)^⊤↦ -f(x)t)Ω×ℝ→ℝ, where f∈ L^2(Ω), cf. <cit.>, we have that ρ^2_I(v,u)≥12μDϕ(∇ v)-Dϕ(∇ u)_L^2(Ω;ℝ^d)^2 , ρ^2_-D(y,z)≥12μy-z_L^2(Ω;ℝ^d)^2 . (iv) In the , cf. <cit.>, i.e., ϕ|·|∈ C^0(ℝ) and ψ ((t,x)^⊤↦α/2(t-g(x))^2)Ω×ℝ→ℝ, where g∈ L^2(Ω), cf. <cit.>, we have that ρ^2_I(v,u)≥α2v-u_L^2(Ω)^2 , ρ^2_-D(y,z)≥12α y- z_L^2(Ω)^2 . 
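For instance, the first bound in (iv) can be obtained from the strong convexity of the fidelity term. A short formal sketch (not part of the original argument; J(v) ≔ ∫_Ωϕ(∇ v) dx denotes the gradient part of the energy): expanding the square yields I(v)-I(u) = J(v)-J(u) + α(u-g,v-u)_Ω + α/2‖v-u‖_L^2(Ω)^2, and, since 0∈(∂ I)(u) and the quadratic part is continuous, there exists ξ∈(∂ J)(u) with ξ = -α(u-g) in L^2(Ω), so that J(v)-J(u) ≥ (ξ,v-u)_Ω = -α(u-g,v-u)_Ω; together, ρ_I^2(v,u)=I(v)-I(u) ≥ α/2‖v-u‖_L^2(Ω)^2. The second bound in (iv) follows analogously from the 1/(2α)-strong convexity of the negative dual functional with respect to div y.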
Since the dual problem to the minimization of the negative of (<ref>), in turn, consists in the maximization of the negative of (<ref>), the roles of the primal problem and the dual problem may be interchanged. An advantage of Theorem <ref> consists in the fact that it yields reliable and efficient a posteriori error estimators for both the primal problem and the dual problem, i.e.,7.5mm Theorem <ref> also shows that for each y∈ L^p'(Ω;ℝ^d), the estimator η^2_I,y (v↦η^2(v,y)) W^1,p_D(Ω)→ [0,+∞] satisfies ρ^2_I(v,u)+ρ^2_-D(y,z)=η^2_I,y(v) , and for each v∈ W^1,p_D(Ω), the estimator η^2_-D,v (y↦η^2(v,y)) L^p'(Ω;ℝ^d)→ [0,+∞]  ρ^2_I(v,u)+ρ^2_-D(y,z)=η^2_-D,v(y) . For the a posteriori error estimators (<ref>) and (<ref>) for being numerically practicable, it is necessary to have a computationally cheap way to obtain sufficiently accurate approximation of the dual solution (for (<ref>)) and/or of the primal solution (for (<ref>)), respectively. In Section <ref>, resorting to (discrete) convex duality relations between a non-conforming Crouzeix–Raviart approximation of the primal problem and a Raviart–Thomas approximation of the dual problem, we arrive at discrete reconstruction formulas, called generalized Marini formula, cf. <cit.>.9mm §.§ Discrete convex minimization problem and discrete convex duality Let ψ_hΩ×ℝ→ℝ∪{+∞} denote a suitable approximation[We refrain from being too precise concerning what we mean with approximation to allow for more flexibility. Assumptions on both ϕℝ^d→ℝ∪{+∞} and ψ_hΩ×ℝ→ℝ∪{+∞}, h>0, that imply, e.g., Γ-convergence results can be found in <cit.>.] of ψΩ×ℝ→ℝ∪{+∞} such that ψ_h(·,t)∈ℒ^0(𝒯_h) for all t∈ℝ and for a.e. x∈Ω, ψ_h(x,·)Ω×ℝ→ℝ∪{+∞} is a proper, convex, and lower semi-continuous functional. Then, we examine the (discrete) convex minimization problem that seeks for a function u_h^cr∈𝒮^1,cr_D(𝒯_h) that is minimal for the functional I_h^cr𝒮^1,cr_D(𝒯_h)→ℝ∪{+∞}, for every v_h∈𝒮^1,cr_D(𝒯_h) defined by I_h^cr(v_h)∫_Ωϕ(∇_ h v_h) x+∫_Ωψ_h(·,Π_h v_h) x . In what follows, we refer the minimization of I_h^cr𝒮^1,cr_D(𝒯_h)→ℝ∪{+∞} to as the discrete primal problem. In <cit.>, it is shown that the corresponding (Fenchel) dual problem to the minimization of (<ref>) consists in the maximization of D_h^rtℛT^0_N(𝒯_h)→ℝ∪{-∞}, for every y_h∈ℛT^0_N(𝒯_h) defined by D_h^rt(y_h)-∫_Ωϕ^*(Π_h y_h) x-∫_Ωψ_h^*(·, y_h) x . A discrete weak duality relation, cf. <cit.>, applies inf_v_h∈𝒮^1,cr_D(𝒯_h)I_h^cr(v_h)≥sup_y_h∈ℛT^0_N(𝒯_h)D_h^rt(y_h) . We will always assume that ϕℝ^d→ℝ∪{+∞} and ψ_hΩ×ℝ→ℝ∪{+∞} are such that (<ref>) admits at least one minimizer u_h^cr∈𝒮^1,cr_D(𝒯_h), called the discrete primal solution, (<ref>) admits at least one maximizer z_h^rt∈ℛT^0_N(𝒯_h), called the discrete dual solution, and that a discrete strong duality relation applies, i.e., I_h^cr(u_h^cr)=D_h^rt(z_h^rt) . By the Fenchel–Young identity (cf. (<ref>)), (<ref>) is equivalent to the discrete convex optimality relations Π_h z_h^rt·∇_ h u_h^cr =ϕ^*(Π_hz_h^rt)+ϕ(∇_ h u_h^cr) a.e. in Ω , z_h^rt Π_hu_h^cr =ψ_h^*(·, z_h^rt)+ψ_h(·,Π_hu_h^cr) a.e. in Ω . If ϕ∈ C^1(ℝ^d), then, by the Fenchel–Young identity (cf. (<ref>)), (<ref>) is equivalent to Π_h z_h^rt=Dϕ(∇_ h u_h^cr) in ℒ^0(𝒯_h)^d , and if ϕ^*∈ C^1(ℝ^d), then, by the Fenchel–Young identity (cf. (<ref>)), (<ref>) is equivalent to ∇_ h u_h^cr=Dϕ^*(Π_h z_h^rt) in ℒ^0(𝒯_h)^d . Similarly, if ψ_h(x,·)∈ C^1(ℝ) for a.e. x∈Ω, then (<ref>) is equivalent to z_h^rt=Dψ_h(·,Π_hu_h^cr) in ℒ^0(𝒯_h) , and if ψ_h^*(x,·)∈ C^1(ℝ) for a.e. 
x∈Ω, then (<ref>) is equivalent to Π_hu_h^cr=Dψ_h^*(·, z_h^rt) in ℒ^0(𝒯_h) . The relations (<ref>)–(<ref>) motivate the following discrete recontruction formulas for a discrete dual solution z_h^rt∈ℛT^0_N(𝒯_h) from a discrete primal solution u_h^cr∈𝒮^1,cr_D(𝒯_h) and vice versa, called generalized Marini formulas, cf. <cit.>. The following statements apply: (i) If ϕ∈ C^1(ℝ^d) and ψ_h(x,·)∈ C^1(ℝ) for a.e. x∈Ω, then, given a minimizer u_h^cr∈𝒮^1,cr_D(𝒯_h) of (<ref>), a maximizer z_h^rt∈ℛT^0_N(𝒯_h) of (<ref>) is given via z_h^rt= Dϕ(∇_ h u_h^cr)+Dψ_h(·, Π_hu_h^cr)/d(_ℝ^d-Π_h_ℝ^d) in ℛT^0_N(𝒯_h) , a discrete strong duality relation applies, i.e., (<ref>). (ii) If ϕ^*∈ C^1(ℝ^d) and ψ_h^*(x,·)∈ C^1(ℝ) for a.e. x∈Ω, then, given a maximizer z_h^rt∈ℛT^0_N(𝒯_h) of (<ref>), a minimizer u_h^cr∈𝒮^1,cr_D(𝒯_h) of (<ref>) is given via u_h^cr = Dψ_h^*(·, z_h^rt)+ Dϕ^*(Π_h z_h^rt)·(_ℝ^d-Π_h_ℝ^d) in 𝒮^1,cr_D(𝒯_h) , a discrete strong duality relation applies, i.e., (<ref>). It is possible to derive reconstructions formulas similar to (<ref>) and (<ref>) under weak conditions, e.g., resorting to a regularization argument (cf. Proposition <ref>) or given discrete Lagrange multipliers (cf. <cit.>). ad (i). See <cit.>.5mm ad (ii). By definition, it holds u_h^cr∈ℒ^1(𝒯_h) and the discrete convex optimality relation (<ref>) is satisfied. Since z_h^rt∈ℛT^0_N(𝒯_h) is maximal for (<ref>) as well as ϕ^*∈ C^1(ℝ^d) and ψ_h^*(x,·)∈ C^1(ℝ) for a.e. x∈Ω, for every y_h∈ℛT^0_N(𝒯_h), we have that (Dϕ^*(Π_h z_h^rt),Π_hy_h)_Ω+(Dψ_h^*(·, z_h^rt), y_h)_Ω=0 . In particular, (<ref>) implies that Dϕ^*(Π_h z_h^rt)∈ ((|_ℛT^0_N(𝒯_h)))^⊥. Appealing to <cit.>, it holds ((|_ℛT^0_N(𝒯_h)))^⊥=∇_h(𝒮^1,cr_D(𝒯_h)). Therefore, there exists v_h∈𝒮^1,cr_D(𝒯_h) such that ∇_h v_h= Dϕ^*(Π_h z_h^rt) in ℒ^0(𝒯_h)^d . Hence, for every y_h∈ℛT^0_N(𝒯_h), resorting to the discrete integration-by-parts formula (<ref>), (<ref>), (<ref>), and (<ref>), we find that (Π_hv_h-Π_h u_h^cr, y_h)_Ω =- (Dϕ^*(Π_h z_h^rt),Π_hy_h)_Ω-(Dψ_h^*(·, z_h^rt), y_h)_Ω=0 . In other words, for every y_h∈ℛT^0_N(𝒯_h), we have that ( v_h-u_h^cr, y_h)_Ω= (Π_h v_h-Π_h u_h^cr, y_h)_Ω=0 . On the other hand, we have that ∇_ h(v_h-u_h^cr)=0 in ℒ^0(𝒯_h)^d, i.e., v_h-u_h^cr∈ℒ^0(𝒯_h). Therefore, (<ref>) in conjunction with (<ref>) implies that v_h-u_h^cr∈ ( (ℛT^0_N(𝒯_h)))^⊥=(∇_h|_𝒮^1,cr_D(𝒯_h)). As a result, due to v_h∈𝒮^1,cr_D(𝒯_h), we conclude that u_h^cr∈𝒮^1,cr_D(𝒯_h) with ∇_ h u_h^cr =Dϕ^*(Π_h z_h^rt) in ℒ^0(𝒯_h)^d , Π_hu_h^cr =Dψ_h^*(·, z_h^rt) in ℒ^0(𝒯_h) . By the Fenchel–Young identity, cf. (<ref>), (<ref>) is equivalent to Π_h z_h^rt·∇_ h u_h^cr =ϕ^*(Π_hz_h^rt)+ϕ(∇_ h u_h^cr) a.e. in Ω , z_h^rt Π_hu_h^cr =ψ_h^*(·, z_h^rt)+ψ_h(·,Π_hu_h^cr) a.e. in Ω . Eventually, adding (<ref>)_1 and (<ref>)_2, subsequently, integration with respect to x∈Ω, resorting to the discrete integration-by-parts formula (<ref>), and using the definitions (<ref>) and (<ref>), we arrive at I_h^cr(u_h^cr)=D_h^rt(z_h^rt), which, appealing to the discrete weak duality relation (<ref>), implies that u_h^cr∈𝒮^1,cr_D(𝒯_h) is minimal for (<ref>). § APPLICATION TO THE RUDIN–OSHER–FATEMI (ROF) MODEL In this section, we transfer the concepts derived in Section <ref> to the non-differentiable Rudin–Osher–Fatemi (ROF) model, cf. <cit.>. The approximation of the ROF model has been investigated by numerous authors: A priori error estimates has been derived in <cit.>. 
A posteriori error estimates and adaptivity results can be found in <cit.>.7mm §.§ The continuous Rudin–Osher–Fatemi (ROF) model Given a function g∈ L^2(Ω), i.e., the noisy image, and a constant parameter α>0, the fidelity parameter the Rudin–Osher–Fatemi (ROF) model, cf. <cit.>, consists in the minimization of the functional I BV(Ω)∩ L^2(Ω)→ℝ, for every v∈ BV(Ω)∩ L^2(Ω) defined by I(v)|v| (Ω)+α2v-g^2_L^2(Ω) . In <cit.>, it has been established that there exists a unique minimizer u∈ BV(Ω)∩ L^2(Ω) of (<ref>). Appealing to <cit.> or <cit.>, the  (Fenchel) dual problem to the minimization of (<ref>) consists in the maximization of the functional D W^2_N(;Ω) ∩ L^∞(Ω;ℝ^d)→ℝ∪{-∞}, for every y∈ W^2_N(;Ω) ∩ L^∞(Ω;ℝ^d) defined by D(y) -I_K_1(0)(y) -12α y+α g_L^2(Ω)^2+α2g_L^2(Ω)^2 , where I_K_1(0) L^∞(Ω;ℝ^d)→ℝ∪{∞} is defined by I_K_1(0)(y) 0 if y∈ L^∞(Ω;ℝ^d) with | y|≤ 1 a.e. in Ω and I_K_1(0)(y)∞ else. Apart from that, in <cit.>, it is shown that (<ref>) admits a maximizer z∈ W^2_N(;Ω)∩ L^∞(Ω;ℝ^d) and that a strong duality relation applies, i.e., I(u)=D(z) . Appealing to <cit.>, (<ref>) is equivalent to the convex optimality relations z =α (u-g) in L^2(Ω) , -(u, z)_Ω =|u|(Ω) . Next, if we introduce, by analogy with Section <ref>, the primal-dual error estimator η^2 BV(Ω)× (W^2_N(;Ω)∩ L^∞(Ω;ℝ^d))→ [0,+∞], for every v∈ BV(Ω) and y∈ W^2_N(;Ω)∩ L^∞(Ω;ℝ^d) defined by η^2(v,y) I(v)-D(y) , then the concepts of Section <ref> can be transferred to the ROF model.5mm The following statements apply: (i) For every v∈ BV(Ω) and y∈ W^2_N(;Ω)∩ L^∞(Ω;ℝ^d), we have that ρ^2_I(v,u)+ρ^2_-D(y,z)=η^2(v,y) . (ii) For every v∈ BV(Ω) and y∈ W^2_N(;Ω)∩ L^∞(Ω;ℝ^d), we have that η^2(v,y)= |Dv|(Ω)+( y,v)_Ω+12α y-α (v-g)_L^2(Ω)^2+I_K_1(0)(y) . ad (i). Due to I(u)=D(z), cf. (<ref>), Definition <ref>, and (<ref>), for every v∈ BV(Ω) and y∈ W^2_N(;Ω)∩ L^∞(Ω;ℝ^d), we have that ρ^2_I(v,u)+ρ^2_-D(y,z)=I(v)-I(u)+D(z)-D(y)=η^2(v,y) . ad (ii). For every v∈ BV(Ω) and y∈ W^2_N(;Ω)∩ L^∞(Ω;ℝ^d), we have that η^2(v,y) =|Dv|(Ω)+( y,v)_Ω+12αα (v-g)_L^2(Ω)^2 -12α2( y,α v)_Ω+12α y+α g_L^2(Ω)^2-α2g_L^2(Ω)^2^2+I_K_1(0)(y) =|Dv|(Ω)+( y,v)_Ω+α2v-g_L^2(Ω)^2 - 12α y-α (v-g)_L^2(Ω)^2-α2v-g_L^2(Ω)^2+I_K_1(0)(y) , which yields the claimed representation. Restricting the estimator (<ref>) to subclasses of BV(Ω) and W^2_N(;Ω)∩ L^∞(Ω;ℝ^d), , for which an appropriate integration-by-parts formula apply, e.g., (<ref>), it is possible to derive alternative representations of the estimator (<ref>), whose integrands are point-wise non-negative and, thus, suitable as local refinement indicators. (i) For every v∈ W^1,1(Ω) and y∈ W^2_N(;Ω)∩ L^∞(Ω;ℝ^d), by integration-by-parts, it holds η^2(v,y)=∇ v_L^1(Ω;ℝ^d)-(∇ v,y)_Ω+12α y+α (v-g)_L^2(Ω)^2+I_K_1(0)(y)≥ 0 . (ii) For every T∈𝒯_h, we define the local refinement indicator η_T^2 W^1,1(Ω)× W^2_N(;Ω)∩ L^∞(Ω;ℝ^d)→ [0,+∞] for every v∈ W^1,1(Ω) and y∈ W^2_N(;Ω)∩ L^∞(Ω;ℝ^d) by η^2_T,W(v,y)∇ v_L^1(T;ℝ^d)-(∇ v,y)_T+12α y+α (v-g)_L^2(T)^2+I_K_1(0)(y)≥ 0 . (iii) For every v_h∈𝒮^1,cr(Ω) and y_h∈ℛT^0_N(𝒯_h), by the representation of the total variation of Crouzeix–Raviart functions (<ref>) and the discrete integration-by-parts formula (<ref>), it holds η^2(v_h,y_h) =∇_ h v_h_L^1(Ω;ℝ^d)+v_h_L^1(𝒮_h)-(∇_ h v_h,Π_h y_h)_Ω +12α y_h+α (v_h-g)_L^2(Ω)^2+I_K_1(0)(y_h)≥ 0 . 
(iv) For every T∈𝒯_h, we define the discrete local refinement indicator η_T,CR^2𝒮^1,cr(𝒯_h)×ℛT^0_N(𝒯_h) → [0,+∞] for every v_h∈𝒮^1,cr(𝒯_h) and y_h∈ℛT^0_N(𝒯_h) by η^2_T,CR(v_h,y_h) ∇ v_h_L^1(T;ℝ^d)+∑_S∈𝒮_h;S⊆ Tv_h_L^1(S)-(∇_ h v_h,Π_h y_h)_T +12α y_h+α (v_h-g)_L^2(T)^2+I_K_1(0)(y_h)≥ 0 . We emphasize that the primal-dual error estimator (<ref>) and the representations (<ref>) or in Remark <ref> (i) & (ii) are well-known, cf. <cit.>. However, the combination of (<ref>) with the representation of the total variation of Crouzeix–Raviart functions (<ref>) and the discrete integration-by-parts formula (<ref>) in Remark <ref> (iii) & (iv), to the best of the authors' knowledge, is new and leads to significantly improved experimental convergence rates of the corresponding adaptive mesh-refinement procedure compared to the contributions <cit.>, cf. Section <ref>. 15mm §.§ The discretized Rudin–Osher–Fatemi (ROF) model Given g∈ L^2(Ω) and α>0, with g_hΠ_hg∈ℒ^0(𝒯_h), the discretized ROF model, proposed in <cit.>, consists in the minimization of I^cr_h𝒮^1,cr(𝒯_h)→ℝ, for every v_h∈𝒮^1,cr(𝒯_h) defined by I^cr_h(v_h)∇_hv_h_L^1(Ω;ℝ^d)+α2Π_hv_h-α g_h^2_L^2(Ω) . Note that the functional (<ref>) defines a non-conforming approximation of the functional (<ref>), as, e.g., jump terms of across inner element sides are not included. This, however, turned out to be essential in the derivation of optimal a priori error estimate in <cit.>. Since the functional (<ref>) is proper, strictly convex, weakly coercive, and lower semi-continuous, the direct method in the calculus of variations, cf. <cit.>, yields the existence of a unique minimizer u_h^cr∈𝒮^1,cr(𝒯_h), called the discrete primal solution. Appealing to <cit.>, the corresponding (Fenchel) dual problem to the minimization of (<ref>) consists in the maximization of the functional D_h^rtℛT^0_N(𝒯_h)→ℝ∪{-∞}, for every y_h∈ℛT^0_N(𝒯_h) defined by D_h^rt(y_h) -I_K_1(0)(Π_hy_h)-12α y_h+α g_h_L^2(Ω)^2+α2g_h_L^2(Ω)^2 . Appealing to Theorem <ref> (below), there exists a maximizer z_h^rt∈ℛT^0_N(𝒯_h) of (<ref>), which satisfies |Π_h z_h^rt|≤ 1 a.e. in Ω, a discrete strong duality relation applies, i.e., I^cr_h(u_h^cr)= D_h^rt(z_h^rt) , and the discrete convex optimality relations z_h^rt =α (Π_h u_h^cr-g_h) ℒ^0(𝒯_h) , Π_hz_h^rt·∇_h u_h^cr =|∇_h u_h^cr| ℒ^0(𝒯_h) . §.§ The regularized, discretized Rudin–Osher–Fatemi model To approximate a discrete minimizer u_h^cr∈𝒮^1,cr(𝒯_h) of (<ref>), it is common to approximate the modulus function by strictly convex regularizations. In this connection, for every ε∈ (0,1), we define a special regularization f_εℝ→ℝ_≥ 0 of the modulus function, for every t∈ℝ, via f_ε(t) (1-ε) | t|_ε , | t|_ε (t^2+ε^2)^1/2 , where |·|_εℝ→ℝ_≥ 0 is commonly referred to as the standard regularization.7mm Let us collect the most important properties of the regularization (<ref>). For every ε∈ (0,1), the following statements apply: (i) f_ε∈ C^1(ℝ) with f_ε'(0)=0. (ii) For every t∈ℝ, it holds -ε | t|-ε^2≤ f_ε(t)-| t|≤ε (1-| t|). (iii) For every t∈ℝ, it holds | f_ε'(t)|≤ 1-ε. (iv) For every s∈ℝ, it holds f_ε^*(s)-ε ((1-ε)^2-| s|^2)^1/2 if | s|≤ 1-ε +∞ if | s|> 1-ε . The main reason to consider the regularization f_εℝ→ℝ_≥ 0 instead of the standard regularization |·|_εℝ→ℝ_≥ 0 consists in the property (iii) in Lemma <ref>. This additional slope reduction enables us later to construct a sufficiently accurate, admissible approximation of the dual solution using an additional projection step, cf. Remark <ref> (below) and Section <ref> (below). ad (i). 
The claimed regularity f_ε∈ C^1(ℝ) is evident. Since for every t∈ℝ, it holds f_ε'(t)=(1-ε) t(t^2+ε^2)^1/2 , we have that f_ε'(0)=0. ad (ii). For every t∈ℝ, due to 0≤| t|_ε-| t|≤ε, we have that -ε | t|-ε^2≤ -ε | t|_ε≤ f_ε(t)-| t|=ε-ε | t|_ε≤ε (1-| t|) . ad (iii). Immediate consequence of the representation (<ref>). ad (iv). Due to <cit.>, for every s∈ℝ and ε∈ (0,1), we have that f_ε^*(s)=((1-ε) |·|_ε)^*(s)=(1-ε) (|·|_ε)^*(s1-ε) . Since for every s∈ℝ and ε∈ (0,1), it holds (|·|_ε)^*(s)= -ε (1-| s|^2)^1/2 if | s|≤ 1 +∞ if | s|> 1 , we conclude that the claimed representation of the Fenchel conjugate applies. Given g∈ L^2(Ω), α> 0, and an element-wise constant regularization parameter ε_h∈ℒ^0(𝒯_h) with 0<ε_h<1 a.e. in Ω, for g_hΠ_hg∈ℒ^0(𝒯_h), the regularized, discrete ROF model consists in the minimization of the functional I^cr_h,ε_h𝒮^1,cr(𝒯_h)→ℝ, for every v_h∈𝒮^1,cr(𝒯_h) defined by I^cr_h,ε_h(v_h)f_ε_h(|∇_hv_h|)_L^1(Ω)+α2Π_hv_h-g_h^2_L^2(Ω) . Since the functional (<ref>) is proper, strictly convex, weakly coercive, and lower semi-continuous, the direct method in the calculus of variations, cf. <cit.>, yields the existence of a unique minimizer u_h,ε_h^cr∈𝒮^1,cr(𝒯_h), called the regularized, discrete primal solution. Appealing to (f_ε_h∘|·|)^*=f_ε_h^*∘|·|, cf. <cit.>, the corresponding (Fenchel) dual problem to the minimization of (<ref>) consists in the maximization of functional D_h,ε_h^rtℛT^0_N(𝒯_h)→ℝ∪{-∞}, for every y_h∈ℛT^0_N(𝒯_h) defined by D_h,ε_h^rt(y_h) -∫_Ωf_ε_h^*(|Π_hy_h| ) dx-12α y_h+α g_h_L^2(Ω)^2+α2g_h_L^2(Ω)^2 . The following proposition clarifies the well-posedness of the dual regularized, discretized ROF model, i.e., the existence of a maximizer of (<ref>). It also yields a discrete reconstruction formula for a maximizer of (<ref>) from a minimizer of (<ref>) and proves discrete strong duality. The following statements apply: (i) A discrete weak duality relation applies, i.e., inf_v_h∈𝒮^1,cr_D(𝒯_h)I_h,ε_h^cr(v_h)≥sup_y_h∈ℛT^0_N(𝒯_h)D_h,ε_h^rt(y_h) . (ii) The discrete flux z_h^rt∈ℒ^1(𝒯_h), defined via the generalized Marini formula z_h,ε_h^rtf_ε_h'(|∇_h u_h,ε_h^cr|)|∇_h u_h,ε_h^cr|∇_h u_h,ε_h^cr+αΠ_h u_h,ε_h^cr-g_hd(_ℝ^d-Π_h_ℝ^d) , satisfies z_h,ε_h^rt∈ℛT^0_N(𝒯_h) and the discrete convex optimality relations z_h,ε_h^rt =α (Π_hu_h,ε_h^cr-g_h) in ℒ^0(𝒯_h) , Π_h z_h,ε_h^rt =f_ε_h'(|∇_ h u_h,ε_h^cr|)|∇_ h u_h,ε_h^cr|∇_ h u_h,ε_h^cr in ℒ^0(𝒯_h)^d . (iii) The discrete flux z_h^rt∈ℛT^0_N(𝒯_h) is a maximizer of (<ref>) and discrete strong duality applies, i.e., I^cr_h,ε_h(u_h,ε_h^cr)=D_h,ε_h^rt(z_h,ε_h^rt) . Note that, by the Fenchel–Young identity, cf. <cit.>, (<ref>) is equivalent to Π_h z_h,ε_h^rt·∇_h u_h,ε_h^cr =f_ε_h^*(|Π_h z_h,ε_h^rt| )+f_ε (|∇_h u_h,ε_h^cr|) in ℒ^0(𝒯_h) . Appealing to Lemma <ref> (iii), we have that |Π_h z_h,ε_h^rt|≤ 1-ε_h a.e. in Ω. Therefore, if Π_hu_h,ε_h^cr-g_h_L^∞(Ω)≤ c_0 for some c_0>0, which can be expected by discrete maximum principles, then, choosing ε_hα c_0/dh, yields that z_h,ε_h^rt_L^∞(Ω;ℝ^d)≤ 1. However, choices like ε_h∼ h let us expect convergence rates not better than 𝒪(h^1/2), cf. Proposition <ref> (i) (below). In order to allow for the convergence rate 𝒪(h), one needs to choose ε_h∼ h^2. But, in this case, we cannot guarantee that z_h,ε_h^rt_L^∞(Ω;ℝ^d)≤ 1, so that we instead consider the scaled vector field z_h,ε_h^rt z_h,ε_h^rt(max{1,z_h,ε_h^rt_L^∞(Ω;ℝ^d)})^-1∈ℛT^0_N(𝒯_h), which is still a sufficiently accurate approximation of the dual solution, as indicated by the numerical experiments, cf. Section <ref>. ad (i). 
Using element-wise that f_ε_h=f_ε_h^**, the definition of the convex conjugate, cf. (<ref>), and the discrete integration-by-parts formula (<ref>), we find that inf_v_h∈𝒮^1,cr_D(𝒯_h)I_h,ε_h^cr(v_h)=inf_v_h∈𝒮^1,cr_D(𝒯_h)f_ε_h^**(|∇_ h v_h|)_L^1(Ω)+α2Π_h v_h-g_h_L^2(Ω)^2 = inf_v_h∈𝒮^1,cr_D(𝒯_h)sup_y_h∈ℒ^0(𝒯_h)^d-∫_Ωf_ε_h^*(|y_h |) dx+(y_h,∇_ h v_h)_Ω+α2Π_h v_h-g_h_L^2(Ω)^2 ≥ inf_v_h∈𝒮^1,cr_D(𝒯_h)sup_y_h∈ℛT^0_N(𝒯_h)-∫_Ωf_ε_h^*(|Π_h y_h |) dx-( y_h,Π_h v_h)_Ω+α2Π_h v_h-g_h_L^2(Ω)^2 ≥ sup_y_h∈ℛT^0_N(𝒯_h)-∫_Ωf_ε_h^*(|Π_h y_h |) dx-sup_v_h∈ℒ^0(𝒯_h)( y_h,v_h)_Ω-α2v_h-g_h_L^2(Ω)^2 = sup_y_h∈ℛT^0_N(𝒯_h)-∫_Ωf_ε_h^*(|Π_h y_h |) dx-12α y_h+α g_h_L^2(Ω)^2+α2g_h_L^2(Ω)^2 = sup_y_h∈ℛT^0_N(𝒯_h)D_h,ε_h^rt(y_h) , which is the claimed discrete weak duality relation. ad (ii). By Lemma <ref>, the minimality of u_h,ε_h^cr∈𝒮^1,cr(𝒯_h) for (<ref>), for every v_h∈𝒮^1,cr(𝒯_h), yields that (f_ε_h'(|∇_ h u_h,ε_h^cr| )∇_ h u_h,ε_h^cr|∇_ h u_h,ε_h^cr|,∇_ h v_h)_Ω+α (Π_hu_h,ε_h^cr-g_h,Π_h v_h)_Ω=0 . By definition, the discrete flux z_h,ε_h^rt∈ℒ^1(𝒯_h)^d, defined by (<ref>), satisfies the discrete convex optimality condition (<ref>) and (z_h,ε_h^rt|_T)=α (Π_hu_h,ε_h^cr-g_h)|_T in T for all T∈𝒯_h. Choosing v_h=1∈𝒮^1,cr(𝒯_h) in (<ref>), we find that ∫_Ωα (Π_hu_h,ε_h^cr-g_h) dx=0. Hence, since for Γ_D=∅ the divergence operator ℛT^0_N(𝒯_h)→ℒ^0(𝒯_h)/ℝ is surjective, there exists y_h∈ℛT^0_N(𝒯_h) such that y_h=α (Π_hu_h,ε_h^cr-g_h) in ℒ^0(𝒯_h). Then, we have that ((z_h,ε_h^rt-y_h)|_T)=0 in T for all T∈𝒯_h, i.e., z_h,ε_h^rt-y_h∈ℒ^0(𝒯_h)^d. In addition, for every v_h∈𝒮^1,cr(𝒯_h), it holds (Π_h y_h,∇_ h v_h)_Ω =-( y_h,Π_h v_h)_Ω =-α (Π_hu_h,ε_h^cr-g_h,Π_h v_h)_Ω =(f_ε_h'(|∇_ h u_h,ε_h^cr| )∇_ h u_h,ε_h^cr|∇_ h u_h,ε_h^cr|,∇_ h v_h)_Ω =(Π_h z_h,ε_h^rt,∇_ h v_h)_Ω . In other words, for every v_h∈𝒮^1,cr(𝒯_h), it holds (y_h-z_h,ε_h^rt,∇_ h v_h)_Ω=(Π_h y_h-Π_h z_h,ε_h^rt,∇_ h v_h)_Ω=0 , i.e., y_h-z_h,ε_h^rt∈∇_ h(𝒮^1,cr_D(𝒯_h))^⊥. By the decomposition (<ref>), we have that ∇_ h(𝒮^1,cr_D(𝒯_h))^⊥=(|_ℛT^0_N(𝒯_h))⊆ℛT^0_N(𝒯_h). As a result, it holds y_h-z_h,ε_h^rt∈ℛT^0_N(𝒯_h). Due to y_h∈ℛT^0_N(𝒯_h), we conclude that z_h,ε_h^rt∈ℛT^0_N(𝒯_h). In particular, now from (z_h,ε_h^rt|_T)=α (Π_hu_h,ε_h^cr-g_h)|_T in T for all T∈𝒯_h, it follows the discrete optimality condition (<ref>). ad (iii). Using (<ref>), (<ref>), and the discrete integration-by-parts formula (<ref>), we find that I_h,ε_h^cr(u_h,ε_h^cr) = f_ε_h(|∇_ h u_h,ε_h^cr|)_L^1(Ω)+α2Π_h u_h,ε_h^cr-g_h_L^2(Ω)^2 =-∫_Ωf_ε_h^*(|Π_h z_h,ε_h^rt|) dx+(Π_h z_h,ε_h^rt,∇_ h u_h,ε_h^cr)_Ω+12α z_h,ε_h^rt_L^2(Ω)^2 =-∫_Ωf_ε_h^*(|Π_h z_h,ε_h^rt|) dx-( z_h,ε_h^rt,Π_hu_h,ε_h^cr)_Ω+12α z_h,ε_h^rt_L^2(Ω)^2 =-∫_Ωf_ε_h^*(|Π_h z_h,ε_h^rt|) dx-1α( z_h,ε_h^rt, z_h,ε_h^rt+α g_h)_Ω+12α z_h,ε_h^rt_L^2(Ω)^2 =-∫_Ωf_ε_h^*(|Π_h z_h,ε_h^rt|) dx-12α z_h,ε_h^rt+α g_h_L^2(Ω)^2 =D_h,ε_h^rt(z_h,ε_h^rt) , which is the claimed discrete strong duality relation and, thus, appealing to the discrete weak duality relation (<ref>), proves the maximality of z_h,ε_h^rt∈ℛT^0_N(𝒯_h) for (<ref>). The following proposition describes the approximative behavior the regularized, discretized ROF problem towards the (unregularized) discretized ROF problem, given uniform convergence (to zero) of the element-wise constant regularization parameter ε_h∈ℒ^0(𝒯_h). In what follows, in the convergence ε_h_L^∞(Ω)→ 0, the average mesh-size h>0 is always fixed.2mm If ε_h_L^∞(Ω)<1, then the following statements apply: (i) It holds α2Π_h u_h,ε_h^cr-Π_hu_h^cr_L^2(Ω)^2 ≤ε_h_L^∞(Ω)1-ε_h_L^∞(Ω) (α2 g_L^2(Ω)^2+2 |Ω|). 
(ii) z_h,ε_h^rt→α (Π_hu_h^cr-g_h) in ℒ^0(𝒯_h) (ε_h_L^∞(Ω)→ 0). (iii) f_ε_h^*(|Π_h z_h,ε_h^rt| )→ 0 in ℒ^0(𝒯_h) (ε_h_L^∞(Ω)→ 0). (iv) f_ε_h (|∇_h u_h,ε_h^cr|)→∇_h u_h^cr in ℒ^0(𝒯_h) (ε_h_L^∞(Ω)→ 0). ad (i). Using both the strong convexity of I_h^cr𝒮^1,cr(𝒯_h)→ℝ∪{+∞} and Lemma <ref> (ii), we obtain α2Π_h u_h,ε_h^cr-Π_hu_h^cr_L^2(Ω)^2 ≤ I_h^cr(u_h,ε_h^cr)-I_h^cr(u_h^cr) ≤11-ε_h_L^∞(Ω) I_h,ε_h^cr(u_h,ε_h^cr)+ε_h_L^∞(Ω)^21-ε_h_L^∞(Ω)|Ω| -I_h^cr(u_h^cr) ≤11-ε_h_L^∞(Ω) I_h,ε_h^cr(u_h^cr)+ε_h_L^∞(Ω)^21-ε_h_L^∞(Ω)|Ω|-I_h^cr(u_h^cr) ≤11-ε_h_L^∞(Ω) ( I_h^cr(u_h^cr) +2 ε_h_L^∞(Ω) |Ω|)-I_h^cr(u_h^cr) = ε_h_L^∞(Ω)1-ε_h_L^∞(Ω) (I_h^cr(u_h^cr)+2 |Ω|) . Since, by the minimality of u_h^cr∈𝒮^1,cr(𝒯_h) for (<ref>) and the L^2-stability of Π_h L^2(Ω)→ℒ^0(𝒯_h), it holds I_h^cr(u_h^cr)≤ I_h^cr(0)=α2g_h_L^2(Ω)^2≤α2g_L^2(Ω)^2 , from (<ref>) we conclude the claimed error estimate. ad (ii). From claim (i), it follows that Π_h u_h,ε_h^cr→Π_hu_h^cr in ℒ^0(𝒯_h) (ε_h_L^∞(Ω)→ 0) . Thus, using (<ref>), from z_h,ε_h^rt=α ( Π_h u_h,ε_h^cr-g_h) in ℒ^0(𝒯_h), cf. (<ref>), we conclude that z_h,ε_h^rt→α (Π_hu_h^cr-g_h) ℒ^0(𝒯_h) (ε_h_L^∞(Ω)→ 0) . ad (iii). Due to Π_h z_h,ε_h^rt=f_ε_h'(|∇_h u_h,ε_h^cr|)/|∇_h u_h,ε_h^cr|∇_h u_h,ε_h^cr and Lemma <ref> (iii), we have that |Π_h z_h,ε_h^rt| =| f_ε_h'(|∇_h u_h,ε_h^cr|)|≤ 1-ε_h a.e. in Ω . Therefore, using Lemma <ref> (iv) together with (<ref>), we conclude that . | f_ε_h^*(|Π_h z_h,ε_h^rt| )| = ε_h ((1-ε_h)^2-|Π_h z_h,ε_h^rt| ^2)^1/2 ≤ε_h (1-ε_h)≤ε_h } a.e. in Ω , which implies that f_ε_h^*(|Π_h z_h,ε_h^rt| )→ 0 in ℒ^0(𝒯_h) (ε_h_L^∞(Ω)→ 0). ad (iv). Due to (<ref>), (u_h,ε_h^cr)_ε_h_L^∞(Ω)→ 0⊆𝒮^1,cr(𝒯_h) is bounded. The finite-dimensionality of 𝒮^1,cr(𝒯_h) and the Bolzano–Weierstraß theorem yield a subsequence (u_h,ε_h'^cr)_ε_h'_L^∞(Ω)→ 0⊆𝒮^1,cr(𝒯_h) and a function ũ_h^cr∈𝒮^1,cr(𝒯_h) such that u_h,ε_h'^cr→ũ_h^cr in 𝒮^1,cr(𝒯_h) (ε_h'_L^∞(Ω)→ 0) . From (<ref>) it is readily derived that f_ε_h' (|∇_h u_h,ε_h'^cr|)→∇_hũ_h^cr in ℒ^0(𝒯_h) (ε_h'_L^∞(Ω)→ 0) . Consequently, for every v_h∈𝒮^1,cr(𝒯_h), we find that I_h^cr(ũ_h^cr) =lim_ε_h'_L^∞(Ω)→ 0I_h,ε_h'^cr(u_h,ε_h'^cr) ≤lim_ε_h'_L^∞(Ω)→ 0I_h,ε_h'^cr(v_h) =I_h^cr(v_h) . Thus, due to the uniqueness of u_h^cr∈𝒮^1,cr(𝒯_h) as a minimizer of (<ref>), we get ũ_h^cr=u_h^cr in 𝒮^1,cr(𝒯_h). Since this argumentation remains valid for each subsequence of (u_h,ε_h^cr)_ε_h_L^∞(Ω)→ 0⊆𝒮^1,cr(𝒯_h), the standard subsequence principle implies that f_ε_h (|∇_h u_h,ε_h^cr|)→∇_h u_h^cr in ℒ^0(𝒯_h) (ε_h_L^∞(Ω)→ 0). The approximation properties of the regularized, discrete ROF model (<ref>) (and (<ref>)) towards the (unregularized) discrete ROF model (<ref>) (and (<ref>)) enable us to transfer the discrete convex duality relations established in Proposition <ref>, which apply mainly due to the differentiability of the regularized, discrete ROF model, to the non-differentiable discrete ROF model. To the best of the authors' knowledge, the following discrete convex duality relations for the (unregularized) discrete ROF model (<ref>) seem to be new.7mm There exists a vector field z_h^rt∈ℛT^0_N(𝒯_h) with |Π_h z_h^rt|≤ 1 a.e. in Ω and the following properties: (i) For a not relabeled subsequence, it holds z_h,ε_h^rt→ z_h^rt in ℛT^0_N(𝒯_h) (ε_h_L^∞(Ω)→ 0) . (ii) There hold the following discrete convex optimality relations: z_h^rt =α (Π_h u_h^cr-g_h) in ℒ^0(𝒯_h) , Π_hz_h^rt·∇_h u_h^cr =|∇_h u_h^cr| in ℒ^0(𝒯_h) . (iii) The discrete flux z_h^rt∈ℛT^0_N(𝒯_h) is maximal for D_h^rtℛT^0_N(𝒯_h)→ℝ and discrete strong duality applies, i.e., I_h^cr(u_h^cr)=D_h^rt(z_h^rt) . ad (i). 
Due to Proposition <ref> (ii) and (<ref>), the sequence (z_h,ε_h^rt)_ε_h_L^∞(Ω)→ 0⊆ℛT^0_N(𝒯_h) is bounded. Thus, by the finite-dimensionality of ℛT^0_N(𝒯_h), the Bolzano–Weierstraß theorem yields a not relabeled subsequence and a vector field z_h^rt∈ℛT^0_N(𝒯_h) such that z_h,ε_h^rt→ z_h^rt in ℛT^0_N(𝒯_h) (ε_h_L^∞(Ω)→ 0) . Due to the continuity of Π_h L^1(Ω)→ℒ^0(𝒯_h) and ℛT^0_N(𝒯_h)↪ L^1(Ω), from (<ref>), we obtain Π_h z_h,ε_h^rt→Π_h z_h^rt in ℒ^0(𝒯_h) (ε_h_L^∞(Ω)→ 0) . From |Π_h z_h,ε_h^rt|≤ 1-ε_h a.e. in Ω, cf. (<ref>), and (<ref>), we obtain |Π_h z_h^rt|≤ 1 a.e. in Ω, i.e., I_K_1(0)(Π_h z_h^rt)=0 . ad (ii). Using Proposition <ref>, (<ref>), and (<ref>), we find that . z_h^rt =lim_ε_h_L^∞(Ω)→ 0 z_h,ε_h^rt =lim_ε_h_L^∞(Ω)→ 0α (Π_hu_h,ε_h^cr-g_h) =α (Π_h u_h^cr-g_h) } a.e. in Ω , as well as.Π_h z_h^rt·∇_h u_h^cr =lim_ε_h_L^∞(Ω)→ 0Π_h z_h,ε_h^rt·∇_h u_h,ε_h^cr =lim_ε_h_L^∞(Ω)→ 0f_ε_h^*(|Π_h z_h,ε_h^rt| )+f_ε_h(|∇_h u_h,ε_h^cr|) =|∇_h u_h^cr| } a.e. in Ω , i.e., the claimed discrete convex optimality conditions. ad (iii). Using Proposition <ref> and (<ref>), we find that I_h^cr(u_h^cr) =lim_ε_h_L^∞(Ω)→ 0I_h,ε_h^cr(u_h,ε_h^cr) =lim_ε_h_L^∞(Ω)→ 0D_h,ε_h^rt(z_h,ε_h^rt) =D_h^rt(z_h^rt) , i.e., the claimed discrete strong duality relation. § NUMERICAL EXPERIMENTS 5mm In this section, we review the theoretical findings of Section <ref> via numerical experiments. To compare approximations to an exact solution, we impose Dirichlet boundary conditions on Γ_D=∂Ω, though an existence theory is difficult to establish, in general. However, the concepts derived in Section <ref> carry over verbatimly with Γ_N=∅ provided that the existence of a minimizer is given. All experiments were conducted deploying the finite element software package (version 2019.1.0), cf. <cit.>. All graphics were generated using the library (version 3.5.1), cf. <cit.>, and the library (version 2023.4.4), cf. <cit.>. §.§ Implementation details regarding the optimization procedure All computations are based on the regularized, discrete ROF problem (<ref>). This is motivated by the fact that appealing to Proposition <ref> (i), in order to bound the error u-Π_h u_h^cr_L^2(Ω), it suffices to determine the error u-Π_h u_h,ε_h^cr_L^2(Ω). The iterative minimization of (<ref>) is realized using a semi-implicit discretized L^2-gradient flow from <cit.> (see also <cit.>) modified with a residual stopping criterion guaranteeing the necessary accuracy in the optimization procedure. Appealing to <cit.>, the iterates u_h^k∈𝒮^1,cr_D(𝒯_h), k∈ℕ, the residuals r_h^k∈𝒮^1,cr_D(𝒯_h), k∈ℕ, generated by Algorithm <ref>, and the minimizer u_h,ε_h^cr∈𝒮^1,cr_D(𝒯_h) of (<ref>) satisfy u_h,ε_h^cr-u_h^k_L^2(Ω)≤ 2 r_h^k_L^2(Ω) . In consequence, if we choose as a stopping criterion that r_h^k^*_L^2(Ω)≤ε_stop^hc_stop h for k^*∈ℕ, where c_stop>0 does not depend on h>0, then, owing to Proposition <ref> (i) and (<ref>), we have that Π_h(u_h^cr-u_h^k^*)_L^2(Ω)^2≤ε_h_L^∞(Ω)1-ε_h_L^∞(Ω) (2 g_L^2(Ω)^2+8α |Ω|)+8 c_stop^2 h^2 . If ε_h_L^∞(Ω)≤ c_reg h^2, where c_reg∈ (0,1), then, we arrive at Π_h(u_h^cr-u_h^k^*)_L^2(Ω)=𝒪(h). Thus, to bound the error u-Π_hu_h^cr_L^2(Ω) experimentally, it is sufficient to compute u-Π_hu_h^k^*_L^2(Ω). The following proposition proves the well-posedness, stability, and convergence of Algorithm <ref>. Let the assumptions of Algorithm <ref> be satisfied and let ε_h∈ℒ^0(𝒯_h) such that ε_h>0 a.e. in Ω and ε_h_L^∞(Ω)<1. 
Then, the following statements apply: (i) Algorithm <ref> is well-posed, i.e., for every k∈ℕ, given the most-recent iterate u_h^k-1∈𝒮^1,cr_D(𝒯_h), there exists a unique iterate u_h^k∈𝒮^1,cr_D(𝒯_h) solving (<ref>). (ii) Algorithm <ref> is unconditionally strongly stable, i.e., for every L∈ℕ, it holds I_h,ε_h^cr(u_h^L)+τ∑_k=1^Ld_τ u_h^k_L^2(Ω)^2≤ I_h,ε_h^cr(u_h^0) . (iii) Algorithm <ref> terminates after a finite number of steps, i.e., there exists k^*∈ℕ such that r_h^k^*_L^2(Ω)≤ε_stop^h.6mm The proof of Proposition <ref> (ii) is essentially based on the following inequality. For every ε∈ (0,1) and a,b∈ℝ^d, it holds10mm f_ε'(| a|)| a| b·(b-a)≥ f_ε(| b|)-f_ε(| a|)+12f_ε'(| a|)| a|| b-a|^2 . Follows from <cit.>, since f_ε∈ C^1(ℝ_≥ 0) and (t↦ f_ε'(t)/t)∈ C^0(ℝ_≥ 0) is positive and non-decreasing for all ε∈ (0,1). ad (i). Since f_ε'(t)/t≥ 0 for all ε∈ (0,1) and t≥ 0, the  of Algorithm <ref> is a direct consequence of the Lax–Milgram lemma. ad (ii). Let L∈ℕ be arbitrary. Then, for every k∈{1,…,L}, choosing v_h=d_τ u_h^k∈𝒮^1,cr_D(𝒯_h) in (<ref>), we find that d_τ u_h^k_L^2(Ω)^2+(f_h,ε_h'(|∇_hu_h^k-1| )|∇_hu_h^k-1|∇_hu_h^k,∇_h d_τ u_h^k)_Ω+α (Π_hu_h^k-g_h,Π_h d_τ u_h^k)_Ω . Appealing to Lemma <ref> with a=∇_hu_h^k-1|_T∈ℝ^d and b=∇_h u_h^k|_T∈ℝ^d applied for all T∈𝒯_h, for every k∈{1,…,L}, we have that f_h,ε_h'(|∇_hu_h^k-1| )|∇_hu_h^k-1|∇_hu_h^k·∇_h d_τ u_h^k≥ d_τ f_h,ε_h(|∇_hu_h^k| ) a.e. in Ω . In addition, since d_τ g_h=0, for every k∈{1,…,L}, we have that (Π_hu_h^k-g_h)Π_h d_τ u_h^k =(Π_hu_h^k-g_h)d_τ(Π_h u_h^k-g_h) =d_τ2|Π_hu_h^k-g_h|^2 . Using (<ref>) and (<ref>) in (<ref>), for every k∈{1,…,L}, we arrive at d_τ u_h^k_L^2(Ω)^2+d_τ I_h,ε_h^cr(u_h^k)≤ 0 . Summation of (<ref>) with respect to k∈{1,…,L}, using ∑_k=1^Ld_τ I_h,ε_h^cr(u_h^k)=I_h,ε_h^cr(u_h^L)-I_h,ε_h^cr(u_h^0), yields the claimed stability estimate. ad (iii). Due to (i), we have that d_τ u_h^k_L^2(Ω)^2→ 0 (k→∞), i.e., by the finite-dimensionality of 𝒮^1,cr_D(𝒯_h) and the equivalence of norms, it holds u_h^k-u_h^k-1→ 0 in 𝒮^1,cr_D(𝒯_h) (k→∞) . In addition, due to (i), we have that I_h,ε_h^cr(u_h^k)≤ I_h,ε_h^cr(u_h^0), which, using Lemma <ref>, implies that (u_h^k)_k∈ℕ⊆𝒮^1,cr_D(𝒯_h) is bounded. Due to the finite-dimensionality of 𝒮^1,cr_D(𝒯_h), the -straß theorem yields a subsequence (u_h^k_l)_l∈ℕ⊆𝒮^1,cr_D(𝒯_h) and a function ũ_h∈𝒮^1,cr_D(𝒯_h) such that u_h^k_l→ũ_h in 𝒮^1,cr_D(𝒯_h) (l→∞) . Due to (<ref>), from (<ref>), we deduce that u_h^k_l-1→ũ_h in 𝒮^1,cr_D(𝒯_h) (l→∞) . As a result, using (<ref>)–(<ref>), by passing for l→∞ in (<ref>), for every v_h∈𝒮^1,cr_D(𝒯_h), we obtain (f_h,ε_h'(|∇_hũ_h| )|∇_hũ_h|∇_hũ_h ,∇_hv_h )_Ω+α (Π_hũ_h-g_h,Π_hv_h)_Ω=0 , and, by uniqueness, ũ_h=u_h,ε_h^cr. Hence, using (<ref>) and (<ref>), for every v_h∈𝒮^1,cr_D(𝒯_h), we obtain (r_h^k_l,v_h)_Ω =(f_h,ε_h'(|∇_hu_h^k_l| )|∇_hu_h^k_l|∇_hu_h^k_l,∇_hv_h )_Ω+α (Π_hu_h^k_l-g_h,Π_hv_h)_Ω →(f_h,ε_h'(|∇_hu_h,ε_h^cr| )|∇_hu_h,ε_h^cr|∇_hu_h,ε_h^cr ,∇_hv_h )_Ω+α (Π_hu_h,ε_h^cr-g_h,Π_hv_h)_Ω=0 (l→∞) , i.e., r_h^k_l⇀ 0 in 𝒮^1,cr_D(𝒯_h) (l→∞), and, thus, by the finite-dimensionality of 𝒮^1,cr_D(𝒯_h), r_h^k_l→ 0 in 𝒮^1,cr_D(𝒯_h) (l→∞), which implies that r_h^k_l→ 0 in L^2(Ω) (l→∞). As this  remains valid for each subsequence of (r_h^k)_k∈ℕ⊆𝒮^1,cr_D(𝒯_h), the standard convergence principle yields that r_h^k→ 0 in L^2(Ω) (k→∞). In particular, there exists k^*∈ℕ such that r_h^k^*_L^2(Ω)≤ε^h_stop. 
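Algorithm <ref> is, in essence, a lagged-diffusivity iteration: freezing f_{ε_h}'(|∇_h u_h^{k-1}|)/|∇_h u_h^{k-1}| at the previous iterate turns each gradient-flow step into a single linear solve. The following one-dimensional sketch (ours) illustrates this structure with a P1 finite-difference discretization, a lumped mass matrix, natural boundary conditions, a fixed regularization parameter and a simplified stopping rule; it is an illustration of the scheme, not the Crouzeix–Raviart implementation used below:

```python
# Simplified 1D sketch (ours) of the semi-implicit L2-gradient flow: the diffusivity
# f_eps'(|grad u^{k-1}|)/|grad u^{k-1}| is frozen at the previous iterate, so each step only
# requires solving a linear system. eps is a fixed illustrative stand-in for the mesh-dependent
# eps_h of the paper, and the stopping rule below replaces the residual-based criterion.
import numpy as np

n, alpha, tau, eps = 200, 10.0, 1.0, 1e-4
x = np.linspace(-1.0, 1.0, n)
h = x[1] - x[0]
g = (np.abs(x) < 0.5).astype(float)                    # characteristic-function datum

def weight(du):
    # f_eps'(t)/t = (1 - eps)/sqrt(t^2 + eps^2) for f_eps(t) = (1 - eps) sqrt(t^2 + eps^2)
    return (1.0 - eps) / np.sqrt(du**2 + eps**2)

u = np.zeros(n)
for k in range(1, 201):
    w = weight(np.diff(u) / h)                          # lagged (frozen) diffusivity, one value per cell
    A = np.diag(np.full(n, (1.0 / tau + alpha) * h))    # lumped mass contributions of d_tau u and alpha*u
    for i in range(n - 1):                              # cell-wise stiffness contributions K(w)
        A[i, i] += w[i] / h
        A[i + 1, i + 1] += w[i] / h
        A[i, i + 1] -= w[i] / h
        A[i + 1, i] -= w[i] / h
    u_prev, u = u, np.linalg.solve(A, (u / tau + alpha * g) * h)
    if np.sqrt(h) * np.linalg.norm(u - u_prev) / tau < 1e-6:
        break                                           # simplified stopping rule based on ||d_tau u^k||
```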
§.§ Implementation details regarding the adaptive mesh refinement procedure 8mm Before we present numerical experiments, we briefly outline the details of the implementations regarding the adaptive mesh refinement procedure. In general, we follow the adaptive algorithm, cf. <cit.>: (i) The regularized, discrete primal solution u_i^cr∈𝒮^1,cr_D(𝒯_i) in step (Solve'Solve') is computed using the semi-implicit discretized L^2-gradient flow, cf. Algorithm <ref>, for fixed step-size τ=1.0, stopping criterion ε_stop^h_ih_i/√(20), and initial condition u_i^0=0∈𝒮_D^1,cr(𝒯_i). Appealing to Proposition <ref> (ii), Algorithm <ref> is unconditionally strongly stable, so that employing the fixed step-size τ=1.0 is a reasonable choice. The stopping criterion ε_stop^h_ih_i/√(20) ensures (cf. the argumentation below Algorithm <ref>) that the final iterate u_h_i^k^*∈𝒮^1,cr_D(𝒯_i) is a sufficiently accurate approximation of the discrete primal solution, in the sense that its accuracy does not violate the best possible linear convergence rate, cf. Remark <ref> (below). (ii) As an approximation u_i^cr∈𝒮^1,cr_D(𝒯_i) with u_i^cr=0 on ∂Ω, we employ u_i^cr u_i^cr if u_i^cr=0 on ∂Ω , I_k^∂ u_i^cr else , where the operator I_i^∂𝒮^1,cr(𝒯_i)→𝒮^1,cr_D(𝒯_i) for every v_h_i∈𝒮^1,cr(𝒯_i) is defined by I_i^∂v_i∑_S∈𝒮_h_i;S∩∂Ω=∅v_h_i(x_S) φ_S . (iii) Note that the particular choices in (ii) are only due to the imposed homogeneous Dirichlet boundary condition. In the case Γ_D=∅, the choice u_i^cru_i^cr∈𝒮^1,cr(𝒯_i) is always admissible. (iv) If not otherwise specified, we employ the parameter θ=1/2 in (Estimate'Mark'). (v) To find the set ℳ_i⊆𝒯_i in step (Mark'Mark'), we deploy the Dörfler marking strategy, cf. <cit.>. (vi) The (minimal) conforming refinement of 𝒯_i with respect to ℳ_i in step (Refine'Refine') is  by deploying the red-green-blue-refinement algorithm, cf. <cit.>. (vii) For the construction of the adaptively modified regularization parameter ε_i∈ℒ^0(𝒯_i) in step (Refine'Refine'), we employ separately the following two cases: ε_iαd|Π_h_i-1 u_i-1^cr-g_h_i| h_i^2 + h_i^3 (locallocal) , h_i^2 (globalglobal) . §.§ Example with Lipschitz continuous dual solution We examine an example from <cit.>. In this example, we let Ω=(-1,1)^d, Γ_D=∂Ω, d∈{2,3}, r=1/2, α =10, and g=χ_B_r^d(0)∈ BV(Ω)∩ L^∞(Ω). Then, the primal solution u∈ BV(Ω)∩ L^∞(Ω) and a dual solution z∈ W^2(;Ω)∩ L^∞(Ω;ℝ^d), for a.e. x∈Ω are defined by u(x) (1-dα r) g(x) , z(x) -xr | x| < r , -rx| x|^d | x|≥ r . Note that z∈ W^1,∞(Ω;ℝ^d), so that, appealing to <cit.>, uniform mesh-refinement (i.e., θ=1 in Algorithm <ref>) is expected to yield the quasi-optimal convergence rate 𝒪(h^1/2). 2D Case. The coarsest triangulation 𝒯_0 of Figure <ref> (initial triangulation of Algorithm <ref>) consists of 16 halved squares. More precisely, Figure <ref> displays the triangulations 𝒯_i, i∈{0,15,20,25}, generated by Algorithm <ref> using either the adaptively modified ε_i∈ℒ^0(𝒯_i), cf. (locallocal), or the global choice ε_i h_i^2, cf. (globalglobal). For both choices, a refinement towards the circle ∂ B_r^2(0), i.e., the jump set J_u of the exact solution u∈ BV(Ω)∩ L^∞(Ω), cf. (<ref>), is reported. 
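Step (Mark) of the above algorithm admits a particularly short implementation: elements are sorted by decreasing indicator and marked greedily until their share of the total estimated error reaches θ. A minimal sketch (ours), with placeholder indicator values:

```python
# Minimal sketch (ours) of the Doerfler marking used in step (Mark): greedily mark elements
# with the largest indicators until they account for a fraction theta of the estimated error.
import numpy as np

def doerfler_mark(eta2, theta=0.5):
    order = np.argsort(eta2)[::-1]                       # elements sorted by decreasing eta_T^2
    csum = np.cumsum(eta2[order])
    m = int(np.searchsorted(csum, theta * csum[-1])) + 1
    return order[:m]                                     # indices of the marked elements M_i

eta2 = np.random.default_rng(1).random(100)              # placeholder local refinement indicators
marked = doerfler_mark(eta2, theta=0.5)
assert eta2[marked].sum() >= 0.5 * eta2.sum()
```

Combined with the red-green-blue refinement of step (Refine), repeated marking of this kind produces the graded meshes reported in this and the following examples.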
This behavior is also seen in Figure <ref>, where the regularized, discrete primal solution u_15^cr∈𝒮^1,cr_D(𝒯_15), the (local) L^2-projection onto element-wise constant functions Π_h_15 u_15^cr∈ℒ^0(𝒯_15), and the (local) L^2-projections onto element-wise affine functions of the modulus of the regularized, discrete dual solution z_15^rt∈ℛT^0_N(𝒯_15) and of the projected regularized, discrete dual solution z_15^rt∈ℛT^0_N(𝒯_15) are plotted. Figure <ref>, in addition, shows that using the adaptively modified ε_i∈ℒ^0(𝒯_i), cf. (locallocal), the refinement is more concentrated at the jump set J_u of the exact solution u∈ BV(Ω)∩ L^∞(Ω), cf. (<ref>). However, in Figure <ref> it is seen that (locallocal) does not result in an improved error decay, but an error decay comparable to (globalglobal). In addition, Figure <ref> demonstrates that Algorithm <ref> improves the experimental convergence rate of about 𝒪(h^1/2) predicted by <cit.> for uniform mesh-refinement to the quasi-optimal rate 𝒪(h), cf. Remark <ref> (below). In addition, Figure <ref> indicates the primal-dual error estimator is reliable and efficient with respect to the error quantity ρ̃^2(u_i^cr,z_i^rt)α2u_i^cr-u^2_L^2(Ω)+12α z_i^rt- z^2_L^2(Ω) , i∈ℕ , which, appealing to Remark <ref> (iv), is a lower bound for sum of the optimal convexity measures. 7mm 3D Case. The initial triangulation 𝒯_0 of Algorithm <ref> consists of 27 cubes each divided into six tetrahedrons. Using either the adaptively modified ε_i∈ℒ^0(𝒯_i), cf. (locallocal), or the global choice ε_i h_i^2, cf. (globalglobal), we report similar results to the 2D case: for both choices, a refinement towards the sphere ∂ B_r^3(0), i.e., the jump set J_u of the exact solution u∈ BV(Ω)∩ L^∞(Ω), cf. (<ref>), is re-ported, which can be seen in Figure <ref>, where the regularized, discrete primal solution u_10^cr∈𝒮^1,cr_D(𝒯_10) and the (local) L^2-projection onto element-wise affine functions of the modulus of the regularized, discrete dual solution z_10^rt∈ℛT^0_N(𝒯_10) are plotted. Figure <ref> shows that the adaptive Algorithm <ref> improves the experimental convergence rate of about 𝒪(h^1/2) predicted by <cit.> for uniform mesh-refinement to the quasi-optimal rate 𝒪(h), cf. Remark <ref> (below). 12.5mm In one dimension, the L^2-best-approximation error of the sign function on quasi-uniform partitions is of order 𝒪(h^1/2), cf. <cit.>. More generally, using that the intersection BV(Ω) ∩ L^∞(Ω) is contained in fractional Sobolev spaces W^s,2(Ω) for all s<1/2, cf. <cit.>, one cannot expect a higher convergence rate than 𝒪(h^1/2) for generic, essentially bounded functions of bounded variation. For triangulations that are graded towards the jump sets of certain discontinuous functions with a quadratic grading strength, i.e., the local mesh-size satisfies h_T ∼ h^2 for all elements T∈𝒯_h at the discontinuity set, with the average mesh-size h∼(𝒩_h)^-1/d, a linear convergence rate 𝒪(h) has been established in <cit.>. Since our error estimates not only bound squared L^2-errors but also control squares of L^p-norms of non-linear error quantities involving derivatives, cf. , a higher convergence rate than linear cannot be expected. In view of these aspects, the linear convergence rate 𝒪(h) for the devised adaptive strategy is quasi-optimal. §.§ Example without Lipschitz continuous dual solution 3mm We examine an example from <cit.>. In this example, we let Ω=(-1.5,1.5)^2, Γ_D=∂Ω, r=1/2, α =10, and g=χ_B_r^2(re_1)-χ_B_r^2(-re_1)∈ BV(Ω)∩ L^∞(Ω). 
Then, the primal solution u∈ BV(Ω)∩ L^∞(Ω) and a dual solution z∈ W^2(;Ω)∩ L^∞(Ω;ℝ^2), for a.e. x∈Ω are defined by u(x) (1-2α r) g(x) , z(x)∓x∓ r e_1r | x∓ r e_1| < r , ∓r(x∓ r e_1)| x∓ r e_1|^2 | x∓ r e_1|≥ r . Note that z∉ W^1,∞(Ω;ℝ^2), so that we cannot refer to <cit.> in order to expect uniform mesh-refinement to yield the convergence rate 𝒪(h^1/2). However, since z|_Ω^±∈ W^1,∞(Ω^±;ℝ^2), where Ω^+Ω∩ (ℝ_>0×ℝ) and Ω^-Ω∩ (ℝ_<0×ℝ), and since the coarsest triangulation 𝒯_0 of Figure <ref> and, hence, also all resulting refinements 𝒯_i, i∈ℕ, of 𝒯_0 resolve J_zΩ∩ ({0}×ℝ), i.e., the jump set of z∈ W^2(;Ω)∩ L^∞(Ω;ℝ^2), in the sense that J_z⊆⋃_S∈𝒮_h_iS for all i∈ℕ, referring to <cit.>, we can expect uniform mesh-refinement to yield the convergence rate 𝒪(h^1/2). The coarsest triangulation 𝒯_0 of Figure <ref> (initial triangulation of Algorithm <ref>) consists of 16 halved squares. More precisely, Figure <ref> displays the triangulations 𝒯_i, i∈{0,15,20,25}, generated by Algorithm <ref> using either the adaptively modified ε_i∈ℒ^0(𝒯_i), cf. (locallocal), or the global choice ε_i h_i^2, cf. (globalglobal). For both choices, a refinement towards ∂ B_r^2(re_1)∪∂ B_r^2(-re_1), i.e., the jump set J_u of the exact solution u∈ BV(Ω)∩ L^∞(Ω), cf. (<ref>), is reported. This behavior is also seen in Figure <ref>, where the regularized, discrete primal solution u_15^cr∈𝒮^1,cr_D(𝒯_15), the (local) L^2-projection onto element-wise constant functions Π_h_15 u_15^cr∈ℒ^0(𝒯_15), and the (local) L^2-projections onto element-wise affine functions of the modulus of the regularized, discrete dual solution z_15^rt∈ℛT^0_N(𝒯_15) and of the scaled regularized, discrete dual solution z_15^rt∈ℛT^0_N(𝒯_15) are plotted. Figure <ref>, in addition, shows that employing the adaptively modified regularization , cf. (locallocal), the refinement is more concentrated at the jump set J_u of the exact solution u∈ BV(Ω)∩ L^∞(Ω), cf. (<ref>). However, in Figure <ref> it can be seen that (locallocal) does not result in an improved error decay, but an error decay comparable to (globalglobal). In addition, Figure <ref> demonstrates that Algorithm <ref> improves the experimental convergence rate of about 𝒪(h^1/2) predicted by <cit.> for uniform mesh-refinement to the quasi-optimal rate 𝒪(h), cf.  <ref>. In addition, Figure <ref> indicates the primal-dual error estimator is both reliable and efficient with respect to the error quantity (<ref>). 7mm §.§ Example with Lipschitz continuous primal solution and Lipschitz continuous dual solution We examine an example from <cit.>. In this example, we let Ω=(-1.5,1.5)^2, Γ_D=∂Ω, α =10, s(t)√(3t) and r(t)1/2√(1-4t) for t=0.1, and g∈ BV(Ω)∩ L^∞(Ω) for a.e. x∈Ω, be defined by2mm g(x) 1 +2-α(s(t)^2+t)/s(t) if | x|≤ s(t) , 1 +1-α(| x|^2+t)/| x| if s(t)<| x|≤ r(t) , 0 else . Then, the primal solution u∈ BV(Ω)∩ L^∞(Ω) and a dual solution z∈ W^2(;Ω)∩ L^∞(Ω;ℝ^2) with | z|≤ 1 a.e. in Ω, for a.e. x∈Ω are defined by u(x) 1 - s(t)^2+t/s(t) if | x|≤ s(t) , 1 -| x|^2+t/| x| if s(t)<| x|≤ r(t) , 0 else , z(x) -x/s(t) if | x|≤ s(t) , -x/| x| if s(t)<| x|≤ r(t) , -xr(t)/| x|^2 else . Note that z∈W^1,∞(Ω;ℝ^2), so that, appealing to <cit.>, uniform mesh-refinement is expected to yield the quasi-optimal convergence rate 𝒪(h^1/2). The coarsest triangulation 𝒯_0 of Figure <ref> (initial triangulation of Algorithm <ref>) consists of 16 halved squares. More precisely, Figure <ref> displays the triangulations 𝒯_i, i∈{0,5,10,15}, generated by Algorithm <ref> employing either ε_i∈ℒ^0(𝒯_i), cf. (locallocal), or ε_i h_i^2, cf. 
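The experimental convergence rates quoted in these examples are obtained by comparing errors, respectively estimator values, on successive adaptively refined meshes, with the average mesh-size h_i ∼ 𝒩_i^{-1/d} as in Remark <ref>. A small sketch (ours) of this computation, with placeholder numbers:

```python
# Sketch (ours): experimental order of convergence (EOC) with respect to the average mesh-size
# h_i ~ N_i^(-1/d); the degree-of-freedom counts and error values below are placeholders.
import numpy as np

d = 2
N = np.array([1.0e3, 4.0e3, 1.6e4, 6.4e4])        # degrees of freedom on successive meshes
err = np.array([8.0e-2, 4.0e-2, 2.0e-2, 1.0e-2])  # error or estimator values on these meshes

h = N ** (-1.0 / d)                               # average mesh-sizes
eoc = np.log(err[1:] / err[:-1]) / np.log(h[1:] / h[:-1])
print(eoc)                                        # values near 1 correspond to the linear rate O(h)
```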
(globalglobal). For both choices, a refinement mainly towards and on the set {|∇ u| >0} is reported. This is also seen in Figure <ref>, where the regularized, discrete primal solution u_15^cr∈𝒮^1,cr_D(𝒯_10), the (local) L^2-projection onto element-wise constant functions Π_h_10 u_10^cr∈ℒ^0(𝒯_10), and the (local) L^2-projections onto element-wise affine functions of the modulus of the regularized, discrete dual solution z_10^rt∈ℛT^0_N(𝒯_10) and of the scaled regularized, discrete dual solution z_10^rt∈ℛT^0_N(𝒯_10) are plotted. Figure <ref> shows that employing the adaptively modified regularization parameter, cf. (locallocal), the refinement takes place at and on the set {|∇ u| >0}. However, in Figure <ref>, again, it can be seen that (locallocal) does not result in an improved error decay, but an error decay comparable to (globalglobal). In addition, Figure <ref> demonstrates that Algorithm <ref> improves the experimental convergence rate of about 𝒪(h^1/2) predicted by <cit.> for uniform mesh-refinement to the quasi-optimal rate 𝒪(h), cf. Remark <ref>. In addition, Figure <ref> indicates the primal-dual error estimator is both reliable and efficient with respect to the error quantity (<ref>). 7mm §.§ Example without Dirichlet boundary condition and without exact solution We examine an example from <cit.>. In this example, we let Ω=(-1,1)^2, r=1/2, Γ_D=∅, α =100, and g=χ_[-r,r]^2∈ BV(Ω)∩ L^∞(Ω). Then, the primal solution and the dual solutions are not known. However, appealing to <cit.>, given the regularity of g∈ BV(Ω)∩ L^∞(Ω), we can expect the convergence rate 𝒪(h^1/4) using uniform mesh refinement. The coarsest triangulation 𝒯_0 of Figure <ref> (initial triangulation of Algorithm <ref>) consists of 16 halved squares. More precisely, Figure <ref> displays the triangulations 𝒯_i, i∈{0,15,20,25}, generated by Algorithm <ref> using either the adaptively modified ε_i∈ℒ^0(𝒯_i), cf. (locallocal), or the global choice ε_i h_i^2, cf. (globalglobal). For both choices, a refinement towards the square ∂ [-r,r]^2, i.e., the jump set J_g of the data g∈ BV(Ω)∩ L^∞(Ω) is reported. This behavior is also seen in Figure <ref>, where the regularized, discrete primal solution u_15^cr∈𝒮^1,cr_D(𝒯_15), the (local) L^2-projection onto element-wise constant functions Π_h_15 u_15^cr∈ℒ^0(𝒯_15), and the (local) L^2-projections onto element-wise affine functions of the modulus of the regularized, discrete dual solution z_15^rt∈ℛT^0_N(𝒯_15) and of the projected regularized, discrete dual solution z_15^rt∈ℛT^0_N(𝒯_15) are plotted. Figure <ref>, in addition, shows that using the adaptively modified ε_i∈ℒ^0(𝒯_i), cf. (locallocal), the refinement is, again, more concentrated at the jump set J_g of the data g∈ BV(Ω)∩ L^∞(Ω). However, in Figure <ref> it can be seen that (locallocal) does not result in an improved error decay, but an error decay comparable to (globalglobal). In addition, Figure <ref> demonstrates that Algorithm <ref> improves the experimental convergence rate of about 𝒪(h^1/4) predicted by <cit.> for uniform mesh-refinement to the value 𝒪(h^2/5). This, on the one hand, confirms the optimality of the a priori error estimates established in <cit.> and, on the other hand, appealing to <cit.>, let us expect that there exists no Lipschitz continuous dual solution to the given data g=χ_[-r,r]^2∈ BV(Ω)∩ L^∞(Ω). 
The reported reduced error decay of 𝒪(h^2/5) compared to <cit.>, where an error decay of 𝒪(h^1/2) is reported, might only be pre-asymptotic and due to slight accuracy losses resulting due to the global scaling step. This might be due to potential singularities of a dual solution located at the corners of the square ∂ [-r,r]^2, as indicated in Figure <ref>. Therefore, it is possible that the error decay 𝒪(h^1/2) in <cit.> may be reported after surpassing a potential pre-asymptotic regime. 10mm §.§ Numerical experiments with application to image processing In order to benchmark the performance of the proposed numerical scheme (cf. Algorithm <ref> and Algorithm <ref>) in a problem related to image processing, we examine a standard example from the field of image processing (cf. Section <ref>) and a new example (cf. Section <ref>). 11mm §.§.§ The Cameraman image We examine the cameraman image, which in a similar context has been considered in <cit.>. In this example, we let Ω (0,1)^2, Γ_D=∅, α=1e+4, and g∈ BV(Ω)∩ L^∞(Ω) a piece-wise constant function taking its values in the interval [0,1], representing the cameraman image on a uniform triangulation with 66.049 nodes, cf. Figure <ref>. The adaptive algorithm (cf. Algorithm <ref>), employed as coarsening strategy, reduces the number of nodes within 30 iteration steps to 25.059 nodes which corresponds to 38.0% of the initial number of nodes, which results in a squared L^2-error of u_30^cr-g_L^2(Ω)^2≈ 2.211e-3. The resulting coarsened image,  by u_30^cr∈𝒮^1,cr(𝒯_30), is shown in Figure <ref>. The underlying grid 𝒯_30 shown in Figure <ref> reveals the expected coarsening of the triangulation away from the edges. §.§.§ The Merle image 10mm We examine an image of Merle, the male cat of the second author. In this example, we let Ω (0,1)^2, Γ_D=∅, α=1e+4, and g∈ BV(Ω)∩ L^∞(Ω) a piece-wise constant function taking its values in the interval [0,1], representing the Merle image on a uniform triangulation with 140.625 nodes, cf. Figure <ref>. The adaptive algorithm (cf. Algorithm <ref>), employed as coarsening strategy, reduces the number of nodes within 30 iteration steps to 41.749 nodes which is 30.0% of the initial number of nodes, which results in a squared L^2-error of u_30^cr-g_L^2(Ω)^2≈ 2.162e-3. The resulting coarsened image, represented by u_30^cr∈𝒮^1,cr(𝒯_30), is shown in Figure <ref>. The underlying grid 𝒯_30 shown in Figure <ref> reveals the expected coarsening of the triangulation away from the edges. 5mm 10 AO00 M. Ainsworth and J. T. Oden, A posteriori error estimation in finite element analysis, Pure and Applied Mathematics (New York), Wiley-Interscience [John Wiley & Sons], New York, 2000. 10.1002/9781118032824. Bar12 S. Bartels, Total variation minimization with finite elements: convergence and iterative solution, SIAM J. Numer. Anal. 50 no. 3 (2012), 1162–1180. 10.1137/11083277X. Bar15 S. Bartels, Numerical methods for nonlinear partial differential equations, Springer Series in Computational Mathematics 47, Springer, Cham, 2015. 10.1007/978-3-319-13797-1. Bar21 S. Bartels, Nonconforming discretizations of convex minimization problems and precise relations to mixed methods, Comput. Math. Appl. 93 (2021), 214–229. 10.1016/j.camwa.2021.04.014. BDN18 S. Bartels, L. Diening, and R. H. Nochetto, Unconditional stability of semi-implicit discretizations of singular flows, SIAM J. Numer. Anal. 56 no. 3 (2018), 1896–1914. 10.1137/17M1159166. BKROF22 S. Bartels and A. 
Kaltenbach, Error estimates for total-variation regularized minimization problems with singular dual solutions, Numer. Math. 152 no. 4 (2022), 881–906. 10.1007/s00211-022-01324-w. BK22Obstacle S. Bartels and A. Kaltenbach, Error analysis for a Crouzeix-Raviart approximation of the obstacle problem, 2023. 10.48550/ARXIV.2302.01646. BM20 S. Bartels and M. Milicevic, Primal-dual gap estimators for a posteriori error analysis of nonsmooth minimization problems, ESAIM Math. Model. Numer. Anal. 54 no. 5 (2020), 1635–1660. 10.1051/m2an/2019074. BNS15 S. Bartels, R. H. Nochetto, and A. J. Salgado, A total variation diminishing interpolation operator and applications, Math. Comp. 84 no. 296 (2015), 2569–2587. 10.1090/mcom/2942. BTW21 S. Bartels, R. Tovey, and F. Wassmer, Singular solutions, graded meshes,and adaptivity for total-variation regularized minimization problems, ESAIM Math. Model. Numer. Anal. 56 no. 6 (2022), 1871–1888. 10.1051/m2an/2022056. BW21 S. Bartels and Z. Wang, Orthogonality relations of Crouzeix-Raviart and Raviart-Thomas finite element spaces, Numer. Math. 148 no. 1 (2021), 127–139. 10.1007/s00211-021-01199-3. bartels15 S. Bartels, Error control and adaptivity for a variational model problem defined on functions of bounded variation, Math. Comp. 84 no. 293 (2015), 1217–1240. 10.1090/S0025-5718-2014-02893-7. BC08 S. Bartels and C. Carstensen, A convergent adaptive finite element method for an optimal design problem, Numer. Math. 108 no. 3 (2008), 359–385. 10.1007/s00211-007-0122-x. BBHSVN23 L. Baumgärtner, R. Bergmann, R. Herzog, S. Schmidt, and J. Vidal-Núnez, Total generalized variation for piecewise constant functions on triangular meshes with applications in imaging, SIAM Journal on Imaging Sciences 16 no. 1 (2023), 313–339. 10.1137/22M1505281. BC11 H. H. Bauschke and P. L. Combettes, Convex analysis and monotone operator theory in hilbert spaces, in CMS Books in Mathematics, 2011. BW22 L. Baňas and A. Wilke, A posteriori estimates for the stochastic total variation flow, SIAM J. Numer. Anal. 60 no. 5 (2022), 2657–2680. 10.1137/21M1447982. BB20 F. Bertrand and D. Boffi, The Prager-Synge theorem in reconstruction based a posteriori error estimation, in 75 years of mathematics of computation, Contemp. Math. 754, Amer. Math. Soc., [Providence], RI, [2020] 2020, pp. 45–67. 10.1090/conm/754/15152. Braess13 D. Braess, Finite Elemente. Theorie, schnelle Löser und Anwendungen in der Elastizitätstheorie, 5th revised ed. ed., Springer-Lehrb. Mastercl., Berlin: Springer Spektrum, 2013 (German). 10.1007/978-3-642-34797-9. Brae09 D. Braess, An a posteriori error estimate and a comparison theorem for the nonconforming P_1 element, Calcolo 46 no. 2 (2009), 149–155. 2520373. 10.1007/s10092-009-0003-z. braides98 A. Braides, Approximation of free-discontinuity problems, Lecture Notes in Mathematics 1694, Springer-Verlag, Berlin, 1998. 10.1007/BFb0097344. bregman67 L. Brégman, The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming, USSR Computational Mathematics and Mathematical Physics 7 no. 3 (1967), 200–217. https://doi.org/10.1016/0041-5553(67)90040-7. CL15 C. Carstensen and D. J. Liu, Nonconforming FEMs for an optimal design problem, SIAM J. Numer. Anal. 53 no. 2 (2015), 874–894. 10.1137/130927103. CKNS08 J. Cascon, C. Kreuzer, R. Nochetto, and K. Siebert, Quasi-optimal convergence rate for an adaptive finite element method, SIAM J. Numer. Anal. 46 no. 5 (2008), 2524–2550. 10.1137/07069047X. CCMN08 V. 
Caselles, A. Chambolle, S. Moll, and M. Novaga, A characterization of convex calibrable sets in ℝ^N with respect to anisotropic norms, Ann. Inst. H. Poincaré Anal. Non Linéaire 25 no. 4 (2008), 803–832. 10.1016/j.anihpc.2008.04.003. CP20 A. Chambolle and T. Pock, Crouzeix-Raviart approximation of the total variation on simplicial meshes, J. Math. Imaging Vision 62 no. 6-7 (2020), 872–899. 10.1007/s10851-019-00939-3.5mm CR73 M. Crouzeix and P.-A. Raviart, Conforming and nonconforming finite element methods for solving the stationary Stokes equations. I, Rev. Française Automat. Informat. Recherche Opérationnelle Sér. Rouge 7 no. R-3 (1973), 33–75. Dac08 B. Dacorogna, Direct methods in the calculus of variations, second ed., Applied Mathematical Sciences 78, Springer, New York, 2008. DK08 L. Diening and C. Kreuzer, Linear convergence of an adaptive finite element method for the p-Laplacian equation, SIAM J. Numer. Anal. 46 no. 2 (2008), 614–638. 10.1137/070681508. DR07 L. Diening and M. Růžička, Interpolation operators in Orlicz-Sobolev spaces, Numer. Math. 107 no. 1 (2007), 107–129. 10.1007/s00211-007-0079-9. Doe96 W. Dörfler, A convergent adaptive algorithm for Poisson's equation, SIAM J. Numer. Anal. 33 no. 3 (1996), 1106–1124. 10.1137/0733054. ET99 I. Ekeland and R. Témam, Convex analysis and variational problems, english ed., Classics in Applied Mathematics 28, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 1999, Translated from the French. 10.1137/1.9781611971088. EG21 A. Ern and J. L. Guermond, Finite Elements I: Approximation and Interpolation, Texts in Applied Mathematics no. 1, Springer International Publishing, 2021. 10.1007/978-3-030-56341-7. FV04 F. Fierro and A. Veeser, A posteriori error estimators for regularized total variation of characteristic functions, SIAM J. Numer. Anal. 41 no. 6 (2003), 2032–2055. 10.1137/S0036142902408283. HK04 M. Hintermüller and K. Kunisch, Total bounded variation regularization as a bilaterally constrained optimization problem, SIAM J. Appl. Math. 64 no. 4 (2004), 1311–1333. 10.1137/S0036139903422784. Hun07 J. D. Hunter, Matplotlib: A 2d graphics environment, Computing in Science & Engineering 9 no. 3 (2007), 90–95. 10.1109/MCSE.2007.55. LW10 A. Logg and G. N. Wells, DOLFIN: automated finite element computing, ACM Trans. Math. Software 37 no. 2 (2010), Art. 20, 28. 10.1145/1731022.1731030. Mar85 L. D. Marini, An inexpensive method for the evaluation of the solution of the lowest order Raviart-Thomas mixed method, SIAM J. Numer. Anal. 22 no. 3 (1985), 493–496. 10.1137/0722029. vedo M. e. a. Musy, marcomusy/vedo: 2023.4.4, March 2023. 10.5281/zenodo.7734756. NSV00 R. H. Nochetto, G. Savaré, and C. Verdi, A posteriori error estimates for variable time-step discretizations of nonlinear evolution equations, Communications on Pure and Applied Mathematics 53 no. 5 (2000), 525–589. https://doi.org/10.1002/(SICI)1097-0312(200005)53:5<525::AID-CPA1>3.0.CO;2-M. OBGXY05 S. Osher, M. Burger, D. Goldfarb, J. Xu, and W. Yin, An iterative regularization method for total variation-based image restoration, Multiscale Modeling & Simulation 4 no. 2 (2005), 460–489. 10.1137/040605412. PraSyn47 W. Prager and J. L. Synge, Approximations in elasticity based on the concept of function space, Quart. Appl. Math. 5 (1947), 241–269. 10.1090/qam/25902. RT75 P.-A. Raviart and J. M. Thomas, A mixed finite element method for 2nd order elliptic problems, in Mathematical aspects of finite element methods (Proc. Conf., Consiglio Naz. 
delle Ricerche (C.N.R.), Rome, 1975), 1977, pp. 292–315. Lecture Notes in Math., Vol. 606. Repin18 S. Repin and J. Valdman, Error identities for variational problems with obstacles, ZAMM Z. Angew. Math. Mech. 98 no. 4 (2018), 635–658. 10.1002/zamm.201700105. Rep99 S. I. Repin, A posteriori error estimates for approximate solutions to variational problems with strongly convex functionals, J. Math. Sci. (New York) 97 no. 4 (1999), 4311–4328, Problems of mathematical physics and function theory. 10.1007/BF02365047. ROF92 L. I. Rudin, S. Osher, and E. Fatemi, Nonlinear total variation based noise removal algorithms, Phys. D 60 no. 1-4 (1992), 259–268, Experimental mathematics: computational issues in nonlinear science (Los Alamos, NM, 1991). 10.1016/0167-2789(92)90242-F. dr-nafsa M. Růžička and L. Diening, Non–Newtonian fluids and function spaces, in Nonlinear Analysis, Function Spaces and Applications, Proceedings of NAFSA 2006 Prague, 8, 2007, pp. 95–144. Tart07-book L. Tartar, An introduction to Sobolev spaces and interpolation spaces, Lecture Notes of the Unione Matematica Italiana 3, Springer, Berlin; UMI, Bologna, 2007. Ver13 R. Verfürth, A Posteriori Error Estimation Techniques for Finite Element Methods, Oxford University Press, 04 2013. 10.1093/acprof:oso/9780199679423.001.0001.9mm ZeiIII E. Zeidler, Nonlinear functional analysis and its applications. III, Springer-Verlag, New York, 1985, Variational methods and optimization, Translated from the German by Leo F. Boron. 10.1007/978-1-4612-5020-3.
http://arxiv.org/abs/2307.10052v1
20230714084336
FaIRGP: A Bayesian Energy Balance Model for Surface Temperatures Emulation
[ "Shahine Bouabid", "Dino Sejdinovic", "Duncan Watson-Parris" ]
stat.AP
[ "stat.AP", "stat.ML" ]
Shahine Bouabid1, Dino Sejdinovic2, Duncan Watson-Parris3 1Department of Statistics, University of Oxford, Oxford, UK 2School of CMS & AIML, University of Adelaide, Adelaide, Australia 3Scripps Institution of Oceanography and Halicioğlu Data Science Institute, University of California, San Diego, US Shahine [email protected] * We introduce FaIRGP, a physically-informed Bayesian machine learning emulator for global and local mean surface temperatures. * The model outperforms both purely physically-driven and purely data-driven baseline emulators on several metrics across realistic future scenarios. * The model is fully mathematically tractable, which makes it a convenient and easy to use probabilistic tool for emulation of surface temperatures, but also for downstream applications such as detection and attribution or precipitation emulation. Emulators, or reduced complexity climate models, are surrogate Earth system models that produce projections of key climate quantities with minimal computational resources. Using time-series modeling or more advanced machine learning techniques, data-driven emulators have emerged as a promising avenue of research, producing spatially resolved climate responses that are visually indistinguishable from state-of-the-art Earth system models. Yet, their lack of physical interpretability limits their wider adoption. In this work, we introduce FaIRGP, a data-driven emulator that satisfies the physical temperature response equations of an energy balance model. The result is an emulator that (i) enjoys the flexibility of statistical machine learning models and can learn from observations, and (ii) has a robust physical grounding with interpretable parameters that can be used to make inference about the climate system. Further, our Bayesian approach allows a principled and mathematically tractable uncertainty quantification. Our model demonstrates skillful emulation of global mean surface temperature and spatial surface temperatures across realistic future scenarios. Its ability to learn from data allows it to outperform energy balance models, while its robust physical foundation safeguards against the pitfalls of purely data-driven models. We also illustrate how FaIRGP can be used to obtain estimates of top-of-atmosphere radiative forcing and discuss the benefits of its mathematical tractability for applications such as detection and attribution or precipitation emulation. We hope that this work will contribute to widening the adoption of data-driven methods in climate emulation. § PLAIN LANGUAGE SUMMARY Emulators are simplified climate models that can be used to rapidly explore climate scenarios — they can run in less than a minute on an average computer. They are key tools used by the Intergovernmental Panel on Climate Change to explore the diversity of possible future climates. Data-driven emulators use advanced machine learning techniques to produce climate predictions that look very similar to the predictions of complex climate models. However, they are not easy to interpret, and therefore to trust in practice. In this work, we introduce FaIRGP, a data-driven emulator based on physics. The emulator is flexible and can learn from data to improve its predictions, but is also grounded on physical energy balance relationships, which makes it robust and interpretable. The model performs well in predicting future global and local temperatures under realistic future scenarios, outperforming purely physics-driven or purely data-driven models. 
Further, the probabilistic nature of our model allows for mathematically tractable uncertainty quantification. By gaining trust in such a data-driven yet physically grounded model, we hope the climate science community can benefit more widely from their potential. § INTRODUCTION Earth system models (ESMs) <cit.> are key tools to understand current climate dynamics and climate change responses to greenhouse gas emissions. They constitute an extensive physical simulation of Earth's atmosphere and ocean fluid dynamics, used for example in the Couple Model Intercomparison Project <cit.> to study past and future climate. As such, they offer the most comprehensive view of what future climate could look like. They are also used as an idealized fully controlled environment to study climate dynamics and understand its underlying drivers. In particular, they play a central role in the estimation of key properties of the climate system such as timescales and equilibrium responses to the change in carbon dioxide concentration in the atmosphere <cit.> and the effect of aerosols <cit.>. Running simulations with an ESM requires an astute understanding of the climate science background, of the numerical schemes used to simulate climate dynamics, and access to an adequate computational infrastructure[As an order of magnitude, running the CESM2 model <cit.> for a single year ahead takes about 2000 core hours on a supercomputer.]. Therefore, only a limited number of research teams around the world can realistically afford to perform climate simulations. A direct consequence is that a variety of scientific applications relying on future climate projections — such as agricultural studies <cit.>, energy models <cit.> or global socio-economic human models <cit.> — must settle for publicly available precomputed climate projections that have been cherry-picked by climate scientists, and may not be tailored to the application needs. The need for expensive computational resources also serves as an important barrier, making it inaccessible for independent researchers and less well-equipped research teams to run experiments with ESMs. This exacerbates the already unequal representation in high-impact climate science research, where the global north is disproportionately represented <cit.>. Furthermore, even when the resource needs are met to run experiments with an ESM, their computational cost remains a critical obstacle. Indeed, the uncertainty over the ESM parameterisation, the climate system internal variability and the emission pathway the world will choose, together span a high-dimensional uncertainty space. Therefore, obtaining a comprehensive coverage of this uncertainty requires running numerous climate simulations, which quickly meets computational cost limitations. As a result, much of the climate variability and potential socio-economic pathways remain in practice unexplored. Together, these limitations have fostered the emergence of simpler surrogate ESMs which are inexpensive to compute and referred to as emulators. Unlike ESMs, emulators do not explicitly model the fluid dynamics of the atmosphere and oceans and focus on a limited number of climate features, such as surface temperatures or precipitations. They can run thousands of years of simulation in less than a minute on an average personal computer <cit.>, hence making accessible the emulation of climate projections under configurations unexplored by ESMs. 
An important class of emulators are simple climate models (SCMs) <cit.>, which propose a reduced order representation of the climate system that describes the changes in global surface temperature through imbalances in Earth's energy budget <cit.>. A well-established class of simple climate models are energy balance models (EBMs) <cit.>. They represent the atmosphere-ocean system as a set of connected boxes forced by radiative flux at the top of the atmosphere. Whilst they constitute a drastic simplification of the climate system, EBMs are robust physically-motivated models, and therefore remain the main tool used to connect the IPCC working physical basis research <cit.> with the adaptation and mitigation efforts <cit.>. Another important class of emulators are data-driven emulators. In contrast to SCMs, they do not explicitly model forcing dynamics, and rather rely on statistical modelling techniques to emulate key climate variables such as temperature or precipitation <cit.>. Statistical modelling enjoys greater flexibility and has produced powerful emulators, capable of successfully approximating ESMs' spatially resolved climate projections with visually indistinguishable outputs. More recently, statistically driven emulators drawing from advances in statistical machine learning have demonstrated a remarkable capacity at regressing global emission profiles onto spatially resolved temperature and precipitation maps <cit.>. However, both SCMs and data-driven emulators display fundamental limitations. Whilst EBMs provide a robust physical framework to reason about the climate system, they remain a simplistic model which may display a poor fit to ESM's outputs in certain scenarios <cit.> and can only operate at the global level. Reasoning only in terms of global mean temperatures fails to capture the difference in exposure of different world regions and limits use for scientific applications that require regional projections. On the other hand, data-driven emulators are also limited in their ability to provide a complete, reliable picture of the climate system. Indeed, their lack of robust physical grounding limits their capacity to make inference about the climate system <cit.>. Further, the outputs from these emulators are often subject to qualitative evaluation and may not be trustworthy[Whilst statistical explanability methods <cit.> may help understand the contributions to predictions, we argue that the lack of a physically grounded model would still harm trust in the predictions.], in part because of limited understanding on how they might behave outside the observed data regimes. In this work, we address these limitations by formulating a hybrid physical/statistical emulator that will both enjoy the robust physical grounding of SCMs, and the flexibility of statistical machine learning methods, thereby combining the advantages of both classes of emulators. We propose a probabilistic emulator for the task of reproducing an ESM temperature response to greenhouse gas and aerosols emissions. Our model builds upon a simple EBM, but places a Gaussian process (GP) prior over the radiative forcing, thereby inducing a stochastic temperature response model. We show that the resulting emulated temperature turns out to also be a GP, with a physically-informed covariance structure that reflects the dynamics of the EBM. In consequence, we can update the EBM with temperature data to learn a posterior distribution over global temperatures, but also over radiative forcing. 
Further, we demonstrate how our model can be easily extended to emulate spatially-resolved maps of annual surface temperatures. Experiments demonstrate skilful prediction of global and spatial mean surface temperatures, with improvements over a simple EBM and over a simple GP model. We obtain robust predictions even when training on historical data only, or when predicting over scenarios outside the range of greenhouse gas and aerosol emissions observed during training. Additionally, we show that the model improves over baselines in emulating temperature changes induced by anthropogenic aerosol emissions, and can provide useful estimates of the top-of-atmosphere radiative forcing. § BACKGROUND §.§ Energy Balance Models Energy balance models (EBMs) <cit.> are a lower-order representation of the climate system, in which changes in global temperature are explained by the imbalance in Earth's energy budget. The most common and established class of EBMs are box models. They represent the atmosphere and ocean as a set of vertically stacked boxes, where the uppermost box is exposed to the top-of-atmosphere radiative flux. In a box-model EBM, each box has its own heat capacity C_i, heat transfer coefficients κ_i with the adjacent boxes, and its own temperature T^(i)(t), as depicted in Figure <ref>. The uppermost box represents the fast components of the climate system, generally limited to the atmosphere, while the following boxes represent slower components of the climate system such as the shallow ocean and the deep ocean. Let T(t) = [ T^(1)(t) … T^(k)(t) ]^⊤ be the vector concatenation of the box temperatures in a model with k boxes. The temperature change of a k-box EBM is described by the simple first-order linear ordinary differential equation (ODE) dT(t)/dt = A T(t) + b F(t), where A is a tridiagonal temperature feedback matrix[the explicit form of the matrix is provided in Appendix <ref>] that depends on the heat capacities C_i, the heat transfer coefficients κ_i and the deep ocean uptake efficacy ε, and b is a radiative forcing feedback vector given by b = [ 1/C_1 0 … 0 ]^⊤. F(t) denotes the top-of-atmosphere effective radiative forcing — radiative forcing for short — i.e. the change in energy flux caused by natural or anthropogenic factors of climate change. It is only applied to the surface box, i.e. the uppermost box. Whilst physically motivated, box models remain an abstract representation of the climate system. Their parameters, such as the boxes' heat capacities, are therefore not physical quantities that can be directly calculated, but rather need to be tuned against data using calibration methods <cit.> or maximum likelihood strategies <cit.>. The simplest box models use only k = 2 boxes, thereby splitting the climate system into two groups: fast and slow components. Whilst reductive, this split has proven to be a robust approximation of the climate system <cit.>. In fact, 2-box EBMs are today the primary tool used for linking the IPCC physical basis research <cit.> with the adaptation and mitigation efforts <cit.>. It is however worth noting recent advocacy for 3-box models <cit.>, highlighting the insufficiency of 2-box models to capture the full range of behaviour observed in CMIP6 models <cit.>. §.§ Impulse response formulation The main box of interest in a box model is the uppermost box, since it describes the global mean surface temperature response T^(1)(t) to the radiative forcing F(t). In general, computing T^(1)(t) can be simplified by diagonalising the ODE (<ref>).
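To make the box-model dynamics concrete before turning to the diagonalised impulse-response formulation discussed next, the following minimal sketch integrates a two-box version of the ODE above with a forward Euler scheme. All parameter values and the idealised forcing ramp are purely illustrative and do not correspond to the calibrated values used elsewhere in this work.

import numpy as np

# Illustrative 2-box EBM parameters (not calibrated values)
C1, C2 = 8.0, 100.0   # heat capacities of the surface and deep-ocean boxes (W yr m^-2 K^-1)
lam = 1.2             # climate feedback parameter (W m^-2 K^-1)
kappa = 0.7           # heat transfer coefficient between the boxes (W m^-2 K^-1)
eps = 1.0             # deep-ocean heat uptake efficacy (dimensionless)

# Feedback matrix A and forcing vector b of dT(t)/dt = A T(t) + b F(t)
A = np.array([[-(lam + eps * kappa) / C1, eps * kappa / C1],
              [kappa / C2, -kappa / C2]])
b = np.array([1.0 / C1, 0.0])

# Idealised linear forcing ramp over 200 years (W m^-2)
n_years = 200
F = np.linspace(0.0, 4.0, n_years)

# Forward Euler integration with a 1-year time step
T = np.zeros((n_years, 2))
for t in range(n_years - 1):
    T[t + 1] = T[t] + A @ T[t] + b * F[t]

print("Surface box warming after 200 years: %.2f K" % T[-1, 0])

In practice, rather than integrating the coupled system directly, the temperature response is obtained through the diagonalised impulse-response formulation presented below.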
The result is an equivalent impulse response formulation of the temperature response model, depicted on the right in Figure <ref>, and referred to as the thermal response ODE <cit.>. Let us rename the temperature of the first box as T(t) for simplicity. Then, for a k-box model, the thermal response ODE is formally given by { Ṣ_i(t)/ṭ = 1/d_i(q_i F(t) - S_i(t)), 1≤ i≤ k T(t) = ∑_i=1^k S_i(t), . where S_i(t) is the ith thermal response, d_i is the ith response timescale and q_i is the ith equilibrium response. Table <ref> provides a brief description of named parameters and units. The primary benefit of the impulse response formulation is that each thermal response ODE can be solved independently, thereby avoiding the intricacies of solving a coupled system of ODE. Further, the timescale and equilibrium parameters d_i and q_i can be expressed in terms of the original boxes heat capacities C_i and heat transfer coefficients κ_i. Detailed derivations can be found in <cit.>. Throughout this work, we will use as a reference SCM FaIRv2.0.0 <cit.> (for Finite amplitude Impulse Response), a recent update of a well established SCM <cit.>, that offers a minimal level of structural complexity. FaIRv2.0.0 is effectively composed of 3 submodels: a gas cycle model, which converts emissions to concentrations, a radiative forcing model, which converts concentrations to radiative forcing, and a temperature response model, which converts radiative forcing into temperatures. The temperature response model of FaIRv2.0.0 exactly corresponds to the impulse response EBM described in (<ref>). We refer to reader to the work of leach2021fairv2 for a comprehensive presentation of FaIRv2.0.0. In the rest of the paper, we permit ourselves to drop the v.2.0.0 and will refer to the model as FaIR. §.§ Gaussian processes Gaussian processes (GPs) <cit.> are a ubiquitous class of Bayesian priors over real-valued functions. They enjoy convenient closed-form expressions, principled uncertainty quantification, and cover a rich class of complex functions. As a result, they have been widely used in various nonlinear and nonparametric regression problems in geosciences <cit.>. We say that a real-valued stochastic process function (t) is a GP if any finite collection of its evaluations has a joint multivariate normal distribution. A GP (t) is fully determined by its mean function m(t) and covariance function k(t, t'). Formally, we write (t)∼(m, k), where the mean and covariance functions are defined as m(t) = [(t)], k(t, t') = Cov((t), (t')). m(t) and k(t, t') are typically user-specified. The covariance function, commonly called kernel, is a positive definite bivariate function that computes a notion of similarity between t and t'. For example, a widely used family of kernels are the Matérn kernels <cit.>, where the one-dimensional Matérn-1/2 kernel is given by k(t, t') = σ^2 exp(-|t-t'|/ℓ), where σ^2 is a variance hyperparameter and ℓ a lengthscale hyperparameter. More broadly, kernels (and hence GPs) can also be defined over multidimensional inputs, for example by substituting |t - t'| by the Euclidean distance between input vectors. When each dimension has a different lengthscale hyperparameter, the kernel is referred to as an automatic relevance determination kernel <cit.>. Let = [ y_1 … y_n ]^⊤ denote independent observations at inputs = [ t_1 … t_n ]^⊤ of the noisy observation process = (t) +, where ∼(0, σ^2). It is possible to inform the GP with these observations, thereby updating the prior into a posterior. 
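As a complement to this background, the Matérn-1/2 kernel defined above and the construction of a GP prior covariance matrix can be sketched in a few lines. The variance, lengthscale and time grid below are arbitrary illustrative choices; the closed-form posterior update that incorporates observations is given next.

import numpy as np

def matern12(t1, t2, sigma2=1.0, lengthscale=10.0):
    # Matern-1/2 kernel k(t, t') = sigma^2 * exp(-|t - t'| / lengthscale)
    dists = np.abs(t1[:, None] - t2[None, :])
    return sigma2 * np.exp(-dists / lengthscale)

# Prior covariance (Gram) matrix over a grid of times and sample paths from GP(0, k)
t = np.arange(1850.0, 2101.0)
K = matern12(t, t)
L = np.linalg.cholesky(K + 1e-8 * np.eye(len(t)))   # small jitter for numerical stability
samples = L @ np.random.randn(len(t), 3)            # three draws from the GP prior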
In particular, the posterior distribution gives rise to a posterior GP, with a posterior mean function m̅(t) and a posterior covariance function k̅(t, t'), (t)|∼(m̅, k̅). The posterior mean and covariance enjoy a closed-form expression given by m̅(t) = m(t) + k(t, )( + σ^2_n)^-1 ( - m()), k̅(t, t') = k(t, t') - k(t, )( + σ^2_n)^-1k(, t), where _n denotes the identity matrix of size n, k(t, ) = [ k(t, t_1) … k(t, t_n) ], k(, t) = k(t, )^⊤ and = k(, ) = [ k(t_i, t_j) ]_1≤ i, j≤ n. <ref> provides an illustrative example on GP regression. For many more details on GPs, we refer the reader to rasmussen2005gaussian. § FAIRGP In this section, we present FaIRGP, a GP emulator for global mean surface temperature that leverages the thermal response model from FaIR. We begin by motivating a Bayesian treatment of the radiative forcing and formulate a GP prior over the forcing. We show this modification naturally results in a stochastic formulation of the thermal response model, which admits a GP with a physically-informed covariance structure for solution. In addition, we show how this framework can seamlessly account for the climate internal variability. Finally, we provide closed-form expressions for the posterior distributions over temperature and forcing, which can readily be used for emulation. In what follows, we adopt the following notational conventions: deterministic variables are denoted with serif font (Z), stochastic variables are denoted with sans-serif font () and vector/matrix versions are denoted in bold (, ). §.§ Radiative forcing as a Gaussian process The top-of-atmosphere radiative forcing is the fundamental quantity used to describe Earth's energy imbalance. The value of this forcing is primarily determined by the emissions of greenhouse gases and aerosols in the atmosphere. In FaIR, the radiative forcing term F(t) drives the temperature response model (<ref>). Having access to a reliable estimate of the forcing, and in particular to the forcing response to greenhouse gas and aerosol emissions, is therefore critical to produce trustworthy emulated temperatures. Computing an accurate estimate of the radiative forcing requires modelling how greenhouse gases and aerosols interact with radiations in the atmosphere, accounting for atmospheric adjustments and uncertainty in the measured parameters which may further complicate the calculation. The intricacy of such task has lead to the development of forcing estimation methods which rely on simplifying assumptions. For example, in FaIR, the forcing model is formulated as a combination of logarithmic, linear and square-root terms of greenhouse gas and aerosols concentrations. Namely, let χ denote a given atmospheric agent (e.g. CO2, CH4, SO2), the radiative forcing induced by χ is modeled as F^χ(t) = α_log^χlog(C^χ(t)/C_0^χ) + α_lin^χ(C^χ(t) - C_0^χ) + α_sqrt^χ(√(C^χ(t)) - √(C_0^χ)), where C^χ(t) denotes the concentration in the atmosphere of agent χ, C_0^χ the agent concentration at pre-industrial period and α_log^χ, α_lin^χ, α_sqrt^χ are scalar sensitivity coefficients. The total radiative forcing is then obtained by combining the contribution of each agent F(t) = ∑_χ F^χ(t). This choice of forcing model is motivated by studies of temperature and historical trajectories of these concentrations  <cit.>, which have provided substantial evidence that the concentration-to-forcing relationship can be reasonably approximated by the combination of terms in (<ref>). 
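To illustrate the structure of this forcing model, the sketch below evaluates the per-agent forcing terms given above and sums the contributions across agents. The sensitivity coefficients and pre-industrial concentrations are placeholder values chosen for illustration only; they are not the coefficients fitted in FaIR.

import numpy as np

def agent_forcing(C, C0, a_log=0.0, a_lin=0.0, a_sqrt=0.0):
    # F_agent = a_log * log(C / C0) + a_lin * (C - C0) + a_sqrt * (sqrt(C) - sqrt(C0))
    return (a_log * np.log(C / C0)
            + a_lin * (C - C0)
            + a_sqrt * (np.sqrt(C) - np.sqrt(C0)))

# Placeholder coefficients and pre-industrial concentrations (illustrative only)
agents = {
    "CO2": dict(C0=278.0, a_log=4.6),       # logarithmic term dominates for CO2
    "CH4": dict(C0=720.0, a_sqrt=0.04),     # square-root term for CH4
    "SO2": dict(C0=2.0, a_lin=-0.004),      # linear (negative) term for an aerosol precursor
}

def total_forcing(concentrations):
    # F(t) = sum over agents of F_agent(t)
    return sum(agent_forcing(concentrations[name], **params)
               for name, params in agents.items())

F_example = total_forcing({"CO2": 410.0, "CH4": 1870.0, "SO2": 60.0})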
However, it is important to emphasise that (<ref>) does not propose a robust physically-motivated model, but is rather more akin to statistical modelling, where the coefficients α_log^χ, α_lin^χ, α_sqrt^χ need to be fitted against climate model data. In fact, the values of the coefficients can vary substantially depending on the data used and the fitting procedure <cit.>. Because the radiative forcing is a key driving quantity of the climate system, and therefore of the emulator, we argue it is critical to account for the uncertainty introduced by the FaIR model of radiative forcing. Specifically, this choice of simplified model of forcing reflects uncertainty caused by the lack of knowledge about the phenomenon we wish to describe, referred to as epistemic uncertainty <cit.>. We propose to account for this epistemic uncertainty over the forcing model through a Bayesian formalism, by placing a GP prior over the radiative forcing. We propose to use the deterministic forcing function F(t) as the mean function of our GP, thereby ensuring that our prior will behave in expectation like the FaIR forcing model. To model uncertainty, we introduce a covariance function K(t, t') acting over past emissions trajectories of the form K(t, t') = ρ(E(t),E(t')), where E(t) = [ E^χ_1(t) … E^χ_d(t) ]^⊤ is a vector that represents emissions — or cumulative emissions — of atmospheric agents χ_1, …, χ_d at a given time, and ρ is a user-specified kernel. We assume throughout a simple Matérn-3/2 kernel with automatic relevance determination for ρ and discuss more sophisticated options in Section <ref>. The prior we specify over the forcing is then formally defined by (t) ∼(F, K), where we emphasise that (t) is now a stochastic process[The prior is effectively a function of time and emissions (t, E(t)), but we permit ourselves to drop notations and write (t) for the sake of conciseness.] by using a sans-serif notation. This Bayesian formulation therefore directly hinges on the forcing model from (<ref>), but introduces a notion of uncertainty quantification, thereby accounting for the epistemic uncertainty over the forcing response. §.§ Thermal response model with GP forcing We will now show that by combining the GP prior over the radiative forcing and the FaIR thermal response model, we obtain a GP prior over temperatures, which we name FaIRGP. Recall the thermal impulse response model presented in Section <ref>, described by a system of k independent linear first order ODEs Ṣ_i(t)/ṭ = 1/d_i (q_i F(t) - S_i(t)), where the thermal responses S_i(t) are such that T(t) = ∑_i=1^k S_i(t) is the global mean surface temperature. We propose to substitute the deterministic forcing function F(t) in (<ref>) by its Bayesian counterpart (t). In doing so, we naturally induce a stochastic version of the thermal impulse response model. Namely, the resulting ith stochastic thermal response, which we denote _i(t), will satisfy the linear stochastic differential equation (SDE) given by _i(t) = 1/d_i (q_i (t) - _i(t))ṭ. The resolution of this thermal response SDE is similar to the resolution of the thermal response ODE. Namely, assuming _i(0) = 0 at pre-industrial time, the solution to (<ref>) takes the canonical form _i(t) = q_i/d_i∫_0^t (s) e^-(t-s)/d_iṣ. However, the solution is now a stochastic process. Specifically, because GPs are stable under linear transformations, _i(t) must also be a GP, with a mean and covariance function shaped by the form of this linear transformation. 
In our case, the linear transformation is given by the convolution operator with the exponential function in (<ref>). Therefore, the ith thermal response can be characterised as a GP with the following mean and covariance functions { _i(t) ∼(m_i, k_ii) m_i(t) = q_i/d_i∫_0^t F(s) e^-(t-s)/d_iṣ k_ii(t, t') = (q_i/d_i)^2∫_0^t∫_0^t' K(s, s') e^-(t-s)/d_ie^-(t'-s')/d_iṣṣ'. . We observe that the mean function m_i(t) exactly corresponds to the solution of the deterministic thermal response ODE (<ref>). Hence, similarly to the GP forcing (t), the GP thermal response _i(t) will behave in expectation like the FaIR thermal response model. Further, the covariance k_ii(t, t') is expressed as a function of the forcing prior covariance K, but also of the parameters of the EBM, d_i and q_i. As such, k_ii(t, t') defines a physically-informed covariance structure that propagates the uncertainty over the forcing (t) — specified by our Bayesian prior — into uncertainty over the thermal response _i(t). If we now define the global mean surface temperature as the sum of thermal response GPs (t) = ∑_i=1^k _i(t), then we can show that (t) must also be a GP. Namely, let us define the cross-covariance function between thermal responses _i(t) and _j(t') as k_ij(t, t') := Cov(_i(t), _j(t')) = q_i q_j/d_i d_j∫_0^t∫_0^t' K(s, s') e^-(t-s)/d_ie^-(t'-s')/d_jṣṣ'. Then, the global mean surface temperature is a GP specified by the following mean and covariance functions { (t) ∼(m_, k_), m_(t) = ∑_i=1^k m_i(t), k_(t, t') = ∑_i=1^k∑_j=1^k k_ij(t,t'). . By specifying a GP prior over the radiative forcing, we have obtained a GP prior over the temperature that uses the FaIR thermal response model as its backbone, which we name FaIRGP. Using FaIRGP serves as a principled measure of epistemic uncertainty over the emulator design. In particular, the integral form of its covariance function allows to account for past trajectories, thereby capturing the climate system memory effect, and producing robust uncertainty estimates. In addition, as we will see in Section <ref>, FaIRGP can go beyond a standard impulse response model by learning from data using standard GP regression techniques. We emphasise that whilst we abuse notations for conciseness, the forcing prior (t) is effectively a function of emissions (or cumulative emissions) through its covariance function ρ(E(t), E(t')). Therefore, (t) is also a function of emissions, and its covariance function can be understood as k_(E(t), E(t')). §.§ Accounting for climate internal variability An important component of the climate system is its internal variability, which integrates the effects of weather phenomena — typically operating on the scale of days — into elements of the climate system, such as the ocean, cryosphere and land vegetation — which rather operate on the scale of months, years or decades <cit.>. The climate internal variability can classically be modeled in a k-box model by introducing a white noise forcing disturbance over the uppermost box, i.e. the atmosphere box <cit.>. Formally, let (t) be the standard one-dimensional Brownian motion and let (t) denote a stochastic version of the k-box model temperatures from (<ref>). The temperature response model with internal variability is given by (t) = (t)ṭ + F(t)ṭ + σ(t), where σ > 0 is a variance term that controls the amplitude of the white noise, and we recall that has zero everywhere but its first entry, therefore the white noise disturbance is only applied to the uppermost box. 
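For a general forcing covariance K, the double integrals defining k_ii and k_ij have no closed form, but they are straightforward to approximate numerically. The sketch below uses a simple Riemann sum on an annual grid, with an arbitrary squared-exponential kernel standing in for K(s, s') and illustrative timescales d_i and equilibrium responses q_i.

import numpy as np

d = np.array([4.0, 250.0])   # illustrative response timescales d_i (years)
q = np.array([0.4, 0.5])     # illustrative equilibrium responses q_i (K W^-1 m^2)

def forcing_cov(s, s_prime, sigma2=0.25, ell=20.0):
    # Stand-in for the forcing covariance K(s, s'), here a squared-exponential kernel in time
    return sigma2 * np.exp(-0.5 * (s[:, None] - s_prime[None, :]) ** 2 / ell ** 2)

def thermal_response_cov(t, t_prime, i, j, ds=1.0):
    # Riemann-sum approximation of
    # k_ij(t, t') = (q_i q_j / (d_i d_j)) * int_0^t int_0^t' K(s, s') e^{-(t-s)/d_i} e^{-(t'-s')/d_j} ds ds'
    s = np.arange(0.0, t, ds)
    s_prime = np.arange(0.0, t_prime, ds)
    K = forcing_cov(s, s_prime)
    decay = np.exp(-(t - s) / d[i])[:, None] * np.exp(-(t_prime - s_prime) / d[j])[None, :]
    return (q[i] * q[j]) / (d[i] * d[j]) * np.sum(K * decay) * ds * ds

# Temperature covariance k_T(t, t') = sum_i sum_j k_ij(t, t')
k_T = sum(thermal_response_cov(150.0, 150.0, i, j) for i in range(2) for j in range(2))

Evaluating such covariances on grids of training and prediction times is what yields the physics-informed Gram matrices used for inference below; we now return to the white-noise forcing disturbance introduced above to model the climate internal variability.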
When the radiative forcing F(t) is deterministic, this is equivalent to adding red noise onto the global mean surface temperature, or equivalently, modelling it as an Ornstein-Uhlenbeck process[In the literature, it is common to encounter its discrete time analogue, the autoregressive process of order 1 AR(1).]. The corresponding diagonalised impulse response system is given by _i(t) = 1/d_i(q_i F(t) - _i(t))ṭ + σq_i/d_i(t), where derivations are detailed in Appendix <ref>. If we now again substitute the deterministic forcing F(t) with the GP forcing (t), it turns out that the long time regime solution to this stochastic impulse response system can be expressed as _i(t) ∼(m_i, k_ii + σ^2γ_i), where γ_i(t, t') is an exponential, or Matérn-1/2, kernel function given by γ_i(t, t') = q_i^2/2d_iexp(-|t - t'|/d_i). Therefore, FaIRGP can easily model the internal variability of the climate system simply by modifying its covariance structure. We observe that accounting for internal variability essentially corresponds to adding an independent autocorrelated noise process _i(t) ∼(0, σ^2γ_i) to the thermal response GP obtained in (<ref>). Ultimately, we sum the thermal response together to obtain a surface temperature GP (t) = ∑_i=1^k _i(t), which in the long time regime takes the form { (t) ∼(m_, k_ + σ^2 γ_) γ_(t, t') = ∑_i=1^k ν_i γ_i(t, t'), . where ν_i is a dimensionless weight determined by q_i, d_i which accounts for cross-covariances between thermal responses internal variabilities. Its detailed expression can be found in Appendix <ref>. Therefore, we can account for the climate internal variability simply by adding an independent noise process _(t) = ∑_i=1^k _i(t) ∼(0, σ^2γ_) to the temperature response GP obtained in (<ref>). §.§ Posterior distribution over temperature and radiative forcing In FaIRGP, the surface temperature response and the radiative forcing model can both be informed by global temperature observations. Using standard GP regression techniques <cit.>, FaIRGP can learn from data how to deviate from its backbone impulse response model to best account for actual temperature observations. The resulting model can then be used to emulate future temperatures, benefiting from both the robustness of its prior, and the flexibility of GP regression. As we will demonstrate it in Section <ref>, we can inform FaIRGP with historical observations and climate model data. Assume we are under a fixed emission scenario e.g. working with historical emissions only. For observation times t_1 < … < t_n, suppose we observe global mean surface temperatures T_1, …, T_n, and also have access to greenhouse gas and aerosols emissions data E_1, …, E_n ∈^d, where d corresponds to the number of atmospheric agents χ_1, …, χ_d. We concatenate these observations into = [ t_1 … t_n ]^⊤, = [ E_1 … E_n ]^⊤ and = [ T_1 … T_n ]^⊤. With notation abuse, the kernel k_ effectively evaluates the covariance between times t_i and t_j of the FaIRGP prior following k_(t_i, t_j) = k_(E_i, E_j) because the forcing prior kernel ρ is a function of emissions. The internal variability kernel γ_ evaluates the covariance between times t_i and t_j of the additive variability component following γ_(t_i, t_j). We can therefore define the Gram matrices induced by k_ and γ_ over and , i.e. = k_(, ) = [ k_(E_i, E_j) ]_1≤ i,j≤ n = γ_(, ) = [ γ_(t_i, t_j) ]_1≤ i, j≤ n. 
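The internal-variability kernel and the corresponding Gram matrix can be assembled as in the following sketch, on a hypothetical grid of training times and with illustrative EBM parameters. For simplicity the weights ν_i are set to one here, whereas in the model they are determined by q_i and d_i; the physics-informed Gram matrix over temperatures would be built analogously from its own kernel.

import numpy as np

q, d = np.array([0.4, 0.5]), np.array([4.0, 250.0])   # illustrative EBM parameters
t_train = np.arange(1850.0, 2015.0)                   # hypothetical training times

def gamma_i(t1, t2, q_i, d_i):
    # gamma_i(t, t') = q_i^2 / (2 d_i) * exp(-|t - t'| / d_i)
    dists = np.abs(t1[:, None] - t2[None, :])
    return q_i ** 2 / (2.0 * d_i) * np.exp(-dists / d_i)

def gamma_T(t1, t2, nu=None):
    # gamma_T(t, t') = sum_i nu_i * gamma_i(t, t'); nu_i set to 1 here purely for illustration
    nu = np.ones(len(q)) if nu is None else nu
    return sum(nu_k * gamma_i(t1, t2, q_k, d_k) for nu_k, q_k, d_k in zip(nu, q, d))

Gamma = gamma_T(t_train, t_train)   # internal-variability Gram matrix over the training times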
Using these Gram matrices, we can update our prior over (t) with observations to obtain a posterior GP over global mean surface temperatures given by (t)|∼(m̅_, k̅_), where the posterior mean m̅_(t) and the posterior covariance k̅_(t, t') have the following closed-form expressions m̅_(t)_posterior mean = m_(t)_prior mean + k_(t, )( + σ^2 )^-1( - m_())_posterior correction k̅_(t, t')_posterior covariance = k_(t, t')_prior covariance - k_(t, )( + σ^2 )^-1k_(, t)_posterior correction. The posterior mean m̅_(t) explicitly corrects the prior mean m_(t), which exactly corresponds to the FaIR deterministic temperature response T(t). The correction introduced by the posterior mean in (<ref>) is a data-driven deviation term, informed by the observed temperatures , observation times and emissions in the matrix . A data-corrected uncertainty quantification is similarly provided by the posterior covariance k̅_(t, t'). This posterior can then be used to emulate temperatures by making prediction at future time steps. A major interest of this formulation is that whilst the emulator hinges on robustness of the energy balance model for its prior, it can also benefit from the flexibility of GP regression by informing it with data in its posterior, thereby going beyond a simple impulse response model. In addition, the GP approach enjoys full probabilistic tractability, and detailed analytical expression of the posterior probability distribution are provided in Appendix <ref>. Similarly, we can also update the radiative forcing with observations of temperatures to formulate a posterior distribution over the forcing. Indeed, because the radiative forcing (t) is also GP, it is jointly Gaussian with the temperature (t). Namely, we can define a cross-covariance function as k_(t, t') := Cov((t), (t')) = ∑_i=1^kq_i/d_i∫_0^t'K(t, s')e^-(t'-s')/d_iṣ'. Therefore, it is also possible to update our prior over (t), resulting in a posterior distribution over the radiative forcing given by { (t)|∼(m̅_, k̅_) m̅_(t) = F(t) + k_(t, )( + σ^2 )^-1( - m_()) k̅_(t, t') = K(t, t') - k_(t, )( + σ^2 )^-1k_(t, )^⊤. . By updating the radiative forcing with observations of temperatures, we obtain an estimates of the forcing that corrects for the observations . Like for temperatures, the posterior mean m̅_(t) corrects the FaIR forcing model F(t) with a data-informed inductive bias, and corresponding uncertainty quantification provided by the posterior covariance k̅_(t,t'). Further, we can verify that solving the thermal response SDE for the posterior forcing (t)| yields the posterior temperature (t)| as the solution. Therefore, the forcing posterior and the temperature posterior are consistent with each other. Finally, we note that access to a closed-form probability density for the prior allows us to tune the model parameters using a maximum likelihood strategy. Specifically, we may want to tune model parameters such as the internal variability magnitude σ, parameters of the prior kernel K(t, t'), but also FaIR parameters such as d_i, q_i or the forcing model coefficients α_log^χ, α_lin^χ, α_sqrt^χ. This can be achieved by using for example a stochastic gradient approach to maximise the marginal log-likelihood log p(|, ) with respect to the model parameters. The analytical expression of the marginal log-likelihood is provided in Appendix <ref>. §.§ Spatial FaIRGP Whilst informative, the global mean surface temperature fails to capture the difference in exposure of world regions to a changing climate. 
It is therefore a necessity to invest efforts in obtaining spatially-resolved climate projection. We propose an extension of FaIRGP to emulate spatially-resolved surface temperature maps that grounds itself on a pattern scaling prior. Pattern scaling is a well-established technique to model changes in local surface temperature as a function of changes in global mean surface temperature <cit.>. It consists in a simple scaling of a fixed spatial pattern by global mean temperature changes. Whilst very simple, such approach is supported by findings that regional changes in temperature scale robustly with global temperature <cit.>, and has been successfully used in existing spatial temperature emulation models <cit.>. Formally, pattern scaling assumes that for a given spatial location x, the local temperature response T(x,t) is given by T(x,t) = β^(x) T(t) + β^(x)_0, with regression coefficient β^(x) and intercept β^(x)_0. These coefficients are typically obtained by fitting independent local linear regression models of global temperature T(t) onto local temperature T(x,t). If we substitute the deterministic global temperature response T(t) with its GP version (t), we therefore obtain a local FaIRGP prior temperature response given by (x,t) ∼(β^(x) m_ + β^(x)_0, (β^(x))^2 k_), which admits the local pattern scaling temperature response for its mean, and a locally scaled covariance. As for the global response, this local prior can be updated with local temperature observations to obtain a posterior temperature response at spatial location x. This allows FaIRGP to learn from data how to deviate from a fixed spatial pattern in order to better account for observations. § EXPERIMENTAL SETUP In this section, we introduce the dataset used in our emulation experiments, the baseline models we benchmark FaIRGP against, and the evaluation metrics. Code and data to reproduce experiments are publicly available[<https://github.com/shahineb/FaIRGP>]. §.§ Dataset description The data is obtained from the ClimateBench v1.0 <cit.> climate emulation benchmark dataset. ClimateBench v1.0 proposes a curated dataset of annual mean surface temperature and emissions for four of the main anthropogenic forcing agents: carbon dioxide (CO2), methane (CH4), sulfur dioxide (SO2) and black carbon (BC). The temperature data is generated from the latest version of the Norwegian Earth System Model (NorESM2) <cit.> as part of the sixth coupled model intercomparison project (CMIP6) <cit.>. The emission data is constructed from the exact input data used to drive the original NorESM2-LM simulations. We use as inputs for FaIRGP the global cumulative emissions of CO2 and global emissions of CH4, SO2 and BC. The dataset includes the CMIP6 historical experiment and four experiments corresponding to different possible shared socio-economic pathways (SSPs) from the ScenarioMIP protocol <cit.>: SSP126, SSP245, SSP370 and SSP585. These scenarios are designed to span a range of emissions trajectories corresponding to plausible mitigation scenarios and end-of-century forcing possibilities. Table <ref> provides a brief description of these experiments and the period they cover. Whilst our experiments are effectively conducted on data from a single climate model, we argue they still provide a valid assessment of an emulator's ability to reproduce climate models outputs. 
Indeed, the response characteristics of different CMIP6 models are similar, and it has been shown that even when they differ most, SCMs are still capable of capturing the variety of forcing responses spanned <cit.>. Therefore, there are reasons to believe that the insights of experiments conducted on NorESM2 data should carry over to data from other CMIP6 models. §.§ Baseline emulators To develop intuition on the benefit of combining FaIR with a GP, we propose to compare our model to the temperature projections obtained with (i) FaIR only and (ii) a purely data-driven GP regression model only. By comparing to the emulation with FaIR, we hope to highlight the benefit of combining the flexibility of a data-driven approach with an impulse response model. On the other hand, by comparing FaIRGP with a plain GP regression model, we hope to demonstrate the importance of having a robust physical prior underlying a data-driven approach. Both baseline models take as inputs greenhouse gas and aerosol emissions data, and predict temperature anomalies with respect to the pre-industrial period. For FaIR, we use parameter values that have been tuned against NorESM2 simulations <cit.>. Therefore, the FaIR model we use in experiments is fully deterministic with fixed parameter values. The plain GP model is entirely physics-free and uses a zero mean and a Matérn-3/2 kernel with automatic relevance determination. This is exactly the same kernel ρ we use in our prior over the forcing in FaIRGP, and it takes as inputs the global cumulative emissions of CO2 and the global emissions of CH4, SO2 and BC. This kernel is to be contrasted with the physics-informed kernel k_T used for temperatures in FaIRGP. We assume a standard Gaussian homoscedastic observation model for the plain GP. The observation noise and the kernel hyperparameters are tuned using marginal likelihood maximisation. §.§ Evaluation metrics We use two kinds of metrics to evaluate the predicted probabilistic surface temperatures: deterministic metrics, which compare only the posterior mean prediction to the ClimateBench temperature data, and probabilistic metrics, which evaluate the entire posterior probability distribution against the ClimateBench temperature data. Table <ref> provides a brief description of the metrics used. The log-likelihood (LL) score evaluates the log probability of the ground-truth temperature data under the predictive posterior distribution of our model. Therefore, a greater LL means that the predicted distribution is a better fit to the test data. The 95% calibration score (Calib95) computes the percentage of ground-truth temperature data that fall within the 95% credible interval of the predicted posterior distribution. Therefore, if the predicted probability distribution is well calibrated, this score should be close to 95%. The Continuous Ranked Probability Score (CRPS) is an extension of the RMSE to probability distributions, which measures the distance between the cumulative distribution functions of the predicted posterior distribution and of the test data. When evaluating predictions of spatial temperatures, we compute global mean metrics using a weighted mean that accounts for the decreasing grid-cell area toward the poles. Namely, we take ⟨Score⟩ = (1 / (N_lon ∑_{i=1}^{N_lat} w_i)) ∑_{i=1}^{N_lat} ∑_{j=1}^{N_lon} w_i Score_{i,j}, where w_i = cos(lat(i)).
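The area-weighted averaging of spatial scores can be implemented directly as in the sketch below; grid shapes and variable names are hypothetical.

import numpy as np

def latitude_weighted_mean(score_map, lats):
    # Area-weighted global mean of an (n_lat, n_lon) score map with weights w_i = cos(lat_i)
    w = np.cos(np.deg2rad(lats))
    return (w[:, None] * score_map).sum() / (score_map.shape[1] * w.sum())

def latitude_weighted_rmse(pred, target, lats):
    # Latitude-weighted RMSE between an emulated and a target temperature map
    return np.sqrt(latitude_weighted_mean((pred - target) ** 2, lats))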
§ APPLICATION: GLOBAL SURFACE TEMPERATURES EMULATION In this section, we benchmark FaIRGP against baseline models for the task of emulating global mean surface temperatures over SSP scenarios. We first briefly illustrate how the model applies on a concrete example. Then, we evaluate the emulated global temperature trajectories when the model is trained on historical and SSP data, and when trained on historical data only. Finally, we probe the potential of FaIRGP to emulate the top-of-atmosphere radiative forcing. §.§ FaIRGP for global temperature emulation We propose in Figure <ref> a concrete illustration of global mean surface temperature emulation for SSP370 with FaIRGP, which parallels the posterior mean and covariance expressions from (<ref>) and (<ref>). The prior temperature response over SSP370 admits as its mean the FaIR response, but with an additional layer of uncertainty quantification that arises from the GP. We then use temperature observations from the training scenarios {historical, SSP126, SSP245, SSP585} to learn a posterior correction. The posterior correction allows deviations from the prior — both in mean and variance — to provide a better fit to the training observations. Finally, by linearly adding the posterior correction to the prior, we obtain a posterior temperature response over SSP370. §.§ Shared socio-economic pathways emulation In this experiment, we consider the experiment dataset given by {historical, SSP126, SSP245, SSP370, SSP585}. We iteratively remove one SSP experiment from the dataset to construct a training set (e.g. retain SSP245 to obtain the training set {historical, SSP126, SSP370, SSP585}) and use it to train FaIRGP and the baseline GP model. We run predictions over the retained SSP experiment and evaluate the emulated global temperatures against NorESM2-LM data. We find that, on average, FaIRGP outperforms the baseline models on every evaluation metric, as reported in Table <ref>. These results suggest that FaIRGP provides a better emulation of the global temperature response to anthropogenic forcing than both FaIR and the plain GP model. We include in Appendix <ref> a comparison with the GP emulator from ClimateBench <cit.>. Figure <ref> shows that FaIRGP provides a better temperature projection than FaIR in the near future — over the 2015-2050 period — for all scenarios. This is because being informed by data grants FaIRGP the flexibility to deviate from the impulse response prior on which it hinges, and to provide predictions that are better aligned with the historical and near-future observations from the training set. However, further away from the training data, over the 2080-2100 period, FaIRGP reverts back to the prior behaviour of FaIR. This suggests that FaIRGP is better suited for emulating the global temperature response in the near future, and is at least as good as FaIR for longer-term emulation. The plain GP model provides excellent predictions on SSP245, and even outperforms FaIRGP by a slight margin on deterministic metrics for this scenario. This is because GPs are known to excel at interpolation tasks, and the SSP245 scenario is a medium forcing scenario that lies perfectly within the range of the other low and high forcing scenarios used in the training data. However, when the evaluation scenario lies outside the range of the training data, the plain GP model predictions simply revert to the prior, and are therefore much less reliable.
This is an important drawback of purely data-driven emulation, which by essence are interpolation models, and struggle to extrapolate outside the range of the training set. In Figure <ref>, this is particularly evident in the plain GP prediction on SSP585, where the model struggles at the end of the century due to unprecedented forcing levels that have not been seen in the training data. FaIRGP addresses this shortcoming by using FaIR as its prior, and therefore naturally reverts to an impulse response model when the prediction scenario is too distant from the training data. Finally, FaIRGP provides a tighter and more robust uncertainty quantification over emulated temperatures compared to the plain GP model, which tends to overestimate the variance. This is reflected in Table <ref> by a substantial improvement in LL of FaIRGP against the Plain GP, and suggests that the physics-informed covariance structure of FaIRGP is a sound choice to quantify emulator uncertainty. Further, the model formulated is able to quantify the uncertainty introduced by the internal variability separately from the uncertainty introduced by the emulator design. As shown in Figure <ref>, the uncertainty due to internal variability dominates in the near future, but the emulator uncertainty becomes predominant toward the end of the century, in accordance with the well-established assessment of hawkins2009potential. §.§ Emulation from historical observations In this experiment, we propose to investigate how well can the emulators generalise when they are solely trained on historical observations, and no simulated future data[with the exception of the FaIR parameters which have been calibrated using NorESM2 outputs for future scenarios.]. We train the plain GP and FaIRGP using the historical experiment only, and evaluate predictions on all SSP scenarios. The results are reported in Table <ref>. We find that FaIRGP outperforms other models in almost every metric, and displays performance similar to FaIR only in mean bias. In this context a purely data-driven method struggles to extrapolate because it cannot marry the new previously unseen emission values with the underlying physical model. Figure <ref> shows that the plain GP posterior immediately reverts to its prior after 2015, therefore providing uninformative temperature projections. On the other hand, FaIRGP manages to learn from historical observations to provide a better fit to the SSP over the 2015-2050 period, and slowly reverts back to the prior behavior of FaIR toward the end of the century. This demonstrates the importance of having a robust underlying physical model to an emulator, and further suggests that FaIRGP is a well suited candidate for this task, and can provide meaningful temperature projections based only on historical observations. §.§ Emulating radiative forcing from temperatures In this experiment, we propose to evaluate how global surface temperature anomaly data can be used to inform an estimate of the effective radiative forcing using FaIRGP. We use the complete dataset of global temperature time series from the historical and SSP126, SSP245, SSP370, SSP585 experiments, which are depicted in Figure <ref>. We update the GP prior placed over the radiative forcing (t) with temperature data to obtain a posterior radiative forcing response which incorporates information from temperature observations. We emphasise that the posterior over (t) does not use any forcing observations, only temperature observations. 
As for temperature emulation, the posterior radiative forcing response is given in closed-form by (<ref>). We evaluate the posterior historical radiative forcing response against NorESM2-LM historical top-of-atmosphere radiative forcing. Figure <ref> illustrates that the posterior forcing response learns from temperature time series how to deviate from the FaIR forcing trend to better account for observations. This is particularly evident between 1960 and 1980 where the posterior reproduces a decrease in forcing to better account for the global cooling trend in that time period, which FaIR struggles to capture. The results reported in Table <ref> show that FaIRGP outperforms FaIR in RMSE and Bias at emulating historical global radiative forcing. Whilst FaIR performs better at emulating the stable forcing before 1950, FaIRGP better accounts for the forcing variations after 1950 and outperforms FaIR by a significant margin. FaIRGP therefore proves to be useful to emulate global radiative forcing trends informed by surface temperatures. This is possible because our prior is specified within an energy balance model which explicitly connects the changes in surface temperatures to the changes in radiative forcing. On the contrary, this would not be possible with the plain GP model because it ignores forcing dynamics. § APPLICATION: SPATIAL SURFACE TEMPERATURES EMULATION In this section, we pursue the comparison of FaIRGP to baseline models, but for the task of emulating spatially-resolved temperatures. As for global emulation, we start by briefly illustrating how the model concretely applies. Then, we benchmark it against baseline models for SSP emulation and evaluate the emulation of surface temperatures forced by anthropogenic aerosols only. We conclude by investigating how FaIRGP can help emulate spatial top-of-atmosphere radiative forcing maps. §.§ FaIRGP for spatial temperatures emulation Figure <ref> illustrates the spatial surface temperature anomaly emulation with FaIRGP for test scenario SSP245. As in the global case, a prior response based on FaIR is first specified, and then shifted by a data-informed posterior correction map. The posterior correction is learned from training scenarios _train = {historical, SSP126, SSP370, SSP585}. The prior spatial response is constructed using a pattern scaling model trained on _train, and therefore introduces a fixed spatial pattern that is only rescaled by changes in global mean temperature. The posterior correction at location x is obtained by updating the prior over (x,t) from (<ref>) with local surface temperature observations from _train. The posterior correction maps effectively provide a data-driven way to deviate from this fixed spatial pattern, and better account for the possibly varying spatial temperature patterns of the emulated scenario. Finally, by linearly adding the posterior correction map to the prior, we obtain a posterior spatial temperature response over SSP245. §.§ Shared socio-economic pathways emulation In this experiment, we follow the same procedure as in the global emulation experiment and iteratively train models on the dataset deprived from one SSP scenario, then use the retained scenario as a test scenario for evaluation. We benchmark FaIRGP against a FaIR pattern scaling model and a purely data-driven plain GP emulator. The pattern scaling model is obtained by fitting a linear regression model using the same training data as the other models. 
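As a reference, a pattern-scaling baseline of this kind can be fitted with independent least-squares regressions at each grid cell, as in the following sketch; array shapes and names are hypothetical.

import numpy as np

def fit_pattern_scaling(global_T, local_T):
    # Fit local_T[:, i, j] ~ beta[i, j] * global_T + beta0[i, j] independently at each grid cell
    # global_T : (n_years,) global mean temperature anomalies
    # local_T  : (n_years, n_lat, n_lon) local temperature anomalies
    n_years = global_T.shape[0]
    X = np.stack([global_T, np.ones(n_years)], axis=1)   # (n_years, 2) design matrix
    Y = local_T.reshape(n_years, -1)                      # (n_years, n_lat * n_lon)
    coefs, *_ = np.linalg.lstsq(X, Y, rcond=None)         # (2, n_lat * n_lon)
    beta = coefs[0].reshape(local_T.shape[1:])
    beta0 = coefs[1].reshape(local_T.shape[1:])
    return beta, beta0

def pattern_scaling_predict(global_T, beta, beta0):
    # Emulated local response T(x, t) = beta(x) * T(t) + beta0(x)
    return global_T[:, None, None] * beta[None] + beta0[None]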
The plain GP model is analogous to the baseline GP emulator in watsonparris2021climatebench, but differs in two aspects: (i) we adopt a simpler construction for the covariance with a Matérn-3/2 kernel with automatic relevance determination, and (ii) our model takes as input global aerosols emissions whereas watsonparris2021climatebench use spatially-resolved aerosols emission maps. Scores are computed over the 2080-2100 period since the start of all SSPs is quite similar. Mean scores are reported in Table <ref>. We find that FaIRGP has on average lower error than baseline models. Figure <ref> shows that the spatial bias patterns of FaIRGP are similar to the bias patterns obtained with the FaIR pattern scaling model. This indicates that the prior has a strong influence on the predicted posterior. Nonetheless, we observe that the posterior correction in FaIRGP helps mitigate the spatial inaccuracies of its prior. This is particularly evident for SSP126 and SSP245 where the magnitude of the spatial bias in FaIRGP is overall smaller than the one for the FaIR pattern scaling model. Regarding the high forcing scenario SSP585, the spatial correction is more subtle. This is due to it becoming an extrapolation task, and as a result, FaIRGP exhibits behavior closer to its pattern scaling prior. Figure <ref> also shows that, as for global temperature emulation, the plain GP model predicts sound surface temperature maps for low and medium forcing scenarios, but struggles at extrapolating over high forcing scenarios. §.§ Emulating anthropogenic aerosols forcing In this experiment, we want to evaluate emulation of temperature changes induced by anthropogenic aerosols emissions. We use the hist-aer experiment from the ClimateBench v1.0 dataset. The hist-aer experiment is generated using NorESM2-LM, using only historical anthropogenic aerosols emissions, and setting long-lived greenhouse gases emissions to zero. We emulate surface temperatures over this scenario using emulators trained on all available historical and SSPs experiments. Figure <ref> illustrates the challenges faced by the baseline pattern scaling model in reproducing the magnitude and spatial patterns of the temperature cooling effect induced by aerosols. The introduction of a correction through FaIRGP improves the overall magnitude of the cooling effect. The predicted spatial temperature pattern with FaIRGP remains nonetheless strongly influenced by the prior spatial pattern. This shows that FaIRGP clearly improves over the pattern scaling baseline at emulating the temperature changes resulting from anthropogenic aerosols emissions. This task is notoriously difficult for pattern scaling models, which excel at emulating responses to greenhouse gas emissions but struggle under strong aerosol forcing scenarios <cit.>. Considering the significant impact of the prior choice on FaIRGP, this fosters advocacy for a prior local response model that goes beyond pattern scaling models. Finally, whilst we only use global aerosols emissions, using spatially-resolved emissions maps should improve emulated temperatures given the strong influence spatial patterns of aerosols have over radiative forcing <cit.>. In addition we note that, for parameters calibrated against NorESM2-LM, FaIR exhibits difficulties in capturing the aerosol forcing, as illustrated in Figure <ref>. This limitation is likely to affect the performance of the pattern scaling model. <ref> provides emulation results with a plain GP model. 
As anticipated, the purely data-driven emulator faces difficulties in emulating surface temperatures solely based on aerosols emissions inputs, primarily because every scenario from its training data includes greenhouse gas emissions. §.§ Emulating radiative forcing from temperatures As in Section <ref>, we propose to probe whether FaIRGP can be used to estimate spatial forcing maps. Since we do not have a spatial forcing model, we simply use a spatially constant forcing prior for (x,t). We update it with spatially-resolved temperature observations from historical and SSPs scenarios. Figure <ref> compares the obtained posterior mean forcing with historical forcing maps simulated with NorESM2-LM. The magnitude of the emulated posterior forcing is relatively conservative and does not reach the same level as the ground truth maps. However, the emulated forcing successfully captures the hemispheric contrast during the 1960-1990 period, as well as the overall temporal increase and large-scale spatial forcing patterns. Despite struggling to reproduce the same magnitude of the forcing, this is encouraging considering that the emulated spatial patterns are solely inferred from temperature patterns. Using a more informative prior for the spatial forcing could easily help improve these results. § DISCUSSION §.§ About the GP approach §.§.§ Comparison to placing Bayesian priors over the SCM parameters A simple way to introduce model variability in simple climate models is to place Bayesian priors over model parameters, such as the carbon cycle feedback terms or the forcing model coefficients <cit.>. Such priors can then be updated with global temperatures observations to formulate posterior distributions. This specifies a probabilistic climate model calibrated against observations. Whilst aligned in spirit with the work proposed in this paper, we argue that our GP-based approach displays several advantages over placing priors on model parameters. Chiefly, FaIRGP formulates analytical expressions for the posterior distribution over forcing and temperatures, which have an intuitive interpretation as the sum of a physics-driven prior and a data-driven correction. In contrast, when placing a Bayesian prior over parameters, we do not in general have access to a probability distribution for temperatures. Therefore, probabilistic emulation may need to be sampling based, and one must resort to more complex Markov-Chain Monte-Carlo (MCMC) techniques to sample from the posterior distribution. Sampling-based approaches have two main shortcomings: (i) a thorough uncertainty quantification requires storing a tremendous amount of scenarios, which can be limited by memory capacity — beusch2022emission require an ensemble of 9 millions emulations; (ii) it reduces the statistical representation of uncertainty to summary statistics such as the mean, standard deviation or quantiles. In contrast, having a closed-form expression for the posterior distribution with GPs allows to analytically conduct probabilistic studies using the full probability density, and additionally draw samples, if needed. Finally, having access to the probability density expression allows to evaluate the likelihoods of observations. This can critically be used as a maximisation objective to tune the model parameters against observations, but also in the context of Bayesian optimisation routines to find optimal emission trajectories to meet climate goals. 
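As an illustration of this tractability, the log marginal likelihood used both for parameter tuning and for evaluating the likelihood of observed temperatures can be computed in closed form. The sketch below is a generic zero-mean implementation; for FaIRGP, the observation vector would be the temperature anomalies minus the prior mean and the Gram matrix would combine the physics-informed covariance with the internal-variability term.

import numpy as np

def gp_log_marginal_likelihood(y, K, noise_var=1e-4):
    # log p(y) = -1/2 y^T (K + noise*I)^{-1} y - 1/2 log|K + noise*I| - n/2 log(2*pi)
    n = y.shape[0]
    L = np.linalg.cholesky(K + noise_var * np.eye(n))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    log_det = 2.0 * np.sum(np.log(np.diag(L)))
    return -0.5 * y @ alpha - 0.5 * log_det - 0.5 * n * np.log(2.0 * np.pi)

Maximising this quantity with respect to the kernel hyperparameters, and possibly the energy balance parameters, is the tuning strategy referred to above.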
§.§.§ Connection with stochastic energy balance models Our work is related to the work of cummins2020optimal, which formulates a stochastic energy balance model by introducing a white noise variability term in the temperature response and the forcing models. This allows them to account for climate internal variability, and formulate a Kalman filtering strategy to obtain maximum likelihood estimators of the energy balance model parameters. Our work similarly introduces a white noise in the temperature response model to account for climate internal variability, which results in an additional temporal Matérn-1/2 covariance term in the prior over temperatures. Whilst an extended discussion goes beyond the scope of this work, it can be shown that in the long term regime, these two approaches are equivalent, and that more broadly, Kalman filtering models are in fact equivalent to temporal GPs with Matérn covariance functions <cit.>. Our work differs from cummins2020optimal in that beyond the stochasticity arising from internal variability, we also introduce a GP prior over the radiative forcing. This GP prior introduces stochasticity over the SCM design, which is not only a function of time, but also of emission levels. Because our modelling is not purely temporal (the GP is also a function of emissions), we cannot employ Kalman filtering strategies and instead choose to use GP regression techniques. §.§.§ Choice of kernel ρ The choice of covariance function ρ in (<ref>) is an important choice that allows the user to incorporate their domain knowledge into the prior over the radiative forcing. Let u and u' be generic notations for input data (in our work, greenhouse gas and aerosols emissions), the kernel ρ(u, u') specifies how will the prior covary between these two inputs. For example, choosing ρ(u, u') = δ(u-u') makes the GP independent at any two inputs. On the other hand, choosing ρ(u, u') = 1 causes the GP to covary equally between any two inputs. The Matérn family are a common family of kernel parameterised by a degree ν. They allow to control for the functional regularity of the GP. For ν = 1/2, draws from the GP are continous functions, for ν=3/2 they are once differentiable, and in the limit ν=∞ they become infinitely differentiable[The Matérn-∞ kernel actually corresponds to the squared exponential kernel.]. A detailed presentation of the Matérn kernels is provided in Appendix <ref>. Additions or multiplications of kernels can be used to construct more elaborate covariance functions that reflect an additive or multiplicative structure in the forcing. Periodic kernels can also be introduced to model seasonality. Going further, more complex choices of kernels include the spectral mixture kernel <cit.> which attemps to learn the spectral density of the data, or even kernels parametrised as neural networks <cit.>. Whilst kernel selection plays a key role in GP regression, our focus in this work is on the development of the FaIRGP framework rather than refining the kernel itself. The kernel is treated throughout as a modular component, with potential for refinement. As a result, we choose to work with a simple Matérn-3/2 kernel with automatic relevance determination throughout. Preliminary findings indicate that the choice of kernel does not significantly degrade the results. Hence, dedicating efforts to constructing more elaborate kernels is likely to yield comparable or better results than our current approach. 
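As an example of such kernel composition, the following sketch combines a Matérn-3/2 kernel with automatic relevance determination over emissions with a periodic kernel in time; all hyperparameter values are arbitrary and the construction is purely illustrative.

import numpy as np

def matern32_ard(X1, X2, sigma2=1.0, lengthscales=None):
    # Matern-3/2 kernel with one lengthscale per input dimension (automatic relevance determination)
    lengthscales = np.ones(X1.shape[1]) if lengthscales is None else lengthscales
    diff = (X1[:, None, :] - X2[None, :, :]) / lengthscales
    r = np.sqrt((diff ** 2).sum(axis=-1))
    return sigma2 * (1.0 + np.sqrt(3.0) * r) * np.exp(-np.sqrt(3.0) * r)

def periodic(t1, t2, sigma2=1.0, period=1.0, lengthscale=1.0):
    # Standard periodic kernel, e.g. to encode seasonality in sub-annual data
    dists = np.abs(t1[:, None] - t2[None, :])
    return sigma2 * np.exp(-2.0 * np.sin(np.pi * dists / period) ** 2 / lengthscale ** 2)

def composed_kernel(E1, E2, t1, t2):
    # Additive composition: emission-driven component plus a seasonal component
    return matern32_ard(E1, E2) + periodic(t1, t2)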
§.§.§ Computational efficiency and scalability We report that to emulate 100 years of surface temperature anomaly with FaIRGP, it takes less than a second for global and spatial emulation on an average personal laptop[16Go memory], without requiring any parallelization methods. Scalability issues are commonly associated with Gaussian Processes (GPs) when the training set grows in size. This is because computing their posterior distribution involves a matrix inversion, which has a cubic computational cost in the number of training samples. Fortunately, unlike neural networks which require large amounts of data <cit.>, GPs excel in scenarios with limited data <cit.>. Consequently, it is possible to develop skilful GP emulators with limited training data. In cases where using a larger training dataset becomes a necessity, one can still employ linear conjugate gradients methods and parallelisation schemes <cit.> to scale exact GPs to millions of data points. Alternatively, sparse approximation techniques can be used to obtain a scalable estimate of the posterior distribution <cit.>. §.§ Climate modelling considerations §.§.§ Beyond FaIR: broader applicability of the method While we have chosen to use FaIR as the backbone climate model for our work, the rationale behind the development of FaIRGP is easily transferable to other commonly used simple climate models such as MAGICC <cit.> or OSCAR <cit.>. These models share linear time invariant dynamics, allowing us to incorporate a GP prior into the forcing term of these dynamics. By doing so, we can obtain a GP-based solution that is informed the model parameters, exactly like in FaIRGP. The dynamical systems of interest can naturally describe the temperature response to radiative forcing, as it is the case in our work. However, we could also imagine extending this to carbon cycle models, where emission levels prescribe the forcing function, and the output of the dynamical system are atmospheric concentrations. This GP framework over dynamical systems is in fact highly general, and has been introduced by alvarez2009latent in the context of dynamical systems where the forcing function is unknown. §.§.§ Pixel independence assumption The pattern scaling model used in our prior effectively uses independent linear regressions at each location to map changes in global mean temperature onto changes in local temperature. However, this modeling approach challenges our intuition as it overlooks the spatial dependence of temperature fields, despite our expectation that temperatures at nearby locations should covary. To address this modeling concern, a common solution is to incorporate spatially correlated innovations into the pattern scaling response, which represent the spatial expression of climate internal variability <cit.>. Alternatively, link2019fldgen design a procedure based on the Wiener-Khinchin theorem <cit.> to emulate a climate variability field with the same variance and spatiotemporal correlation structure as the one in ESMs outputs. From a statistical modeling standpoint, these approaches introduce spatial variability as multivariate Gaussian variables, differing only in their covariance structure. Consequently, they can be easily incorporated into our GP framework. However, for the sake of clarity, we chose not to delve into these additional considerations and instead focus on the exposition of the Bayesian energy balance model. 
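For readers unfamiliar with pattern scaling, the following minimal sketch spells out the independent per-grid-cell regressions assumed in our prior; the array names and shapes are hypothetical, and no spatially correlated innovations are included.

```python
import numpy as np

def fit_pattern_scaling(global_T, local_T):
    """Independent per-grid-cell regressions local_T ≈ a + b * global_T.

    global_T: (n_years,) global mean temperature anomaly.
    local_T:  (n_years, n_lat, n_lon) local temperature anomaly maps.
    Returns intercept and slope maps of shape (n_lat, n_lon)."""
    X = np.column_stack([np.ones_like(global_T), global_T])   # (n_years, 2)
    Y = local_T.reshape(len(global_T), -1)                    # (n_years, n_cells)
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)              # (2, n_cells)
    a = coef[0].reshape(local_T.shape[1:])
    b = coef[1].reshape(local_T.shape[1:])
    return a, b

def apply_pattern_scaling(a, b, global_T_scenario):
    """Scale an emulated global temperature pathway into local maps, one cell at a time."""
    return a[None, :, :] + b[None, :, :] * global_T_scenario[:, None, None]
```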
Further, whilst it has been pointed out that, in general, regional changes in temperature scale robustly with global temperature <cit.>, this may not be true under strong mitigation scenarios or under strong aerosol forcing <cit.>. This fosters advocacy for development of spatialised simple climate models going beyond pattern scaling. §.§.§ Opportunities for precipitation emulation Going beyond surface temperature emulation, we can explore how FaIRGP can be used to emulate precipitations. One approach is to combine the Gaussian process GP emulator for precipitation described in watsonparris2021climatebench with FaIRGP. By leveraging Gaussian conjugacy relationships, a natural cross-covariance between the two emulators is induced. This enables the exchange of information between precipitation and temperature fields. This is in line with recent advocacy for joint emulation of temperatures and precipitations <cit.>. Another option is to use the emulated temperatures from FaIRGP as input for a statistical emulator of precipitation, such as a Gamma regression model <cit.>. Indeed, many climate impacts are routinely assumed to be a function of temperatures. Additionally, the full probability distribution predicted by FaIRGP can be incorporated into the model, allowing for the propagation of epistemic uncertainty associated with FaIRGP. §.§.§ Application to detection and attribution With FaIRGP, we have access to the analytical expression of the probability density distribution of emulated temperatures. Therefore, we can emulate surface temperatures under historical scenarios, both with and without anthropogenic forcing, and analytically compute the probability of temperature occurrences in each scenario. By comparing these probabilities, we can assess the extent to which human activity has made a certain temperature range more likely, enabling us to conduct attribution studies. Conducting detection and attribution studies with emulators is not exclusive to FaIRGP, and could in principle be conducted with any emulator as discussed in watsonparris2021climatebench. However, the strength of FaIRGP lies in its ability to input temperature ranges directly into a known probability density function, providing a precise probability between 0 and 1 of such temperatures to occur under a given emission scenario. § CONCLUSION AND OUTLOOKS Simple climate models (SCMs) are robust physically-motivated emulators of changes in global mean surface temperatures. Gaussian processes (GPs) are powerful Bayesian machine learning model capable of learning complex relationships, and emulate from data how changes in emissions affect changes in surface temperatures. By combining them together, we reconcile these two paradigms of emulator design, which mutually address their respective limitations. We introduce FaIRGP, a Bayesian energy balance model that (i) maintains the robustness and interpretability of a simple climate model, (ii) gains the flexibility of modern statistical machine learning models with the ability to learn from data, and (iii) provides principled uncertainty quantification over the emulator design. We demonstrate skilful emulation of global mean surface temperatures over realistic emission scenarios. Unlike GPs, FaIRGP has a robust physical grounding which allows it to provide reliable predictions even on out-of-sample scenarios. On the other hand, unlike SCMs, FaIRGP can learn complex non-linear relationships to deviate from an SCM and improve predictions. 
In particular, FaIRGP better accounts for the temperature response to anthropogenic aerosols emissions. We further show that these findings carry over for the task of emulating spatially-resolved surface temperature maps. In addition, we find that FaIRGP can also be used to produce estimates of top-of-atmosphere radiative forcing given temperature observations. The full mathematical tractability, with analytical expressions for probability distributions, provides great control over the modelling, and a rich framework to reason about probability distributions over temperatures. This is of great relevance to detection and attribution studies. Further, whilst our work focuses on temperature emulation — which have already been thoroughly studied — we envision FaIRGP as a foundation for the development of robust data-driven emulators for more complex climate variables, such as precipitations. Harnessing the mathematical properties of GPs, we believe that emulating climate impacts using FaIRGP will provide additional control over pure machine learning methods, whilst being able to capture complex non-linear relationships to forcing. We hope this work will contribute to building trust in data-driven models, and thereby allow the climate science community to benefit more widely from their potential. Shahine Bouabid receives funding from the European Union’s Horizon 2020 research and innovation programme under Marie Skłodowska-Curie grant agreement No 860100. § SUPPORTING MATERIALS FOR FAIRGP DERIVATION §.§ Useful results Let a, b > 0 and u, v ≥ 0. We have (a + b) min(u, v) - (au + bv) = -b|u - v| if u ≤ v -a|u - v| if u ≥ v. If u ≤ v, (a + b) min(u, v) - (au + bv) = (a + b) u - au - bv = bu - bv = -b |u - v|. If u ≥ v, (a + b) min(u, v) - (au + bv) = (a + b) v - au - bv = av - au = -a |u - v|. §.§ Derivation of FaIRGP Consider a k-box EBM specified by the stochastic temperature response model (t) = (t)ṭ + (t)ṭ, where (t) ∼(F, K), a forcing feedback vector = [ 1/C_1 0 … 0 ]^⊤, and a forcing feedback matrix given by !A = [ -(κ_1 + κ_2)/C_1 κ_2/C_1 0 … 0 0 0; κ_2/C_2 -(κ_2+κ_3)/C_2 κ_3/C_2 … 0 0 0; 0 κ_3/C_3 -(κ_3+κ_4)/C_3 … 0 0 0; ⋮ ⋮ ⋮ ⋱ ⋮ ⋮ ⋮; 0 0 0 … -(κ_k-2 + κ_k-1)/C_k-2 κ_k-1/C_k-2 0; 0 0 0 … κ_k-1/C_k-1 -(κ_k-1 + ϵκ_k)/C_k-1 ϵκ_k/C_k-1; 0 0 0 … 0 κ_k/C_k -κ_k/C_k. ] Denote = ^-1 the diagonalisation of the feedback matrix, where is a diagonal matrix with diagonal elements = Diag(D_1, …, D_k). Then we have (t) = (t)ṭ + (t)ṭ ⇒ ^-1(t) = ^-1(t)ṭ + ^-1(t)ṭ ⇒ [^-1(t)] = [^-1(t)]ṭ + [^-1](t) ṭ. Therefore, by using the notations from <cit.> D_i = -1/d_i ^-1(t) = [ _1(t) … _k(t) ]^⊤ ^-1 = [ q_1/d_1 … q_k/d_k ]^⊤, the diagonalised system can be written as an impulse response system, where for i ∈{1, …, k} we have _i(t) = -1/d_i_i(t)ṭ + q_i/d_i(t)ṭ = 1/d_i(q_i (t) - _i(t))ṭ. Consider now the thermal response _i(t) of the ith SDE. This thermal response is given by _i(t) = q_i/d_i∫_0^t (s)e^-(t-s)/d_iṣ, where we recall that (t) ∼(F, K). Because GPs are closed under linear transformations, _i(t) must also be GPs for all i∈{1, …, k}. Their mean functions can be computed following m_i(t) := [_i(t)] = [q_i/d_i∫_0^t (s)e^-(t-s)/d_iṣ] = q_i/d_i∫_0^t [(s)]e^-(t-s)/d_iṣ = q_i/d_i∫_0^t F(s)e^-(t-s)/d_iṣ. Their cross-covariance functions can be computed following k_ij(t, t') := Cov(_i(t), _j(t')) = Cov(q_i/d_i∫_0^t (s)e^-(t-s)/d_iṣ, q_j/d_j∫_0^t'(s')e^-(t'-s')/d_jṣ') = q_i q_j/d_i d_j∫_0^t∫_0^t'Cov((s), (s')) e^-(t-s)/d_ie^-(t'-s')/d_jṣṣ' = q_i q_j/d_i d_j∫_0^t∫_0^t' K(s, s') e^-(t-s)/d_ie^-(t'-s')/d_jṣṣ'. 
So we have _i(t)∼(m_i, k_ii) for any i∈{1, …, k}. Finally, if we define (t) = ∑_i=1^k _i(t), since the sum of Gaussians is still a Gaussian, we know that (t) must also be a GP. And we can compute its mean function following m_(t) := [(t)] = [∑_i=1^k_i(t)] = ∑_i=1^k[_i(t)] = ∑_i=1^k m_i(t), and its covariance function following k_(t, t') := Cov((t), (t')) = Cov(∑_i=1^k _i(t), ∑_j=1^k _j(t')) = ∑_i=1^k∑_j=1^k Cov(_i(t), _j(t')) = ∑_i=1^k∑_j = 1^k k_ij(t,t'). We conclude that { (t) ∼(m_, k_) m_(t) = ∑_i=1^k m_i(t) k_(t, t') = ∑_i=1^k∑_j = 1^k k_ij(t,t'). . §.§ Accounting for climate internal variability in FaIRGP Consider again the same problem, but introducing an additional white noise term in the temperature response SDE (t) = (t)ṭ + (t)ṭ + σ(t), where (t) denotes the standard Brownian motion. Following the same derivation steps we get (t) = (t)ṭ + (t)ṭ + σ(t) ⇒ ^-1(t) = ^-1(t)ṭ + ^-1 F(t)ṭ + ^-1(t) ⇒ [^-1(t)] = [^-1(t)]ṭ + [^-1](t) ṭ + σ[^-1] (t). which gives in impulse response form _i(t) = 1/d_i(q_i (t) - _i(t))ṭ + σq_i/d_i(t),∀ i∈{1, …, k}. The solution to the ith SDE is now given by _i(t) = q_i/d_i∫_0^t (s)e^-(t-s)/d_iṣ__i^∘(t) + σq_i/d_i∫_0^t e^-(t-s)/d_i(s)_η_i(t) = _i^∘(t) + ση_i(t). _i^∘(t) corresponds to the GP we have obtained in the previous derivation without white noise, i.e. _i^∘(t)∼(m_i, k_ii). η_i(t) is also a GP, with mean zero, and cross-covariance function given by Cov(η_i(t), η_j(t')) = [(η_i(t) - [η_i(t)])(η_j(t') - [η_j(t')])] = [η_i(t)η_j(t')] = [(q_i/d_i∫_0^t e^-(t-s)/d_i(s))(q_j/d_j∫_0^t' e^-(t'-s')/d_j(s'))] = q_iq_j/d_id_je^-t/d_i - t'/d_j[∫_0^t e^s/d_i(s)∫_0^t'e^s'/d_j(s')] = q_iq_j/d_id_j e^-t/d_i - t'/d_j∫_0^min(t, t')e^(1/d_i + 1/d_j)sṣ = q_iq_j/d_id_j e^-t/d_i - t'/d_jd_id_j/d_i + d_j(e^d_i + d_j/d_id_jmin(t, t') - 1) = q_iq_j/d_i + d_j e^-(d_j t + d_i t') / d_id_j(e^d_i + d_j/d_id_jmin(t, t') - 1) = q_iq_j/d_i + d_j(e^-|t-t'| / d_i - e^-(d_j t + d_i t') / d_id_j) if t ≤ t' q_iq_j/d_i + d_j(e^-|t-t'| / d_j - e^-(d_j t + d_i t') / d_id_j) if t ≥ t' (Lemma <ref>) ∼q_iq_j/d_i + d_j e^-|t-t'| / d_i if t ≤ t' q_iq_j/d_i + d_je^-|t-t'| / d_j if t ≥ t' when t≫ d_i or t'≫ d_j. Therefore, if we define γ_ij(t, t') = q_iq_j/d_i + d_jexp(-|t-t'|/d_i 1_{t ≤ t'} + d_j 1_{t > t'}), which simplifies when i = j to γ_i(t, t') = γ_ii(t, t') = q_i^2/2d_iexp(-|t-t'|/d_i), we obtain that in the long time regime, we can approximate η_i(t)∼(0, γ_i). And because _i^∘(t) and η_i(t) are independent processes, we obtain that _i(t) ∼(m_i, k_ii + σ^2 γ_i). Finally, if we take again (t) = ∑_i=1^k _i(t), then (t) must be a GP. Its mean function is given by m_(t) := [(t)] = [∑_i=1^k_i(t)] = ∑_i=1^k[_i(t)] = ∑_i=1^k m_i(t), and its covariance function is given by k_(t, t') := Cov((t), (t')) = Cov(∑_i=1^k _i(t), ∑_j=1^k _j(t')) = ∑_i, j = 1^k Cov(_i(t), _j(t')) = ∑_i, j = 1^k Cov(_i^∘(t) + ση_i(t), _j^∘(t') + ση_j(t')) = ∑_i, j = 1^k Cov(_i^∘(t), _j^∘(t')) + σ^2Cov(η_i(t), η_j(t')) (_i^∘, _j^∘η_i, η_j) = ∑_i, j = 1^k k_ij(t, t') + σ^2 ∑_i, j = 1^k γ_ij(t, t'). However, if t ≤ t' we have ∑_i,j=1^k γ_ij(t, t') = ∑_i,j=1^k q_iq_j/d_i + d_jexp(-|t-t'|/d_i) = ∑_i=1^k (∑_j=1^k q_iq_j/d_i + d_j)exp(-|t-t'|/d_i) = ∑_i=1^k 2d_i/q_i^2(∑_j=1^k q_iq_j/d_i + d_j)_ν_iq_i^2/2d_iexp(-|t-t'|/d_i) = ∑_i=1^k ν_iγ_i(t, t'). When t ≥ t' we can refactor terms similarly into a sum over j. By symmetry of the indices we conclude that { (t) ∼(m_, k_) m_(t) = ∑_i=1^k m_i(t) k_(t, t') = ∑_i,j=1^k k_ij(t, t') + σ^2 ∑_i=1^k ν_i γ_i(t, t') ν_i = ∑_j=1^k 2d_iq_j/q_i(d_i + d_j) . 
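The boxed expressions above can be evaluated numerically once a forcing prior mean F and covariance K are chosen. The sketch below does so by trapezoidal quadrature for a two-box model; the box parameters q_i and d_i, the toy forcing prior and the internal-variability scale σ are placeholders rather than calibrated values.

```python
import numpy as np

# Two-box impulse-response parameters; q_i, d_i and the forcing prior below are
# illustrative assumptions, not the calibrated FaIR values.
q = np.array([0.33, 0.41])    # K (W m^-2)^-1
d = np.array([4.1, 249.0])    # years

def forcing_mean(s):
    return 0.02 * s                                   # toy prior mean forcing pathway F(s)

def forcing_cov(s, sp):
    return 0.25 * np.exp(-np.abs(s - sp) / 10.0)      # toy prior forcing covariance K(s, s')

def prior_mean(t, n_quad=200):
    """m(t) = sum_i q_i/d_i * int_0^t F(s) exp(-(t-s)/d_i) ds, by trapezoidal quadrature."""
    s = np.linspace(0.0, t, n_quad)
    return sum(np.trapz(qi / di * forcing_mean(s) * np.exp(-(t - s) / di), s)
               for qi, di in zip(q, d))

def prior_cov(t, tp, n_quad=100):
    """k(t,t') = sum_ij q_i q_j/(d_i d_j) * double integral of K(s,s') exp(-(t-s)/d_i) exp(-(t'-s')/d_j)."""
    s, sp = np.linspace(0.0, t, n_quad), np.linspace(0.0, tp, n_quad)
    S, SP = np.meshgrid(s, sp, indexing="ij")
    total = 0.0
    for qi, di in zip(q, d):
        for qj, dj in zip(q, d):
            integrand = forcing_cov(S, SP) * np.exp(-(t - S) / di) * np.exp(-(tp - SP) / dj)
            total += qi * qj / (di * dj) * np.trapz(np.trapz(integrand, sp, axis=1), s)
    return total

def internal_variability_cov(t, tp, sigma=0.1):
    """sigma^2 * sum_i nu_i gamma_i(t,t'), with gamma_i and nu_i as defined above."""
    nu = np.array([np.sum(2.0 * d[i] * q / (q[i] * (d[i] + d))) for i in range(len(q))])
    gamma = q**2 / (2.0 * d) * np.exp(-np.abs(t - tp) / d)
    return sigma**2 * np.sum(nu * gamma)

# Example: prior mean and variance of the temperature anomaly 50 years into a scenario.
mean_50 = prior_mean(50.0)
var_50 = prior_cov(50.0, 50.0) + internal_variability_cov(50.0, 50.0)
```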
§.§ Analytical expression for FaIRGP probability distribution A key benefit of FaIRGP is its complete mathematical tractability. This tractability in particular includes access to the analytical expression of: (i) the marginal probability distribution over observations, which can be used as an objective to tune the model hyperparameters and (ii) of the probability distribution predicted over emulated temperatures, which can be useful for downstream applications of emulation. In what follows, we assume access to a training set = {, , } of size n, where we follow the notational conventions from Section <ref>. §.§.§ Analytical expression of the marginal log-likelihood Using the prior mean and covariance of the FaIRGP prior over (t), we define a prior mean vector and a prior covariance matrix over , given by = m_() = [ m_(t_1); ⋮; m_(t_n) ] = k_(, ) = [ k_(E_i, E_j) ]_1 ≤ i, j ≤ n. Further, consider the temperatures internal variability covariance matrix defined by = γ_(, ) = [ γ_(t_i, t_j) ]_1 ≤ i, j ≤ n. Then the prior distribution over temperatures is exactly given by the multivariate normal distribution (, + σ^2 ). Let us denote _σ^2 = + σ^2 for conciseness. Then, we can exactly evaluate the marginal probability of ∈^n under the prior distribution following p( | , ) = 1/√((2π)^n (_σ^2))exp(-1/2 ( - )^⊤_σ^2^-1( - )), which can be used as maximisation objective to tune model hyperparameters such that the observed temperatures have the greatest possible likelihood under the prior. In practice, we prefer working with the marginal log-likelihood for computational stability. It is given by closed-form by log p(|, ) = -1/2{n log(2π) + log(_σ^2) + ( - )^⊤_σ^2^-1( - )} §.§.§ Analytical expression of the posterior distribution Suppose that we want to emulate the global temperature response for a different emission scenario where we have access to greenhouse gas and aerosols emission data E_1^*, …, E^*_m at observation times t_1^* < … < t^*_m. We concatenate them into ^* = [ t_1^*; ⋮; t_m^* ]∈^m, ^* = [ E_1^*; ⋮; E_m^* ]∈^m× d, where d ≥ 0 is the number of emission agents χ_1, …, χ_d we observe emissions from. Using the posterior mean and covariance functions when (t) is updated with the training set , we define a posterior mean vector and posterior covariance matrix over ^*, ^* given by ^* = m̅_(^*) ^* = k̅_(^*, ^*). Further, consider the emulated temperatures internal variability covariance matrix defined by ^* = γ_(^*, ^*). Then, the posterior distribution over emulated temperatures is exactly given by the multivariate normal distribution (^*, ^* + σ^2^*). Let us denote ^*_σ^2 = ^* + σ^2^* for conciseness. This means that for a given temperature vector ^*∈^m, we can exactly evaluate the probability of ^* under the predicted distribution following p(^* | , ^*, ^*) = 1/√((2π)^m (^*_σ^2))exp(-1/2 (^* - ^*)^⊤(_σ^2^*)^-1(^* - ^*)). The vector ^* may be a retained test scenario, in which case computing p(^* | , ^*, ^*) provides an evaluation of the emulated posterior (^*, ^*_σ^2). ^* may also simply correspond to temperature for which we would like to assess the probability under emission scenario {^*, ^*}. A natural application is in detection attribution studies. Indeed, we can take ^* to be observed historical temperatures, and {^*, ^*} to be a counterfactual historical scenario without anthropogenic forcing. By evaluating p(^* | , ^*, ^*) we would be able to assess to the probability of historical temperatures observations to occur in a scenario without anthropogenic forcing. 
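As a sketch of how this could look in practice, the snippet below evaluates the log-likelihood of an observed temperature vector under two emulated scenarios and compares them; the posterior means, covariances and internal-variability matrices are assumed to have been computed beforehand, and all variable names are hypothetical.

```python
from scipy.stats import multivariate_normal

def scenario_log_likelihood(T_obs, mean, cov, internal_var_cov, sigma2):
    """log p(T_obs | scenario) for the closed-form Gaussian emulator distribution."""
    return multivariate_normal(mean=mean, cov=cov + sigma2 * internal_var_cov,
                               allow_singular=True).logpdf(T_obs)

# Hypothetical usage: compare historical observations against the emulated distributions
# for an all-forcings scenario and a counterfactual scenario without anthropogenic forcing.
# log_all = scenario_log_likelihood(T_hist, mu_all, K_all, Gamma_all, sigma2)
# log_nat = scenario_log_likelihood(T_hist, mu_nat, K_nat, Gamma_nat, sigma2)
# log_ratio = log_all - log_nat   # > 0: observations are more probable with anthropogenic forcing
```

The same density evaluation can serve as the maximisation objective for hyperparameter tuning or inside a Bayesian optimisation routine.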
The posterior distribution can also be used to draw a sample if needed following = (^*_σ^2)^1/2 + ^*, ∼(0, _m). Finally, whilst the above is formulated for global mean surface temperatures, we can also derive analytical forms for the posterior distribution over the radiative forcing and for spatially-resolved emulation. § COMPLEMENTARY EXPERIMENTAL RESULTS §.§ Comparison of plain GP baseline with ClimateBench GP emulator from watsonparris2021climatebench §.§.§ Modelling differences The plain GP baseline we use is analogous to the GP emulator from ClimateBench <cit.>, but differs in two aspects. First, we use in our plain GP baseline a covariance structure with automatic relevance determination, denoted as ρ(E, E') = C_3/2(E, E'), where C_3/2 denotes the Matérn-3/2 covariance (see Appendix <ref>). This kernel introduces a different lengthscale parameter ℓ_χ for each atmospheric agent χ, which is tuned through maximizing the marginal loglikelihood. It is explicitely given by ρ(E, E') = C_3/2(E, E') = (1 + √(3 ∑_χ(E^χ - E'^χ)^2/ℓ_χ^2))exp(-√(3 ∑_χ(E^χ - E'^χ)^2/ℓ_χ^2)), where E^χ denotes global emission level for agent χ∈{CH_4, SO_2, BC}, while for χ = CO2, it denotes global cumulative emission levels. In contrast, watsonparris2021climatebench adopt an additive kernel structure given by ρ(E, E') = ∑_χσ_χ C_3/2(E^χ, E'^χ) = ∑_χσ_χ(1 + √(3)|E^χ - E'^χ|/ℓ_χ)exp(-√(3)|E^χ - E'^χ|/ℓ_χ). This additive structure introduces additional complexity through the variance terms σ_χ, which modulate the of each atmospheric agent to the covariance, and need to be tuned alongside the lengthscales ℓ_χ. From a functional perspective, choosing an additive kernel is equivalent to representing the temperature response as the sum of independent GPs, where each GP model the response to a single atmospheric agent. Further details on the covariance function be found in <cit.>. Second, in our plain GP baseline E^SO_2 and E^BC correspond to global emission levels, whereas watsonparris2021climatebench used the 5 principal components of spatial emission maps for SO2 and BC. We anticipate that spatially-resolved inputs for aerosols emissions should improve the model's ability to predict spatially-resolved surface temperatures features. §.§.§ Predictive performance comparison To compare the predictive performance against our plain GP and FaIRGP, we evaluate the ClimateBench GP for the emulation of global and spatial mean surface temperature anomaly and report scores in Table <ref>. All emulators are trained on the same training data : historical, SSP126, SSP370, SSP585. Predictive performance is evaluated for the emulation of SSP245 since this is the test data used in <cit.>. Overall, FaIRGP demonstrates improved scores compared to the ClimateBench baseline GP across various metrics, both in terms of global and spatial emulation. In terms of global emulation, the predictive performance of the Plain GP appears to be better than that of the ClimateBench GP. However, when it comes to spatial surface temperature emulation, the ClimateBench GP outperforms the Plain GP. This improvement can likely be attributed to the utilization of spatially-resolved aerosol information as input in the ClimateBench GP. blo §.§ Emulating anthropogenic aerosols forcing § COMPLEMENTARY MATERIAL ON GAUSSIAN PROCESSES §.§ Additional illustrations for Gaussian processes We provide here additional illustrations to develop intuition on how GPs can be used for a regression task. Consider a simple regression problem y = (x) + where ∼(0, σ^2). 
Let k(x, x') be a user-specified covariance function, and suppose we place a prior (x) ∼(0, k). Figure <ref> plots the mean function and the 95% credible interval that results from our choice of prior over the regression function. Since this is effectively a probability distribution over functions, we can draw samples from it. Figure <ref> shows the plots of 10 functions samples from this distribution. Figure <ref> shows how the prior can be updated with observations of y, thereby updating it into a posterior. Because the posterior is still a probability distribution, we can still draw samples from it as shown in Figure <ref>. However, the functions drawn are now constrained by the data and provide a better fit to observations. The posterior GP therefore induces a probability distribution over functions that provide a sound fit to observations §.§ Matérn covariance The Matérn covariances are a class of stationary covariance functions widely used in spatial statistics. The Matérn-ν covariance between two points x, x'∈ is given by C_ν(x, x') = 2^1-ν/Γ(ν)(√(2ν)|x-x'|/ℓ)^νK_ν(√(2ν)|x-x'|/ℓ), where Γ is the gamma function, K_ν is the modified Bessel function and ℓ > 0 is a lengthscale hyperparameter. The covariance function expression considerably simplifies for ν = p + 1/2 where p∈. For example, for ν=1/2 (p=0) and ν=3/2 (p=1) we have C_1/2(x, x') = exp(-|x-x'|/ℓ) C_3/2(x, x') = (1 + √(3)|x-x'|/ℓ)exp(-√(3)|x-x'|/ℓ) When x, x'∈^d, the distance |x - x'| can be substituted by the norm x - x' = √(∑_i=1^d (x_i - x_i')^2). The covariance is called an automatic relevance determination (ARD) kernel when each dimension has its own independent lengthscale parameter ℓ_i > 0. For example, the Matérn-1/2 and Matérn-3/2 ARD kernel write C_1/2(x, x') = exp(-√(∑_i=1^d (x_i - x_i')^2/ℓ_i^2)) C_3/2(x, x') = (1 + √(3∑_i=1^d (x_i - x_i')^2/ℓ_i^2))exp(-√(3∑_i=1^d (x_i - x_i')^2/ℓ_i^2)). When a Matérn-p + 1/2 covariance function is used as a kernel for a GP, draws from the GP are p times continuously differentiable (with convention that 0 times means simply continuous).
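For completeness, a direct NumPy implementation of the Matérn-1/2 and Matérn-3/2 ARD covariances written out above; the input shapes and example values are assumptions made for illustration.

```python
import numpy as np

def matern_ard(X, Y, lengthscales, nu=1.5):
    """Matérn-1/2 or Matérn-3/2 covariance with one lengthscale per input dimension (ARD).

    X: (n, d) and Y: (m, d) input arrays, lengthscales: (d,). Returns an (n, m) matrix."""
    diff = (X[:, None, :] - Y[None, :, :]) / lengthscales
    r = np.sqrt(np.sum(diff**2, axis=-1))
    if nu == 0.5:
        return np.exp(-r)
    if nu == 1.5:
        return (1.0 + np.sqrt(3.0) * r) * np.exp(-np.sqrt(3.0) * r)
    raise ValueError("only nu = 0.5 and nu = 1.5 are implemented in this sketch")

# Example: covariance between two hypothetical sets of emission vectors (d = 4 agents).
E1, E2 = np.random.rand(5, 4), np.random.rand(3, 4)
K = matern_ard(E1, E2, lengthscales=np.ones(4), nu=1.5)
```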
http://arxiv.org/abs/2307.05302v1
20230711144803
Robust design under uncertainty in quantum error mitigation
[ "Piotr Czarnik", "Michael McKerns", "Andrew T. Sornborger", "Lukasz Cincio" ]
quant-ph
[ "quant-ph" ]
Institute of Theoretical Physics, Jagiellonian University, Krakow, Poland. Mark Kac Center for Complex Systems Research, Jagiellonian University, Kraków, Poland Information Sciences, Los Alamos National Laboratory, Los Alamos, NM, USA. Information Sciences, Los Alamos National Laboratory, Los Alamos, NM, USA. Quantum Science Center, Oak Ridge, TN 37931, USA. Theoretical Division, Los Alamos National Laboratory, Los Alamos, NM, USA. Quantum Science Center, Oak Ridge, TN 37931, USA. Error mitigation techniques are crucial to achieving near-term quantum advantage. Classical post-processing of quantum computation outcomes is a popular approach for error mitigation, which includes methods such as Zero Noise Extrapolation, Virtual Distillation, and learning-based error mitigation. However, these techniques have limitations due to the propagation of uncertainty resulting from a finite shot number of the quantum measurement. To overcome this limitation, we propose general and unbiased methods for quantifying the uncertainty and error of error-mitigated observables by sampling error mitigation outcomes. These methods are applicable to any post-processing-based error mitigation approach. In addition, we present a systematic approach for optimizing the performance and robustness of these error mitigation methods under uncertainty, building on our proposed uncertainty quantification methods. To illustrate the effectiveness of our methods, we apply them to Clifford Data Regression in the ground state of the XY model simulated using IBM's Toronto noise model. Robust design under uncertainty in quantum error mitigation Lukasz Cincio August 12, 2023 =========================================================== § INTRODUCTION Quantum computers promise to outperform the best classical computers. Such quantum advantage has already been claimed for some tasks <cit.>. Nevertheless, the potential of current gate-based quantum computers is severely limited due to decoherence and imperfect implementations of quantum gates, so-called hardware noise <cit.>. It is commonly expected that in future devices Quantum Error Correction (QEC) will enable fault tolerant quantum computation with errors continuously corrected as a computation is executed. However, successfully implementing QEC requires multiple, high-fidelity qubits to encode a single logical qubit. Although initial implementations of error correction codes have been demonstrated <cit.>, QEC at a scale resulting in quantum advantage requires substantial further improvement in quantum hardware. Consequently, techniques reducing the impact of errors without performing QEC are crucial to obtain a near-term quantum advantage. Error mitigation methods are techniques for reducing errors in near-term quantum hardware. They can be applied on devices with larger error rates and smaller qubit numbers than required by QEC <cit.>. Various error mitigation techniques have been proposed, including dynamical decoupling <cit.>, measurement error mitigation <cit.>, and noise-aware circuit compilation <cit.>. A widely-used approach to error mitigation aims to correct noisy expectation values of observables of interest with classical post-processing of measurement outcomes <cit.>. Examples of such methods are Zero Noise Extrapolation (ZNE) <cit.>, Virtual Distillation <cit.> and learning-based error mitigation <cit.>. ZNE measures an observable of interest at multiple noise strengths and extrapolates it to the zero-noise limit. 
Virtual Distillation uses multiple copies of a noisy quantum state to “distill” its purer version suppressing incoherent errors. Classical post-processing of measurements of such purified states is used to obtain mitigated expectation values for observables of interest. Learning-based error mitigation uses classically simulable quantum circuits similar to a circuit of interest to train an ansatz that corrects effects of noise on expectation values of observables. Among other approaches to error mitigation utilizing the classical post-processing are quasi-probabilistic error decomposition <cit.>, verified phase estimation <cit.>, truncated Neumann series <cit.> and application-specific approaches leveraging symmetries of a circuit of interest <cit.>. A fundamental limitation of the power of such error mitigation techniques is shot noise. Noisy expectation values are estimated using a finite number of state measurements called shots. Due to finite shot numbers the accuracy of these estimates is limited. This effect is called shot noise uncertainty. Shot noise uncertainty propagates through a classical post-processing procedure affecting error-mitigated expectation values. It is well-known that error-mitigated observables typically have larger shot noise uncertainty than their noisy counterparts <cit.>. Furthermore, for a wide class of error-mitigation protocols, the number of shots required for a given uncertainty grows exponentially with circuit depth fundamentally limiting the power of error mitigation <cit.>. In the worst case, this growth is even faster <cit.>. Moreover, while error mitigation reduces bias caused by noise, it can introduce subtler biases. For example, ZNE performed with imperfect noise strength control or improper choice of extrapolation method can result in biased outcomes. Similarly, coherent errors in the case of Virtual Distillation and poor choice of training circuits for learning-based error mitigation produce bias. Taking into account these limitations, it is crucial to account for the outcome uncertainty while applying and designing error mitigation methods to correct noisy observables. While methods to estimate shot uncertainty are known for some particular techniques <cit.>, no general approach to quantify the uncertainty of error-mitigated results or to optimize the robustness of the error mitigation under uncertainty have been proposed. In this work, we fill this gap by introducing such methods, as shown schematically in Fig. <ref>. We build upon a rigorous framework for uncertainty quantification based on the observation that, given a set of assumptions and information about the problem, there exist optimal bounds on uncertainties that are obtained as values of well-defined optimization problems corresponding to extremizing probabilities of failure, or of deviations, subject to the constraints imposed by the scenarios compatible with the assumptions and information <cit.>. In particular, this framework is structured to not implicitly impose inappropriate assumptions, nor repudiate relevant information, and thus is well-suited for rigorous calculations of statistical quantities and optimal bounds on statistical quantities integral for the robust design of complex systems <cit.>. We introduce methods for robust design under uncertainty in Section <ref>. Next, we provide a proof of principle demonstration. We briefly introduce an error mitigation method (Clifford Data Regression) used here in Section <ref>. 
We describe our test-case error mitigation experiment in Section <ref>. In Section <ref>, we perform uncertainty quantification for this application, while in Section  <ref>, we demonstrate robust error mitigation design for our system. We conclude in Section <ref>. Additional examples of the application of our method are provided in App. <ref>, <ref>. § ROBUST DESIGN UNDER UNCERTAINTY FOR ERROR MITIGATION Here, we are mitigating O^ noisy - a noisy estimate of an expectation value. O is a given observable computed for a circuit of interest. We assume that an error-mitigated estimate of this expectation value, O^ mitigated, is obtained by classical post-processing of the expectation values of noisy observables obtained from the circuit of interest or its modifications. As these noisy expectation values are estimated with a finite shot number, they are random variables that can be characterized by probability distributions. Consequently, O^ mitigated is also a random variable characterized by a probability distribution, P(O^ mitigated). To reliably quantify the uncertainty of O^ mitigated, one needs to estimate properties of P(O^ mitigated). At present, only algorithms to estimate the variance of O^ mitigated for some error mitigation methods <cit.> have been proposed. Furthermore, these algorithms frequently assume that P(O^ mitigated) is a Gaussian distribution. This assumption is not valid for many error mitigation protocols. For instance, a ratio of Gaussian distributions gives rise to a Cauchy probability distribution, which is heavy-tailed, and does not have a well-defined variance <cit.>. Such a ratio of observables gives O^ mitigated in the case of Virtual Distillation <cit.>. The most reliable way to calculate the expected values of statistical quantities of unknown distributions like P(O^ mitigated) is through sampling, while the extremal values of its statistical quantities are most robustly calculated through a minimization (or maximization) over the parameterized distribution. We use this approach to estimate the expected value and variance of O^ mitigated, and also to accurately capture the behavior at the tails of its distribution. Producing the correct behavior at both the expected and extremal values of a distribution are crucial for robust design. In this work, we demonstrate the feasibility of this approach to uncertainty quantification of error mitigation, by sampling a distribution of the relative error of O^ mitigated defined as η = 2|O^ exact-O^ mitigated|/|O^ exact+O^ mitigated|, where O^ exact is an exact expectation value of the observable of interest for the circuit of interest. A statistical characterization of P(O^ mitigated)'s distribution, or the distribution of its error, is crucial for understanding the behavior of the system under uncertainty, and enables us to design error mitigation methods that are thus robust under uncertainty. For instance, one can consider the balance of error mitigation bias versus uncertainty for a given shot budget, N_s^ tot. In particular, in the case of ZNE, higher-order polynomial extrapolation leads to a lower bias while resulting in larger shot uncertainty for a fixed shot budget <cit.>. Similarly, in the case of Virtual Distillation, increasing the number of state copies results in better suppression of coherent errors increasing shot uncertainty <cit.>. For learning-based error mitigation, an analogous effect occurs concerning the expressive power of the ansatz used to correct noisy observables <cit.>. 
Therefore, error mitigation methods considered here usually have hyperparameters, like ZNE's extrapolation method, Virtual Distillation copy number, or learning-based error mitigation ansatz choice that can be adjusted to optimize performance or robustness. It has been demonstrated that the choice of such hyperparameters affects error mitigation performance significantly <cit.>. While heuristic hyperparameter choices have been proposed <cit.>, no systematic methods of the hyperparameter choice have so far been introduced. In this work, we propose to use O^ mitigated's distributional properties or the distribution of its error to determine uncertainty and quality of error mitigation as a function of its hyperparameters and to find the optimal hyperparameter values. For example, one can consider ⟨η⟩ and estimate it for given hyperparameter values sampling error mitigation outcomes. In the spirit of a variational quantum algorithm, one can minimize ⟨η⟩ using a classical optimization algorithm in a feedback loop with the sampling that uses ⟨η⟩ as the cost function <cit.>. Similarly, one can optimize other properties of O^ mitigated or η distributions. Such an approach enables one to use state-of-the-art classical algorithms to maximize and minimize ⟨η⟩ with respect to hyperparameter values. Such minimal and maximal values characterize the error sensitivity to hyperparameter choice, informing the best choice of the hyperparameters. In this work, we present proof-of-principle of such an optimization. § NUMERICAL RESULTS §.§ Clifford Data Regression In this section, we use Clifford Data Regression (CDR) to demonstrate our uncertainty quantification and robust design methods. Clifford Data Regression (CDR) is a learning-based error mitigation technique <cit.>. It uses classically simulable near-Clifford training circuits similar to the circuit of interest in order to correct a noisy expectation value of an observable of interest. Using CDR, we first find N_t near-Clifford training circuits similar to a circuit of interest. Typically, one uses training circuits that differ from the circuit of interest only by gate rotation angles. Such training circuits can be obtained by substituting most non-Clifford gates in the circuit of interest with Clifford gates of the same type <cit.>. For example, in the case of a rotation around the z-axis, R_Z(θ) = e^-i (θ/2) Z with arbitrary θ, one can replace R_Z with a power of a phase gate, S=e^-i (π/2) Z, which is a Clifford gate. Here Z is a Pauli matrix. As long as the number of non-Clifford gates in a training circuit remains small enough, the gates can be efficiently simulated classically <cit.>. The Clifford substitutions can be performed randomly <cit.>. Alternatively, they can be done with a Markov Chain Monte Carlo (MCMC) procedure to impose constraints on the expectation values of the training circuits <cit.>. Subsequently, we create training data by taking a pair of exact, x_i ≡ O^ exact_i, and noisy, y_i ≡ O^ noisy_i, expectation values of the observable of interest evaluated both classically and with a quantum computer, respectively. We do this for each training circuit. Here index i enumerates training circuits. We fit the training data with a linear ansatz, y_i = a x_i+b , where a and b are coefficients found by the least-squares linear regression. Finally, we use the resulting fitted coefficients to mitigate O^ noisy. We compute the mitigated expectation value as O^ mitigated = a O^ noisy + b. 
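A minimal sketch of this post-processing step is given below: the linear ansatz is fitted by least squares on the training pairs and then applied to the noisy expectation value of interest. The regression is written with the noisy values as the regressor so that the fitted coefficients can be applied directly to O^ noisy, as in the mitigation formula above; producing the training data themselves (near-Clifford classical simulation and hardware runs) is outside the scope of the sketch.

```python
import numpy as np

def cdr_fit(noisy_train, exact_train):
    """Least-squares fit of the linear CDR ansatz, exact ≈ a * noisy + b, over the N_t training circuits."""
    A = np.column_stack([noisy_train, np.ones_like(noisy_train)])
    (a, b), *_ = np.linalg.lstsq(A, exact_train, rcond=None)
    return a, b

def cdr_mitigate(obs_noisy, a, b):
    """Apply the fitted coefficients to the observable of interest: O_mitigated = a * O_noisy + b."""
    return a * obs_noisy + b

# Hypothetical usage with N_t = 10 training circuits:
# a, b = cdr_fit(noisy_train, exact_train)
# obs_mitigated = cdr_mitigate(obs_noisy, a, b)
```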
Clifford Data Regression is based on the assumption that as long as the noise does not depend strongly on gate rotation angles, noise affects nearby circuits similarly, and therefore its effects can be learned from near-Clifford circuits. CDR has been found to match or outperform other state-of-the-art error mitigation methods while mitigating real-device noise <cit.>. §.§ The setup As a test case, we use a circuit that prepares the ground state of a 6-qubit, one-dimensional XY model given by the Hamiltonian H = ∑_⟨ i,j ⟩ X_i X_j +Z_i Z_j with periodic boundary conditions, where X and Z are Pauli matrices and ⟨ i,j ⟩ denotes a pair of nearest-neighbor sites. For this circuit, we mitigate the expectation value of a two-site correlator X_0 X_3. To prepare the ground state, we use a hardware-efficient ansatz, with parameters found by classical optimization, that matches the ground state energy with an accuracy better than 10^-13. The optimized circuit was then compiled into the native IBM gate set <cit.>. The compiled circuit contained 60 CNOTs. For this proof-of-principle demonstration, we performed noisy simulations using IBM's Toronto quantum computer noise model obtained using a built-in Qiskit <cit.> function that creates a noise model based on the device calibration. In our tests, we use a modest shot number N_s^ tot=10^4 and N_t=10. We divide the shots between the noisy expectation values uniformly, i.e. for each O^ noisy_i, we use N_s^ tot/(N_t+1) shots. For such a modest error mitigation shot cost, it has been found that the shot noise significantly affects CDR performance. To mitigate the detrimental effects of the shot noise, we previously proposed the use of training circuits with well-distributed exact expectation values for mitigated observables generated by MCMC <cit.>. Here, we use this technique in the uncertainty quantification context. We use training circuits with y_i uniformly distributed between -y_ max and y_ max. These circuits are generated with the MCMC-based algorithm of Ref. <cit.>. Our training circuits have 10 non-Clifford gates, while the circuit of interest has 150 non-Clifford gates. The MCMC procedure is initialized randomly and samples near-Clifford circuits with y_i within 0.01 from the desired y_i. Therefore, the choice of the training circuits is another source of uncertainty affecting O^ mitigated. To exemplify our robust design methods, we consider a more general form of the y_i distribution parametrized by two parameters y_ max, a. Namely, y_i = y_ max sgn(r_i) |r_i|^a, with r_i values distributed uniformly from [-1,1], and sgn denoting a sign function. y_ max determines extreme values of the distribution, while a determines a deviation from the uniform distribution. More precisely, a=1 corresponds to the uniform distribution, a<1 results in clustering of y_i around |y_i|=y_ max, and a>1 causes clustering around 0. This form of distribution systematically tests a heuristic strategy of distributing the training data proposed in Ref. <cit.>. §.§ Uncertainty quantification To demonstrate our uncertainty quantification method, we perform error mitigation for our test-case observable N times. For each error-mitigated estimate, we compute the relative error η Eq.(<ref>). We use the generated sample to estimate the expected value, maximum, and minimum of η. For this purpose, we use the sample mean, minimum, and maximum, respectively. 
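Two ingredients of this procedure are easy to reproduce outside a quantum simulator and are sketched below: drawing training-circuit target values from the parametrised distribution y_i = y_max sgn(r_i)|r_i|^a, and summarising a sample of relative errors. The MCMC search for near-Clifford circuits matching each target, and the noisy circuit evaluation itself, are assumed to happen elsewhere.

```python
import numpy as np

def training_targets(n_circuits, y_max, a, seed=None):
    """Draw target expectation values y_i = y_max * sign(r_i) * |r_i|**a with r_i ~ U(-1, 1)."""
    r = np.random.default_rng(seed).uniform(-1.0, 1.0, size=n_circuits)
    return y_max * np.sign(r) * np.abs(r) ** a

def relative_error(obs_exact, obs_mitigated):
    """eta = 2 |O_exact - O_mitigated| / |O_exact + O_mitigated|."""
    return 2.0 * np.abs(obs_exact - obs_mitigated) / np.abs(obs_exact + obs_mitigated)

def sample_statistics(eta_samples):
    """Finite-sample estimates of the expected value, minimum and maximum of eta."""
    eta = np.asarray(eta_samples)
    return {"mean": eta.mean(), "min": eta.min(), "max": eta.max()}

# Example: targets for N_t = 10 training circuits with y_max = 0.5 and a = 1 (uniform case).
targets = training_targets(10, y_max=0.5, a=1.0, seed=0)
```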
As η is a random variable, these finite-sample estimates are also random variables that have probability distributions with variance dependent on N. To show the usefulness of this approach, we analyze in detail the convergence of the results with increasing N. We consider N=10-3000, and for each N generate 1000 samples of size N. We show the results for y_ max=0.5, and a=1 in Fig. <ref>(a-c) as boxplots. We find that the expected values and the minima converge with increasing N quickly to 0.22 and 0, respectively, indicating that they can be used in practice to quantify the uncertainty. In contrast, the maxima appear to diverge logarithmically with N. Therefore, the maxima are not suitable for quantifying the robustness of the error mitigated results. Instead, we propose to use a tail value at risk (TVaR^ right_α), which is an expected value of an upper tail of the η distribution. To define it we use the α-quantile of the relative error η^α, that is the smallest value of η for which P( η≤η^α) ≥α, where P( η≤η^α) is the probability of η being smaller or equal to η^α. We then define, TVaR^ right_α = E(η | η≥η^α ) = 1/1 - α∫_η^α^∞η f(η) dη, where E(η | η≥η^α ) is the expected value of η under the condition η≥η^α, and f(η) is a probability density function. Here, we consider α=0.9 finding that TVaR^ right_0.9 converges quickly to 0.64, as shown in Fig. <ref>(d). Hence, TVaR^ right_α can be used to quantify the behavior of the upper tail of η determining the robustness of error mitigation. Furthermore, in App. <ref> we show that one can also use η^α for this purpose. We estimate η^α as the α N-th element of the sample sorted in ascending order and TVaR^ right_α as the mean of the sample elements larger than or equal to TVaR^ right_α. We note that in sampling O^ mitigated, we sample both outcomes of quantum measurements and sets of training circuits consistent with hyperparameter values of y_ max=0.5 and a=1. Sampling the latter is done by sampling random starting points using the MCMC procedure. To minimize the classical cost of training circuit generation, we repeat the sampling of O^ mitigated using precomputed sets of 100, 1000, and 10000 training circuits from which we randomly choose a set of training circuits used for CDR error mitigation. We observe that distributions of statistical quantities (Fig. <ref>) are very similar for each choice, indicating that precomputed sets are sufficiently representative of the mitigated observable distribution and that the variance of error mitigation outcomes is due primarily to the shot noise. §.§ Robust design Next, we consider robust error mitigation design for optimizing CDR hyperparameters y_ max and a to minimize the expected relative error of the error-mitigated expectation value ⟨η⟩ for our benchmark setup. We perform a constrained optimization with 0.2 ≤ y_ max≤ 1 and 0.1 ≤ a ≤ 10, excluding values leading to extreme concentration of the training data. We obtain the best hyperparameter values y_ max=0.87 and a=1.2 corresponding to ⟨η⟩ =0.18. These values result in well-distributed training circuit expectation values y_i, validating the heuristic strategy proposed in Ref. <cit.>. Further, in App. <ref>, we show that the worst expected value of η is obtained for hyperparameters corresponding to the strongest clustering of y_i around 0 allowed by the constraints. This confirms that the clustering of the training data negatively affects the quality of CDR-mitigated observables, as argued in Ref. <cit.>. Both here and in App. 
<ref>, we perform our optimizations using mystic's <cit.> implementation of a differential evolution optimization algorithm <cit.>. To minimize the detrimental effects of local minima, we performed the optimization for 9 randomly chosen initial hyperparameter values and chose the optimization instance with the optimal value expected value of η. For each pair of y_ max, a, the expected value of η was estimated as the mean of a sample of N=1000 error mitigation outcomes. § CONCLUSIONS AND DISCUSSION Error-mitigated observables exhibit uncertainty due to the propagation of shot noise variability from quantum measurements. This is one of the fundamental limitations of the power of error mitigation. Furthermore, error mitigation methods typically introduce bias that further limits the accuracy of error-mitigated results. In this work, we address these limitations by introducing methods to quantify and minimize the uncertainty and error of error-mitigated expectation values. Our error uncertainty quantification methods are generally applicable, in that they utilize unbiased sampling from a probability distribution of error-mitigated results (and the unbiased determination of bounds thereof). They enable one to estimate both expected and extremal values of error-mitigated observables, making it possible to quantify the robustness of error mitigation in a system. By applying this approach to classically simulable circuits, they can be used to rigorously quantify the bias in error mitigation methods. We leverage our uncertainty quantification methods to introduce robust design under uncertainty for error mitigation. By utilizing the optimization of hyperparameters, like the choice of noise levels in ZNE or CDR training circuits, one can fine-tune these hyperparameters to maximize the resilience of error mitigation to shot noise uncertainty and bias. In this work, we propose to optimize estimates of uncertainty and error of error-mitigated expectation values over error mitigation hyperparameters. This approach enables one to systematically determine the sensitivity of error-mitigated results to the choice of hyperparameters and to find their best values, enhancing the potential of error mitigation. Here we demonstrate both uncertainty quantification and robust design for a test case of CDR error mitigation for correlators of the ground state of a 6-qubit one-dimensional XY model simulated with IBM's Toronto noise model. In particular, we estimate the expected value and tail value at risk of the relative error of mitigated observables. Subsequently, we minimize the expected value of the relative error with respect to hyperparameters controlling the choice of the CDR training circuits. We note that while this work showcases the feasibility of robust design for error mitigation methods, such optimizations are generally costly as it requires estimation of the error-mitigated observable uncertainties for multiple values of the hyperparameters. A natural follow-up question is how to improve the efficiency of these methods. One possible avenue for improvement comes from an observation that noise similarly affects similar circuits as exploited by learning-based error mitigation. Consequently, one can expect that the optimal hyperparameters found with the method for a particular circuit result in good error mitigation performance for circuits resembling a given circuit. We leave the exploration of this idea to future work. § ACKNOWLEDGMENTS We thank Frédéric Sauvage and Mike Martin for helpful conversations. 
The research for this publication has been supported by a grant from the Priority Research Area DigiWorld under the Strategic Programme Excellence Initiative at Jagiellonian University. PC acknowledges support by the National Science Centre (NCN), Poland under project 2022/47/D/ST2/03393. MM acknowledges support by the Uncertainty Quantification Foundation under the Statistical Learning program. Research presented in this paper (ATS, MM) was also supported by the Laboratory Directed Research and Development (LDRD) program of Los Alamos National Laboratory under project number 20210116DR. The research was also supported (LC) by the Quantum Science Center, a National Quantum Science Initiative of the Department of Energy, managed by Oak Ridge National Laboratory. § UNCERTAINTY QUANTIFICATION WITH A QUANTILE OF THE RELATIVE ERROR Here, we show the convergence of finite-sample estimates of a quantile η^0.9 with increasing N for our test-case CDR error mitigation, see Fig. <ref>. These estimates were subsequently used to compute the tail value at risk shown in Fig. <ref>(d). We find that these estimates converge quickly with N to 0.47 demonstrating that high quantiles of η can be used to quantify the robustness of error mitigation. § MAXIMIZATION OF THE RELATIVE ERROR EXPECTED VALUE To determine the sensitivity of error-mitigated results to training data distribution (Eq. (<ref>)), we maximize the expected relative error with respect to y_ max and a. We perform a constrained optimization with the same constraints on y_ max and a, choice of the initial hyperparameter values, and value of N as in Sec. <ref>. We obtain η=310 for y_ max = 0.22 and a = 9.1 as shown in Fig. <ref>. This result demonstrates a strong dependence of the CDR performance on the training data distribution. The resulting parameters correspond to the strongest clustering of the training data around 0 allowed by the constraints, confirming that such clustering negatively impacts error mitigation quality, as found in Ref. <cit.>.
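For reference, the estimators and the hyperparameter search used in the appendices can be sketched as follows; the paper uses mystic's differential evolution, so SciPy's implementation below is only a stand-in, and the cost function is replaced by a smooth toy surrogate (centred, for illustration, on the optimum reported in the main text) so that the loop runs end to end.

```python
import numpy as np
from scipy.optimize import differential_evolution

def quantile_and_tvar(eta_samples, alpha=0.9):
    """eta^alpha as the ceil(alpha*N)-th order statistic, TVaR as the mean of the upper tail."""
    eta = np.sort(np.asarray(eta_samples))
    eta_alpha = eta[int(np.ceil(alpha * len(eta))) - 1]
    return eta_alpha, eta[eta >= eta_alpha].mean()

def expected_eta(hyperparams):
    """Stand-in cost: in practice this would run the full CDR pipeline N = 1000 times for the
    given (y_max, a) and return the sample mean of eta. A smooth toy surrogate is used here
    only so that the optimisation loop below actually runs."""
    y_max, a = hyperparams
    return (y_max - 0.87) ** 2 + 0.1 * (np.log(a) - np.log(1.2)) ** 2

# Constrained search over the same hyperparameter box as in the text,
# using SciPy's differential evolution as a stand-in for mystic's.
result = differential_evolution(expected_eta, bounds=[(0.2, 1.0), (0.1, 10.0)], seed=0)
best_y_max, best_a = result.x
```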
http://arxiv.org/abs/2307.04831v1
20230710181058
Single-Inclusive Particle Production from $pA$ Collision at Next-to-Leading Order
[ "Heikki Mäntysaari", "Yossathorn Tawabutr" ]
hep-ph
[ "hep-ph", "nucl-ex", "nucl-th" ]
Single-Inclusive Particle Production from pA Collision at Next-to-Leading Order

Heikki Mäntysaari, Yossathorn Tawabutr

Department of Physics, University of Jyväskylä, P.O. Box 35, 40014 University of Jyväskylä, Finland
Helsinki Institute of Physics, P.O. Box 64, 00014 University of Helsinki, Finland

We present the first fully consistent NLO calculation of single-inclusive forward hadron production in proton-nucleus (pA) collisions in the color glass condensate (CGC) framework. In the dilute-dense limit, the NLO cross-section can be written as a convolution of the NLO impact factor, NLO parton distribution function (PDF), NLO fragmentation function (FF) and the dipole-target scattering amplitude, which satisfies the NLO small-x Balitsky-Kovchegov (BK) evolution. We demonstrate that, without the NLO corrections to the impact factor, we obtain a significant Cronin peak when the dipole amplitude satisfies the NLO BK equation. This would contradict the recent LHCb results <cit.>. However, the Cronin peak becomes suppressed when the NLO correction to the impact factor is included. This is the main result of this work. The dependence on resummation schemes for the NLO BK evolution is also discussed.

Presented at DIS2023: XXX International Workshop on Deep-Inelastic Scattering and Related Subjects, Michigan State University, USA, 27-31 March 2023

§ INTRODUCTION This article is based on the work presented in <cit.>, which is in preparation. Single-inclusive hadron production in forward proton-proton (pp) or proton-nucleus (pA) collisions at high energy can be expressed using the CGC formalism <cit.> in terms of the unintegrated gluon distribution <cit.>. This opens up an opportunity to compare CGC calculations against experimental measurements in order to probe the small-x structure of protons and nuclei <cit.>. As a result, forward hadron production in pp and pA collisions has been an active area of study for more than two decades <cit.>. Consider a collision in the center-of-mass frame between a proton and a nucleon from the nucleus, which could be another proton, such that a forward parton from the proton – with a large longitudinal momentum fraction, x_p – interacts with a parton from the nucleus with a small longitudinal momentum fraction, X_g. As the collision takes place, the forward parton receives a transverse momentum, k_⊥, but remains forward. Eventually, it fragments into a hadron. A direct calculation of the kinematics allows us to write x_p = k_⊥/√(s) e^y and X_g = k_⊥/√(s) e^-y , where y is the rapidity and s is the squared center-of-mass energy per nucleon of the pA collision. In this “dilute-dense” framework with k_⊥ greater than the saturation momentum, Q_s, the “hybrid formalism” applies <cit.>, allowing us to write the hadron production cross section as a convolution of PDFs – q_f(x_p) for quarks and g(x_p) for gluons – FFs – D_h/f(z) for quarks and D_h/g(z) for gluons – and the unintegrated gluon distribution, the Fourier transform of the dipole amplitude <cit.>. At the leading order (LO), we have <cit.> dσ_pA→ hX/d^2p_⊥ dy = ∫dz/z^2∫d^2x_0 d^2x_1/(2π)^2 e^-ik·(x_0-x_1)[∑_fx_pq_f(x_p) D_h/f(z) 1/N_c⟨tr[V_0V_1^†]⟩(X_g) +
x_pg(x_p) D_h/g(z) 1/N_c^2-1⟨Tr[U_0U_1^†]⟩(X_g) ] , where N_c is the number of quark colors and ⟨⋯⟩(X_g) is the “CGC averaging” <cit.> over the target nucleus's quantum state evaluated at X_g. Finally, with the notation that x = (x^1,x^2) is a transverse vector, V_n≡ V_x_n = 𝒫 exp[ig∫_-∞^∞dx^-t^aA^+a(x^+=0,x^-,x_n) ] , U_n≡ U_x_n = 𝒫 exp[ig∫_-∞^∞dx^-T^aA^+a(x^+=0,x^-,x_n) ] , are the fundamental and adjoint light-cone Wilson lines, respectively, with 𝒫 being the path-ordering operator. Throughout this article, we employ the light-cone coordinates such that x^±=(x^0± x^3)/√(2). The first term in the square brackets of Eq. (<ref>) corresponds to the “quark channel,” while the second term corresponds to the “gluon channel” <cit.>. Eq. (<ref>) receives next-to-leading-order (NLO) corrections from an emission of a “primary parton” either before or after the interaction with the target. The resulting cross section follows from a direct calculation in the light-cone perturbation theory (LCPT) <cit.>. For single-inclusive cross-sections, we integrate over the transverse position of one of the two outgoing partons, while keeping track of the other parton as it fragments into a hadron <cit.>. This leads to 4 different NLO channels – qq, qg, gq and gg – denoting the incoming parton and the outgoing parton we track, respectively. The resulting NLO expression will be omitted in this article for brevity.[See <cit.> for the full expression and its derivation.] The NLO correction introduced above only concerns the “impact factor.” Additionally, each of the PDF, FF and dipole amplitude that enter Eq. (<ref>) receive NLO corrections. For the dipole amplitude, the corrections come through the high-energy BK evolution <cit.>. In this work, we perform for the first time the full-NLO calculation of single-inclusive hadron productions, including all NLO corrections outlined above <cit.>. The ingredients of our calculation are detailed below, followed by the preliminary results and the comparison with the recent LHCb's forward pPb π^0 production data <cit.>. § INGREDIENTS As mentioned previously, the dipole amplitude for the pp collision is taken from <cit.>, in which the NLO BK evolution <cit.> is applied to the initial condition given by the MV^γ model, S^(0)(x_0,x_1) ≡1/N_c⟨tr[V_0V_1^†]⟩(x_0) = exp[ - 1/4(x_10^2Q^2_s,0)^γln(1/x_10Λ + e) ] at initial value, x_0=0.01. Here, x_10 = |x_1-x_0| and Λ = 0.241 GeV is the QCD scale, while γ and Q_s,0 are the model parameters determined by fitting the evolved dipole amplitude to the HERA structure function data <cit.>. Note that the addition by e inside the logarithm is so that the infrared divergence is regulated. Since there are several schemes to resum the double-logarithmic terms in the NLO BK evolution, we follow the approach of <cit.> and perform the cross-section calculation separately for each resummation scheme, using the fitted parameters from <cit.> to obtain the dipole amplitude. A comparison of the resulting cross-section is given in Section <ref>. For the pA case, we employ the optical Glauber model introduced in <cit.>, which gives the following initial condition for the dipole amplitude in the pA case, S^(0)_pA(x_0,x_1;b_⊥) = exp[ - 1/4 σ_0/2AT_A(b_⊥)(x_10^2Q^2_s,0)^γln(1/x_10Λ + e) ] , where σ_0/2 is the transverse area of a proton and A is the mass number of the nucleus. Here, T_A(b_⊥) is the transverse thickness function of the nucleus, which can be obtained from the Woods-Saxon distribution of nuclear density. Eq. 
(<ref>) depends on the impact parameter, b_⊥, of the pA collision, in addition to the model parameters, γ and Q_s,0. From there, the NLO BK evolution is applied separately for each b_⊥ to obtain the evolved dipole amplitude that are eventually used to calculate the particle production yield in the pA collision as a function of b_⊥. Then, we integrate over b_⊥ weighted by the average number of binary collisions to obtain the overall pA cross-section <cit.>. The dipole amplitude in each case is convoluted with the PDF and FF. For the PDF, we employ the Martin-Stirling-Thorne-Watt (MSTW) PDF at NLO <cit.> through the LHAPDF library <cit.>. As for the FF, we use the de Florian-Sassot-Stratmann (DSS) results at NLO <cit.>. Finally, the NLO impact factor has collinear and rapidity divergences. The former is subtracted by the DGLAP evolution of the PDF and FF <cit.>. In <cit.>, the rapidity divergences are subtracted by the LO BK evolution of the dipole amplitude. However, in <cit.>, it is shown that one could leave the rapidity divergence in the NLO impact factor while evaluating the LO impact factor terms at the initial condition, X_g = x_0. This is called the “unsubtracted scheme,” and it is theoretically more exact because it does not require subtracting and adding a potentially large contribution, which can cause problems when the running coupling is at play <cit.>. In this work, we employ the momentum-space running coupling prescription for the impact factor, making the unsubtracted scheme a better choice. With all the ingredients specified above, we calculate the single-inclusive π^0 production cross-section in pPb collisions at the full NLO level, whose results are presented in the next section. Note that this is a novel development. For the first time, the dipole amplitude fitted to the data using NLO BK evolution <cit.> is employed in such calculations.[In <cit.>, the NLO corrections to the BK evolution of the dipole amplitude are not included.] § RESULTS §.§ Hadron Production Spectrum We perform the calculation at LHCb's kinematics, with center-of-mass energy, √(s) = 8.16 TeV, and rapidity, y=3, using two different resummation schemes in the NLO BK evolution of the dipole: (i) kinematically-constrained BK (KCBK) <cit.> and (ii) local-rapidity resummed BK (ResumBK) <cit.>. Respectively, the resulting π^0 cross-sections for the two resummation schemes are shown in Figure <ref>. There, the error bands are constructed by varying the factorization scale such that μ = 2p_⊥,4p_⊥,8p_⊥.[In <cit.>, the cross-section appears to be stable only for μ≳ 2p_⊥.] From Figure <ref>, we see that our spectra differ very slightly across the resummation schemes. On a more unfortunate note, they significantly overestimate the LHCb results. However, the functional form seems to be similar, with the discrepancy coming mainly from an overall factor. We suspect that the mismatch may result from a problem when the model with parameters fitted from HERA data is generalized to pA collisions using the optical Glauber model <cit.>. The issue will be studied in a future work. For the remainder of the article, we will only consider the b_⊥=0 case where the potential issues with pA dipoles are not as severe. 
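As a concrete illustration of how the impact-parameter dependence enters through the optical Glauber initial condition above, the following minimal NumPy/SciPy sketch evaluates S^(0)_p and S^(0)_pA for a lead nucleus. It is not the fit code used in this work: the values of Q_s,0^2, γ and σ_0/2 below are placeholders standing in for the fitted MV^γ parameters, and the Woods-Saxon normalization convention (A T_A(b_⊥) counting nucleons per unit transverse area) is our assumption.

```python
import numpy as np
from scipy.integrate import quad

# Woods-Saxon parameters for Pb-208 (standard values)
A, R_A, d_skin = 208, 6.62, 0.546        # mass number, radius [fm], skin depth [fm]

def rho_ws(r):
    # un-normalized Woods-Saxon density
    return 1.0 / (1.0 + np.exp((r - R_A) / d_skin))

# normalize the density to unity, so that A * T_A(b) counts nucleons per unit area
norm = quad(lambda r: 4.0 * np.pi * r**2 * rho_ws(r), 0.0, 5.0 * R_A)[0]

def T_A(b):
    """Transverse thickness function [fm^-2], normalized so the 2D integral of T_A is 1."""
    return 2.0 * quad(lambda z: rho_ws(np.hypot(b, z)), 0.0, 5.0 * R_A)[0] / norm

# MV^gamma initial condition at x_0 = 0.01 (placeholder fit parameters)
Lam = 0.241                               # QCD scale [GeV]
Qs0_sq, gamma_fit = 0.1, 1.1              # placeholders for fitted Q_{s,0}^2 [GeV^2] and gamma
sigma0_half = 1.7                         # placeholder proton area sigma_0/2 [fm^2] (~17 mb)

def S0_p(r):
    """Proton dipole amplitude; r is the dipole size x_10 in GeV^-1."""
    return np.exp(-0.25 * (r**2 * Qs0_sq)**gamma_fit * np.log(1.0 / (r * Lam) + np.e))

def S0_pA(r, b):
    """Optical Glauber: proton exponent rescaled by sigma_0/2 * A * T_A(b) (dimensionless)."""
    scale = sigma0_half * A * T_A(b)
    return S0_p(r) ** scale

print(S0_p(1.0), S0_pA(1.0, b=0.0))       # dipole amplitudes for r = 1 GeV^-1, central collision
```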
§.§ Nuclear Modification Factor: Cronin Effect Despite the mismatch in our pPb spectra with the LHCb measurement, the most striking results of our calculation are in the nuclear modification factor, which is defined in the case of b_⊥=0 as R_pA = dN_pA→ hX/d^2p_⊥dy/[N_bin|_b_⊥=0] dN_pp→ hX/d^2p_⊥dy , where N_pp/pA→ hX is the particle production yield and N_bin|_b_⊥=0 is the number of binary collisions in pA collisions at b_⊥=0. The factor, R_pA, allows for a direct comparison between the pp and pA cross-sections, in such a way that R_pA=1 would imply that the nucleus behaved in the context of a pA collision as if it were A separate protons. As mentioned above, we only consider pA collisions at b_⊥=0. The results for both resummation schemes are shown in Figure <ref>. With the LO impact factor but the PDF, FF and dipole at NLO (the orange bands), we see a clear Cronin effect around p_⊥≈ 4 - 5 GeV, which is larger with the KCBK evolution. However, in the full-NLO case (the blue bands), the Cronin peak disappears, and the discrepancy between the KCBK and ResumBK results becomes much smaller. The former is especially desirable because the R_pPb measurement from LHCb displays no Cronin peak <cit.>. This result is of great importance – if the NLO corrections to the dipole's evolution are to be included, then the NLO corrections must consistently be included everywhere else: the impact factor, PDF and FF. § CONCLUSION AND OUTLOOK For the first time, we employ the CGC framework to compute the hadron production cross section in pA collisions at full NLO accuracy consistently with the DIS data. The main result of this work is that the NLO corrections to the impact factor are essential to remove the Cronin effect at moderate hadron transverse momentum, p_⊥. Furthermore, the discrepancy in R_pA due to the choice of NLO BK resummation scheme becomes suppressed in the full-NLO case where the impact factor is also at NLO. There remains a significant discrepancy between our pA spectra and the LHCb results <cit.>, possibly due to the dipole fit and its generalization to pA collisions. The issue will be investigated further in a future work. In light of upcoming forward scattering measurements <cit.>, the dependence of our results on the rapidity, y, will also be studied. Last but not least, as an additional cross-check, our calculation will be repeated with the target momentum fraction BK (TBK) evolution <cit.>, which is another available resummation scheme of the NLO BK evolution. § ACKNOWLEDGMENTS YT would like to thank Dr. Tuomas Lappi for helpful discussions and the DIS2023 organizers for the opportunity to present the work. The authors are supported by the Academy of Finland, the Centre of Excellence in Quark Matter, and projects 338263 and 346567, under the European Union's Horizon 2020 research and innovation programme by the European Research Council (ERC, grant agreement No. ERC-2018-ADG-835105 YoctoLHC) and by the STRONG-2020 project (grant agreement No. 824093). The content of this article does not reflect the official opinion of the European Union and responsibility for the information and views expressed therein lies entirely with the authors. 10 LHCb:2022vfn LHCb collaboration, Nuclear modification factor of neutral pions in the forward and backward regions in pPb collisions, [https://arxiv.org/abs/2204.106082204.10608]. NLOsinc H. Mäntysaari and Y. Tawabutr, in preparation, 2023. Mueller:1989st A. H.
Mueller, Small x Behavior and Parton Saturation: A QCD Model, https://doi.org/10.1016/0550-3213(90)90173-BNucl. Phys. B 335 (1990) 115–137. Mueller:1993rr A. H. Mueller, Soft gluons in the infinite momentum wave function and the BFKL pomeron, https://doi.org/10.1016/0550-3213(94)90116-3Nucl. Phys. B 415 (1994) 373–385. Balitsky:1995ub I. Balitsky, Operator expansion for high-energy scattering, https://doi.org/10.1016/0550-3213(95)00638-9Nucl. Phys. B 463 (1996) 99–160, [https://arxiv.org/abs/hep-ph/9509348hep-ph/9509348]. Gelis:2010nm F. Gelis, E. Iancu, J. Jalilian-Marian and R. Venugopalan, The Color Glass Condensate, https://doi.org/10.1146/annurev.nucl.010909.083629Ann. Rev. Nucl. Part. Sci. 60 (2010) 463–489, [https://arxiv.org/abs/1002.03331002.0333]. Dumitru:2002qt A. Dumitru and J. Jalilian-Marian, Forward quark jets from protons shattering the colored glass, https://doi.org/10.1103/PhysRevLett.89.022301Phys. Rev. Lett. 89 (2002) 022301, [https://arxiv.org/abs/hep-ph/0204028hep-ph/0204028]. Dumitru:2005gt A. Dumitru, A. Hayashigaki and J. Jalilian-Marian, The Color glass condensate and hadron production in the forward region, https://doi.org/10.1016/j.nuclphysa.2005.11.014Nucl. Phys. A 765 (2006) 464–482, [https://arxiv.org/abs/hep-ph/0506308hep-ph/0506308]. Chirilli:2011km G. A. Chirilli, B.-W. Xiao and F. Yuan, One-loop Factorization for Inclusive Hadron Production in pA Collisions in the Saturation Formalism, https://doi.org/10.1103/PhysRevLett.108.122301Phys. Rev. Lett. 108 (2012) 122301, [https://arxiv.org/abs/1112.10611112.1061]. Chirilli:2012jd G. A. Chirilli, B.-W. Xiao and F. Yuan, Inclusive Hadron Productions in pA Collisions, https://doi.org/10.1103/PhysRevD.86.054005Phys. Rev. D 86 (2012) 054005, [https://arxiv.org/abs/1203.61391203.6139]. Stasto:2013cha A. M. Stasto, B.-W. Xiao and D. Zaslavsky, Towards the Test of Saturation Physics Beyond Leading Logarithm, https://doi.org/10.1103/PhysRevLett.112.012302Phys. Rev. Lett. 112 (2014) 012302, [https://arxiv.org/abs/1307.40571307.4057]. Watanabe:2015tja K. Watanabe, B.-W. Xiao, F. Yuan and D. Zaslavsky, Implementing the exact kinematical constraint in the saturation formalism, https://doi.org/10.1103/PhysRevD.92.034026Phys. Rev. D 92 (2015) 034026, [https://arxiv.org/abs/1505.051831505.05183]. Shi:2021hwx Y. Shi, L. Wang, S.-Y. Wei and B.-W. Xiao, Pursuing the Precision Study for Color Glass Condensate in Forward Hadron Productions, https://doi.org/10.1103/PhysRevLett.128.202302Phys. Rev. Lett. 128 (2022) 202302, [https://arxiv.org/abs/2112.069752112.06975]. Altinoluk:2011qy T. Altinoluk and A. Kovner, Particle Production at High Energy and Large Transverse Momentum - 'The Hybrid Formalism' Revisited, https://doi.org/10.1103/PhysRevD.83.105004Phys. Rev. D 83 (2011) 105004, [https://arxiv.org/abs/1102.53271102.5327]. Lappi:2013zma T. Lappi and H. Mäntysaari, Single inclusive particle production at high energy from HERA data to proton-nucleus collisions, https://doi.org/10.1103/PhysRevD.88.114020Phys. Rev. D 88 (2013) 114020, [https://arxiv.org/abs/1309.69631309.6963]. Altinoluk:2014eka T. Altinoluk, N. Armesto, G. Beuf, A. Kovner and M. Lublinsky, Single-inclusive particle production in proton-nucleus collisions at next-to-leading order in the hybrid formalism, https://doi.org/10.1103/PhysRevD.91.094016Phys. Rev. D 91 (2015) 094016, [https://arxiv.org/abs/1411.28691411.2869]. Ducloue:2017dit B. Ducloué, E. Iancu, T. Lappi, A. H. Mueller, G. Soyez, D. N. 
Triantafyllopoulos et al., Use of a running coupling in the NLO calculation of forward hadron production, https://doi.org/10.1103/PhysRevD.97.054020Phys. Rev. D 97 (2018) 054020, [https://arxiv.org/abs/1712.074801712.07480]. Liu:2019iml H.-Y. Liu, Y.-Q. Ma and K.-T. Chao, Improvement for Color Glass Condensate factorization: single hadron production in pA collisions at next-to-leading order, https://doi.org/10.1103/PhysRevD.100.071503Phys. Rev. D 100 (2019) 071503, [https://arxiv.org/abs/1909.023701909.02370]. Kang:2019ysm Z.-B. Kang and X. Liu, Power Counting the Small-x Observables, [https://arxiv.org/abs/1910.101661910.10166]. Liu:2020mpy H.-Y. Liu, Z.-B. Kang and X. Liu, Threshold resummation for hadron production in the small-x region, https://doi.org/10.1103/PhysRevD.102.051502Phys. Rev. D 102 (2020) 051502, [https://arxiv.org/abs/2004.119902004.11990]. Kovchegov:2001sc Y. V. Kovchegov and K. Tuchin, Inclusive gluon production in DIS at high parton density, https://doi.org/10.1103/PhysRevD.65.074026Phys. Rev. D 65 (2002) 074026, [https://arxiv.org/abs/hep-ph/0111362hep-ph/0111362]. Kovchegov:2012mbw Y. V. Kovchegov and E. Levin, Quantum Chromodynamics at High Energy, vol. 33. Cambridge University Press, 2012. Lepage:1980fj G. P. Lepage and S. J. Brodsky, Exclusive Processes in Perturbative Quantum Chromodynamics, https://doi.org/10.1103/PhysRevD.22.2157Phys. Rev. D 22 (1980) 2157. Brodsky:1989pv S. J. Brodsky and G. P. Lepage, Exclusive Processes in Quantum Chromodynamics, https://doi.org/10.1142/9789814503266_0002Adv. Ser. Direct. High Energy Phys. 5 (1989) 93–240. Balitsky:1997mk I. Balitsky, Operator expansion for diffractive high-energy scattering, https://doi.org/10.1063/1.53693AIP Conf. Proc. 407 (1997) 953, [https://arxiv.org/abs/hep-ph/9706411hep-ph/9706411]. Kovchegov:1999yj Y. V. Kovchegov, Small-x F_2 structure function of a nucleus including multiple pomeron exchanges, https://doi.org/10.1103/PhysRevD.60.034008Phys. Rev. D 60 (1999) 034008, [https://arxiv.org/abs/hep-ph/9901281hep-ph/9901281]. Kovchegov:1999ua Y. V. Kovchegov, Unitarization of the BFKL pomeron on a nucleus, https://doi.org/10.1103/PhysRevD.61.074018Phys. Rev. D 61 (2000) 074018, [https://arxiv.org/abs/hep-ph/9905214hep-ph/9905214]. Balitsky:2007feb I. Balitsky and G. A. Chirilli, Next-to-leading order evolution of color dipoles, https://doi.org/10.1103/PhysRevD.77.014019Phys. Rev. D 77 (2008) 014019, [https://arxiv.org/abs/0710.43300710.4330]. Beuf:2020dxl G. Beuf, H. Hänninen, T. Lappi and H. Mäntysaari, Color Glass Condensate at next-to-leading order meets HERA data, https://doi.org/10.1103/PhysRevD.102.074028Phys. Rev. D 102 (2020) 074028, [https://arxiv.org/abs/2007.016452007.01645]. Beuf:2014uia G. Beuf, Improving the kinematics for low-x QCD evolution equations in coordinate space, https://doi.org/10.1103/PhysRevD.89.074039Phys. Rev. D 89 (2014) 074039, [https://arxiv.org/abs/1401.03131401.0313]. Iancu:2015vea E. Iancu, J. D. Madrigal, A. H. Mueller, G. Soyez and D. N. Triantafyllopoulos, Resumming double logarithms in the QCD evolution of color dipoles, https://doi.org/10.1016/j.physletb.2015.03.068Phys. Lett. B 744 (2015) 293–302, [https://arxiv.org/abs/1502.056421502.05642]. Ducloue:2019ezk B. Ducloué, E. Iancu, A. H. Mueller, G. Soyez and D. N. Triantafyllopoulos, Non-linear evolution in QCD at high-energy beyond leading order, https://doi.org/10.1007/JHEP04(2019)081JHEP 04 (2019) 081, [https://arxiv.org/abs/1902.066371902.06637]. H1:2009pze H1, ZEUS collaboration, F. D. 
Aaron et al., Combined Measurement and QCD Analysis of the Inclusive e+- p Scattering Cross Sections at HERA, https://doi.org/10.1007/JHEP01(2010)109JHEP 01 (2010) 109, [https://arxiv.org/abs/0911.08840911.0884]. H1:2012xnw H1, ZEUS collaboration, H. Abramowicz et al., Combination and QCD Analysis of Charm Production Cross Section Measurements in Deep-Inelastic ep Scattering at HERA, https://doi.org/10.1140/epjc/s10052-013-2311-3Eur. Phys. J. C 73 (2013) 2311, [https://arxiv.org/abs/1211.11821211.1182]. H1:2015ubc H1, ZEUS collaboration, H. Abramowicz et al., Combination of measurements of inclusive deep inelastic e^±p scattering cross sections and QCD analysis of HERA data, https://doi.org/10.1140/epjc/s10052-015-3710-4Eur. Phys. J. C 75 (2015) 580, [https://arxiv.org/abs/1506.060421506.06042]. H1:2018flt H1, ZEUS collaboration, H. Abramowicz et al., Combination and QCD analysis of charm and beauty production cross-section measurements in deep inelastic ep scattering at HERA, https://doi.org/10.1140/epjc/s10052-018-5848-3Eur. Phys. J. C 78 (2018) 473, [https://arxiv.org/abs/1804.010191804.01019]. Martin:2009iq A. D. Martin, W. J. Stirling, R. S. Thorne and G. Watt, Parton distributions for the LHC, https://doi.org/10.1140/epjc/s10052-009-1072-5Eur. Phys. J. C 63 (2009) 189–285, [https://arxiv.org/abs/0901.00020901.0002]. Buckley:2014ana A. Buckley, J. Ferrando, S. Lloyd, K. Nordström, B. Page, M. Rüfenacht et al., LHAPDF6: parton density access in the LHC precision era, https://doi.org/10.1140/epjc/s10052-015-3318-8Eur. Phys. J. C 75 (2015) 132, [https://arxiv.org/abs/1412.74201412.7420]. deFlorian:2007aj D. de Florian, R. Sassot and M. Stratmann, Global analysis of fragmentation functions for pions and kaons and their uncertainties, https://doi.org/10.1103/PhysRevD.75.114010Phys. Rev. D 75 (2007) 114010, [https://arxiv.org/abs/hep-ph/0703242hep-ph/0703242]. ALICE:2023fov ALICE collaboration, Physics of the ALICE Forward Calorimeter upgrade, ALICE-PUBLIC-2023-001 (2023).
http://arxiv.org/abs/2307.03942v1
20230708093617
Ariadne's Thread:Using Text Prompts to Improve Segmentation of Infected Areas from Chest X-ray images
[ "Yi Zhong", "Mengqiu Xu", "Kongming Liang", "Kaixin Chen", "Ming Wu" ]
eess.IV
[ "eess.IV", "cs.CV" ]
Using Text Prompts to Improve Segmentation Y. Zhong et al. Beijing University of Posts and Telecommunications, China {xiliang2017, xumengqiu, liangkongming, chenkaixin, wuming}@bupt.edu.cn Ariadne's Thread[Ariadne's thread, the name comes from ancient Greek myth, tells of Theseus walking out of the labyrinth with the help of Ariadne's golden thread.] : Using Text Prompts to Improve Segmentation of Infected Areas from Chest X-ray images Yi ZhongMengqiu Xu Kongming Liang Kaixin Chen Ming Wu August 12, 2023 =========================================================================================================================================================================================================================================================== Segmentation of the infected areas of the lung is essential for quantifying the severity of lung disease like pulmonary infections. Existing medical image segmentation methods are almost uni-modal methods based on image. However, these image-only methods tend to produce inaccurate results unless trained with large amounts of annotated data. To overcome this challenge, we propose a language-driven segmentation method that uses text prompt to improve to the segmentation result. Experiments on the QaTa-COV19 dataset indicate that our method improves the Dice score by 6.09% at least compared to the uni-modal methods. Besides, our extended study reveals the flexibility of multi-modal methods in terms of the information granularity of text and demonstrates that multi-modal methods have a significant advantage over image-only methods in terms of the size of training data required. § INTRODUCTION Radiology plays an important role in the diagnosis of some pulmonary infectious diseases, such as the COVID-19 pneumonia outbreak in late 2019<cit.>. With the development of deep learning, deep neural networks are more and more used to process radiological images for assisted diagnosis, such as disease classification, lesion detection and segmentation, etc. With the fast processing of radiological images by deep neural networks, some diagnoses can be obtained immediately, such as the classification of bacterial or viral pneumonia and the segmentation mask for pulmonary infections, which is important for quantifying the severity of the disease as well as its progression<cit.>. Besides, these diagnoses given by the AI allow doctors to predict risks and prognostics in a "patient-specific" way<cit.>. Radiologists usually take more time to complete lesion annotation than AI, and annotation results can be influenced by individual bias and clinical experience<cit.>. Therefore, it is of importance to design automatic medical image segmentation algorithms to assist clinicians in developing accurate and fast treatment plans. Most of the biomedical segmentation methods<cit.> are improved based on U-Net<cit.>. However, the performance of these image-only methods is constrained by the training data, which is also a dilemma in the medical image field. Radford et al. proposed CLIP<cit.> in 2021, where they used 4M image-text pairs for contrastive learning. With the rise of multi-modal learning in the recent years, there are also methods<cit.> that focus on vision-language pretraining/processing and applying them on local tasks. Li et al. proposed a language-driven medical image segmentation method LViT<cit.>, using a hybrid CNN-Transformer structure to fuse text and image features. 
However, LViT uses an early fusion approach, and the information contained in the text is not well represented. In this paper, we propose a multi-modal segmentation method that uses independent text and image encoders, and design a GuideDecoder to fuse the features of both modalities at the decoding stage. Our main contributions are summarized as follows: * We propose a language-driven segmentation method for segmenting infected areas from lung x-ray images. The source code of our method is available at: https://github.com/Junelin2333/LanGuideMedSeg-MICCAI2023 * The designed GuideDecoder in our method can adaptively propagate sufficient semantic information from the text prompts into pixel-level visual features, promoting consistency between the two modalities. * We have cleaned the errors contained in the text annotations of QaTa-COV19<cit.> and contacted the authors of LViT to release a new version. * Our extended study reveals the impact of information granularity in text prompts on the segmentation performance of our method, and demonstrates the significant advantage of multi-modal methods over image-only methods in terms of the size of training data required. § METHOD The overview of our proposed method is shown in Fig. <ref>(a). The model consists of three main components: an Image Encoder, a Text Encoder and a GuideDecoder that enables multi-modal information fusion. Our proposed method uses a modular design, which is more flexible than the early-stage fusion used in LViT. For example, when our method is used for brain MRI images, thanks to the modular design, we could first load pre-trained weights trained on the corresponding data into the separate visual and text encoders, and then only need to train the GuideDecoders. §.§.§ Visual Encoder & Text Encoder The Visual Encoder used in the model is ConvNeXt-Tiny<cit.>. For an input image I∈ℝ^H× W×1, we extract multiple visual features from the four stages of ConvNeXt-Tiny, defined as f_4∈ℝ^H/4×W/4× C_1, f_8∈ℝ^H/8×W/8× C_2, f_16∈ℝ^H/16×W/16× C_3 and f_32∈ℝ^H/32×W/32× C_4. Note that C is the feature dimension, and H and W are the height and width of the original image. For an input text prompt T ∈ℝ^L, we adopt CXR-BERT<cit.> to extract text features g_t ∈ℝ^L× C. Note that C is the feature dimension and L is the length of the text prompt. §.§.§ GuideDecoder Due to our modular design, visual features and textual features are encoded independently by different encoders. Therefore, the design of the decoder is particularly important, as we can only fuse the multi-modal features from the different encoders at a later stage. The structure of the GuideDecoder is shown in Fig. <ref>(b). The GuideDecoder first processes the input textual features and visual features before performing multi-modal interaction. The input textual features first go through a projection module (i.e. Project in the figure) that aligns the dimensionality of the text tokens with that of the image tokens and reduces the number of text tokens. The projection process is shown in Equation 1. f_t = σ(Conv(T W_T)) where W_T is a learnable matrix, Conv(·) denotes a 1×1 convolution layer, and σ(·) denotes the ReLU activation function. Given an input feature T ∈ℝ^L× D, the projected output features are f_t ∈ℝ^M × C_1, where M is the number of tokens after projection and C_1 is the dimension of the projected features, consistent with the dimension of the image tokens.
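To make the projection step concrete, a minimal PyTorch sketch of such a projection module is shown below. This is an illustrative reimplementation of Equation 1 rather than the authors' released code: the class and argument names are ours, the tensor sizes are placeholders, and the specific choice of reducing the token count by applying the 1×1 convolution across the token axis is our assumption about how Conv(·) acts.

```python
import torch
import torch.nn as nn

class TextProjector(nn.Module):
    """Illustrative sketch of Eq. (1): f_t = ReLU(Conv1x1(T @ W_T)).
    Maps L text tokens of width D to M tokens of width C1."""

    def __init__(self, token_dim_in, token_dim_out, num_tokens_in, num_tokens_out):
        super().__init__()
        # W_T: learnable matrix aligning the text-token width with the image-token width
        self.W_T = nn.Parameter(torch.empty(token_dim_in, token_dim_out))
        nn.init.xavier_uniform_(self.W_T)
        # 1x1 convolution over the token axis, reducing L tokens to M tokens
        self.conv = nn.Conv1d(num_tokens_in, num_tokens_out, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, text_tokens):
        # text_tokens: (batch, L, D)
        x = text_tokens @ self.W_T          # (batch, L, C1)
        x = self.conv(x)                    # (batch, M, C1) -- tokens treated as channels
        return self.act(x)                  # f_t: (batch, M, C1)

# Example: project 24 BERT tokens of width 768 down to 6 tokens of width 96
projector = TextProjector(768, 96, num_tokens_in=24, num_tokens_out=6)
f_t = projector(torch.randn(2, 24, 768))    # -> torch.Size([2, 6, 96])
```

Treating the token axis as the channel axis of a 1×1 convolution is one simple way to realize the L → M token reduction described in the text; a learned linear map over tokens would be an equivalent alternative.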
For the input visual features I∈ℝ^H× W× C_1, after adding the position encoding we use self-attention to enhance the visual information in them to obtain the evolved visual features. The process is shown in Equation 2. f_i = I + LN(MHSA(I)) where MHSA(·) denotes Multi-Head Self-Attention layer, LN(·) denotes Layer Normalization, and finally the evolved visual features f_i ∈ℝ^H× W× C_1 with residuals could be obtained. After those, the multi-head cross-attention layer is adopted to propagate fine-grained semantic information into the evolved image features. To obtain the multi-modal feature f_c ∈ℝ^H× W× C_1, the output further computed by layer normalization and residual connection: f_c = f_i + α (LN(MHCA(f_i,f_t))) where MHCA(·) denotes multi-head cross-attention and α is a learnable parameter to control the weight of the residual connection. Then, the multi-modal feature f_c ∈ℝ^(H× W)× C_1 would be reshaped and upsampling to obtain f'_c ∈ℝ^H'× W'× C_1. Finally the f'_c is concatenated with f_s∈ℝ^H'× W'× C_2 on the channel dimension, where f_s is the low-level visual feature obtained from visual encoder via skip connection. The concatenated features are processed through a convolution layer and a ReLU activation function to obtain the final decoded output f_o ∈ℝ^H'× W'× C_2 f'_c = Upsample(Reshape(f_c)) f_o = σ(Conv([f'_c, f'_s])) where [·,·] represents the concatenate operation on the channel dimension. § EXPERIMENTS §.§ Dataset The dataset used to evaluate our method performance is the QaTa-COV19 dataset<cit.>, which is compiled by researchers from Qatar University and Tampere University. It consists of 9258 COVID-19 chest radiographs with pixel-level manual annotations of infected lung areas, of which 7145 are in the training set and 2113 in the test set. However, the original QaTa-COV19 dataset does not contain any matched text annotations. Li et al. <cit.>have made significant contributions by extending the text annotations of the dataset, their endeavors are worthy of commendation. We conducted a revisitation of the text annotations and found several notable features. Each sentence consists of three parts, containing position information at different granularity. However, these sentences cannot be considered as medical reports for lacking descriptions of the disease, we consider them as a kind of "text prompt" just as the title of the paper states. Besides, we found some obvious errors (e.g. misspelled words, grammatical errors and unclear referents) in the extended text annotations. We have fixed these identified errors and contacted the authors of LViT to release a new version of the dataset. Dataset see Github link: https://github.com/HUANGLIZI/LViThttps://github.com/HUANGLIZI/LViT §.§ Experiment Settings Following the file name of the subjects in the original train set, we split the training set and the validation set uniformly in the ratio of 80% and 20%. Therefore, the training set has a total of 5716 samples, the validation set has 1429 samples and the test set has 2113 samples. All images are cropped to 224×224 and the data is augmented using a random zoom with 10% probability. We used a number of open source libraries including but not limited to PyTorch, MONAI<cit.> and Transformers<cit.> to implement our method and baseline approach. We use PyTorch Lightning for the final training and inference wrapper. All the methods are training on one NVIDIA Tesla V100 SXM3 32GB VRAM GPU. 
We use the Dice loss plus Cross-entropy loss as the loss function, and train the network using AdamW optimization with a batch size of 32. We utilize the cosine annealing learning rate policy, the initial learning rate is set to 3e-4 and the minimal learning rate is set to 1e-6. We used three metrics to evaluate the segmentation results objectively: Accuracy, Dice coefficient and Jaccard coefficient. Both Dice and Jaccard coefficient calculate the intersection regions over the union regions of the given predicted mask and ground truth, where the Dice coefficient is more indicative of the segmentation performance of small targets. §.§ Comparison Experiments We compared our method with common mono-modal medical image segmentation methods and with the LViT previously proposed by Li et al. The quantitative results of the experiment are shown in Table <ref>. UNet++ achieves the best performance of the mono-modal approach. Comparing to UNet++, our method improves accuracy by 1.44%, Dice score by 6.09% and Jaccard score by 9.49%. Our method improves accuracy by 1.28%, Dice score by 4.86% and Jaccard coefficient by 7.66% compared to the previous multi-modal method LViT. In general, using text prompts could significantly improve segmentation performance. The results of the qualitative experiment are shown in Fig. <ref>. The image-only mono-modal methods tend to generate some over-segmentation, while the multi-modal approach refers to the specific location of the infected region through text prompts to make the segmentation results more accurate. §.§ Ablation Study Our proposed method introduces semantic information of text in the decoding process of image features and designs the GuideDecoder to let the semantic information in the text guide the generation of the final segmentation mask. We performed an ablation study on the number of GuideDecoder used in the model and the results are shown in the Table <ref>. As can be seen from the Table <ref>, the segmentation performance of the model improves as the number of GuideDecoders used in the model increases. The effectiveness of GuideDecoder could be proved by these results. §.§ Extended Study Considering the application of the algorithm in clinical scenarios, we conducted several interesting extension studies based on the QaTa-COV19 dataset with the text annotations. It is worth mentioning that the following extended studies were carried out on our proposed method. §.§.§ Impact of text prompts at different granularity on segmentation performance. In section 3.1 we mention that each sample is extended to a text annotation with three parts containing positional information at different granularity, as shown in the Fig. <ref>. Therefore we further explored the impact of text prompts at different granularity on segmentation performance of our method and the results are shown in Table <ref>. The results in the table show that the segmentation performance of our proposed method is driven by the granularity of the position information contained in the text prompt. Our proposed method achieved better segmentation performance when given a text prompt with more detailed position information. Meanwhile, we observed that the performance of our method is almost identical when using two types of text prompts, i.e. Stage3 alone and Stage1 + Stage2 + Stage3. It means the most detailed position information in the text prompt plays the most significant role in improving segmentation performance. 
However, this does not mean that coarser granularities of position information in the text prompt do not contribute to the improvement in segmentation performance. Even when the input text prompts contain only the coarsest location information (Stage1 + Stage2 items in Table <ref>), our proposed method yielded a 1.43% higher Dice score than the method without a text prompt. §.§.§ Impact of the size of training data on segmentation performance. As shown in Table <ref>, our proposed method demonstrates highly competitive performance even with a reduced amount of training data. With only a quarter of the training data, our proposed method achieves a 2.69% higher Dice score than UNet++, which is the best performing mono-modal model trained on the full dataset. This provides sufficient evidence for the superiority of multi-modal approaches and the fact that suitable text prompts can significantly help improve segmentation performance. We observed that only when the training data was reduced to 10% did our method begin to exhibit inferior performance compared to UNet++, which was trained with all available data. Similar experiments can be found in the LViT paper. Therefore, it can be argued that multi-modal approaches require only a small amount of data (less than 15% in the case of our method) to achieve performance equivalent to that of mono-modal methods. § CONCLUSION In this paper, we propose a language-driven method for segmenting infected areas from lung x-ray images. The designed GuideDecoder in our method can adaptively propagate sufficient semantic information from the text prompts into pixel-level visual features, promoting consistency between the two modalities. The experimental results on the QaTa-COV19 dataset indicate that the multi-modal segmentation method based on text and image achieves better performance than image-only segmentation methods. Besides, we have conducted several extended studies on the information granularity of the text prompts and the size of the training data, which reveal the flexibility of multi-modal methods in terms of the information granularity of text and demonstrate that multi-modal methods have a significant advantage over image-only methods in terms of the size of training data required. §.§.§ Acknowledgements This work was supported by NSFC under Grant 62076093 and MoE-CMCC "Artificial Intelligence" Project No. MCM20190701.
http://arxiv.org/abs/2307.03995v1
20230708153652
Linear approximation to the statistical significance autocovariance matrix in the asymptotic regime
[ "V. Ananiev", "A. L. Read" ]
physics.data-an
[ "physics.data-an", "stat.ME" ]
ReviewRanker: A Semi-Supervised Learning Based Approach for Code Review Quality Estimation Masum Hasan August 12, 2023 ========================================================================================== § INTRODUCTION In high energy physics searches for new particles that appear in the data as resonances <cit.>, one usually scans a mass region and hopes to find a peak of high significance at some mass. The significance at each mass of the scan is generally found by applying Wilks' theorem <cit.> to the likelihood-ratio test statistic (LRT) <cit.> for each point, and results in a field of significances measured across the search region. While the resonance may appear anywhere in the search region, the analysis usually targets the highest (local) significance, which leads to the recurring challenge of estimating the global significance of this observation. The necessity of calculating the probability for a background fluctuation to give such a peak of significance anywhere in the search region, and not simply where the significance is maximal, is commonly referred to as the look-elsewhere effect (LEE). There have been a number of studies investigating the LEE, and in our work we pay particular attention to those describing the significance field with a Gaussian process. While some studies <cit.> set the upper bound on the trials factor, which converts a local p-value into a global one, and only use a Gaussian process implicitly to link the low and high significance regions, other studies <cit.> require explicit values for the Gaussian process parameters. In this paper we establish a chain of lightweight steps from a non-linear parametric statistical model to the trials factor by estimating the covariance matrix of the significance field. To construct the estimate involving only one background only fit to the data, we apply linear expansion to the non-linear background shape. The way to calculate the covariance matrix starting from a linear model was briefly discussed by Demortier <cit.>. As part of our work, we give a strict mathematical formulation of the method and demonstrate a practical application of it to non-linear background shapes, with the estimated covariance matrix serving as a proxy for the straightforward trials factor estimate. A common input for the methods that quantify the LEE is a set of maximum likelihood fits to some number of Monte Carlo generated data realizations. They may be used to estimate the trials factor in the lower significance region, or the covariance matrix of the Gaussian process itself (the significance autocovariance). The challenge, then, is to fit enough datasets to estimate the trials factor with a satisfactory precision, while keeping the number of fits as small as possible. In high-energy physics searches for a new particle or a resonance, typically, the likelihood-ratio test statistic is used to construct the p-value for each point on a search grid. In the asymptotic regime, the test statistic follows a χ^2 distribution. For analyses that use a Gaussian process to model the significance, the number of degrees of freedom of the test statistic distribution is, typically, 1. For this case, in Chapter <ref>, we suggest a method to estimate the significance covariance matrix that makes use of a single background-only fit to the data. We replace the set of fits that were required in our previous work, with derivatives of the best-fit-to-the-data background model. Fortunately, the derivatives can often be extracted from the fit software. 
Core assumptions. In section <ref> we show that three quite generic requirements: * the background model should be well approximated by its linear expansion around the best fit parameters, * the assumption that the fluctuations in different bins of the data set are independent, * the fluctuations in each bin follow a Gaussian distribution, together, are consistent with the assumptions made in the empirical study by Ananiev & Read <cit.>, which relied on the additivity (superposition) principle for the fluctuations to empirically estimate the covariance matrix of the significances. We argue, therefore, that this work serves as a theoretical basis for the method of the Asimov set of background samples introduced in the study, and at the same time may rely on its validations. §.§ Statistical model The basic structure of a statistical model commonly used in high-energy physics experiments that search for a new particle or a resonance was described in detail in the empirical study <cit.>. For the present study, we chose the H→γγ inspired model as a benchmark, because it satisfies without approximation the second and third requirements above. The search is conducted with the likelihood ratio test statistic evaluated for each point M of the search grid ℳ. In this binned model, the expected background b_i(θ⃗), used as null-hypothesis H_0, together with the expected signal μ s_i(θ⃗) form the alternative H_1, expected signal + background estimate: n_i(μ, θ⃗, M) = b_i(θ⃗) + μ s_i(θ⃗, M), where i enumerates bins, θ⃗ denotes the vector of nuisance parameters and μ is the signal strength nuisance parameter. In the asymptotic regime (e.g. large sample), and neglecting constant terms, log-likelihoods for H_0 and H_1 may be approximated as follows: -2lnℒ_0(μ=0, θ⃗) = ∑_i ( d_i - b_i(θ⃗)/σ_i)^2, -2lnℒ_1(μ, θ⃗, M) = ∑_i ( d_i - b_i(θ⃗) - μ s_i(M, θ⃗)/σ_i)^2, where i enumerates bins, M ∈ℳ denotes the point in the search region ℳ of parameters which are not present under the background-only hypothesis, θ⃗ are the nuisance parameters, and d_i corresponds to the binned data with errors σ_i. We have assumed that the errors σ_i are independent of the nuisance parameters θ⃗. With a linear correction to σ_i it is still possible to get a closed form expression for the test statistic and significance. The calculation of the covariance would require sampling toys to average out the fluctuations. No additional fits would be required, however, so this may be a potential option for more sophisticated analyses. Our goal is to estimate the covariance matrix Σ_MN of the statistical significances Z_M and Z_N evaluated at two different points of the search region ℳ: Σ_MN = ⟨ Z_M Z_N ⟩_d, M, N ∈ℳ, Z_M = (μ̂) √(t_μ(M))∼𝒩[0, 1], t_μ(M) = -2 lnℒ_0(μ=0, θ⃗_0)/ℒ_1(μ̂, θ⃗_0 + θ⃗_1, M)∼χ^2_d.o.f=1, where t_μ(M) is the likelihood-ratio test statistic (LRT), Z_M is the so-called signed-root LRT, θ⃗_0 are the nuisance parameters that maximize the background-only likelihood ℒ_0, and θ⃗_0 + θ⃗_1 together with the signal strength μ̂ maximize the signal+background likelihood ℒ_1. We would like to remark that for the signal+background model we are fitting θ⃗ as a deviation from θ⃗_0. This is essential for the proper separation of variables in the subsequent calculations. We assume that the best fit of the backgound model b_i to the data d_i is available for the study as b_i(θ⃗̂⃗) = b̂_i. 
In order to simplify the notation, we make use of the freedom to choose the reference point for the model parameters θ⃗ and define the best fit parameters to be θ⃗̂⃗ = 0⃗. § METHOD To simplify the notation, we redefine d_i, s_i and b_i to include σ_i: d_i/σ_i↦ d_i, s_i/σ_i↦ s_i, b_i/σ_i↦ b_i. The log-likelihoods then become: -2lnℒ_0 = ∑_i ( d_i - b_i(θ⃗) )^2, -2lnℒ_1 = ∑_i ( d_i - b_i(θ⃗) - μ s_i(θ⃗) )^2. For every realization of the data (e.g. an LHC run), we expect the deviations of the fit parameters μ and θ⃗ from 0 to be small (in the absence of a signal), and therefore the first-order expansion of b_i(θ⃗) and s_i(θ⃗) around 0⃗ to be accurate enough. The log-likelihoods then are: -2lnℒ_0 = ∑_i ( d_i - b̂_i - Δ_i βθ^β)^2, -2lnℒ_1 = ∑_i ( d_i - b̂_i - Δ_i βθ^β - μ s_i(0⃗) )^2, where Δ_i α = ∂ b_i(θ⃗)/∂θ^α|_θ⃗ = 0⃗ is the Jacobian of the best-fit background model and the Einstein summation rule applies to the indices β. Since the signal model s_i contributes to the log-likelihoods eq. (<ref>) only at lowest order, thus is constant, we simplify s_i(0⃗) to s_i from now on. The equations that define optimal values of θ⃗_0, θ⃗_1, and μ then are: ∂ℒ_0/∂θ_α|_θ⃗_0∝ ∑_i (d_i - b̂_i - Δ_i βθ_0^β)·Δ_iα = 0, ∂ℒ_1/∂θ_α|_θ⃗_1, μ̂∝ ∑_i (d_i - b̂_i - Δ_i β (θ_0^β + θ_1^β) - μ̂ s_i)·Δ_iα = 0, ∂ℒ_1/∂μ|_θ⃗_1, μ̂∝ ∑_i (d_i - b̂_i - Δ_i β (θ_0^β + θ_1^β) - μ̂ s_i)· s_i = 0. To reduce the number of indices, we rewrite the expressions above with bra-ket notation: ⟨d -b̂|Δ = ⟨θ_0|Δ^⊺Δ, 0⃗ = ⟨θ_1|Δ^⊺Δ + μ̂⟨s|Δ, ⟨d - b̂|s⟩ = ⟨θ_0 + θ_1|Δ^⊺|s⟩ + μ̂⟨s|s⟩, where in eq. (<ref>) we used eq. (<ref>) to cancel the θ⃗_0 contribution. We can solve eq. (<ref>) and eq. (<ref>) for θ⃗_0 and θ⃗_1 correspondingly: ⟨θ_0| = ⟨d-b̂|Δ(Δ^⊺Δ)^-1, ⟨θ_1| = - μ̂⟨s|Δ(Δ^⊺Δ)^-1. It is important to mention that, although Δ itself is generally singular, the product Δ^⊺Δ appears to be a Hessian of -2lnℒ_1 with respect to θ⃗_1. For the background model best-fit point θ⃗ = 0⃗ to be a minimum, it is required that the Hessian be positive definite, thus Δ^⊺Δ is invertible. We substitute eq. (<ref>) and eq. (<ref>) into eq. (<ref>) and solve for μ̂: μ̂(M) = ⟨d-b̂| P |s_M⟩/⟨s_M| P |s_M⟩, P = 1 - Δ(Δ^⊺Δ)^-1Δ^⊺. An interesting and important fact is that P is a projector and it is symmetric: P^2 = P, P = P^⊺. A projector is always positive semi-definite, which means that the product below is non-negative for any non-zero s⃗: ⟨s| P |s⟩ = ⟨s| P^2 |s⟩ = ( P |s⟩)^2 ≥ 0, ∀s⃗≠0⃗ . Let us estimate the test statistic t_M: t_M = (-2 lnℒ_0) - (-2 lnℒ_1) = 2 ⟨d - b̂ - Δθ⃗_0|Δθ⃗_1 + μ̂ s⟩ + ⟨Δθ⃗_1 + μ̂ s|Δθ⃗_1 + μ̂ s⟩. We again use eq. (<ref>) to cancel the θ⃗_0 contribution and eq. (<ref>) to substitute the solution for θ⃗_1: t_M = μ̂⟨d-b̂| P |s_M⟩ = μ̂^2 ⟨s_M| P |s_M⟩. The significance Z_M, as defined in eq. (<ref>), is: Z_M = μ̂√(⟨s_M| P |s_M⟩) = ⟨d-b̂| P |s_M⟩/√(⟨s_M| P |s_M⟩). The square root in eq. (<ref>) is always defined, as the product under the square root is always positive (eq. (<ref>)). For the covariance matrix estimation, we would need to average over data. We are looking for a solution with uncorrelated fluctuations in each bin (sec. <ref>), and we recall that we normalized the errors to 1 in eq. (<ref>), therefore, the following is true: E_d{|d-b̂⟩⟨d-b̂|} = 1. 
The covariance matrix, then, is: Σ_MN = E_d{ Z_M Z_N } = E_d{⟨s_M| P |d-b̂⟩/√(⟨s_M| P |s_M⟩)⟨d-b̂| P |s_N⟩/√(⟨s_N| P |s_N⟩)} = ⟨s_M| P /√(⟨s_M| P |s_M⟩) E_d{|d-b̂⟩⟨d-b̂|} P |s_N⟩/√(⟨s_N| P |s_N⟩) = ⟨s_M|/√(⟨s_M| P |s_M⟩) P |s_N⟩/√(⟨s_N| P |s_N⟩), To see the parallel with Demortier <cit.>, one needs to think of the background model as a linear combination of vectors in Δ. Then eq. (<ref>) defines a vector |v_M⟩ = P|s_M⟩/√(⟨s_M|P|s_M⟩), which was introduced by Demortier and is orthogonal to each of the vectors constituting the background shape. The test statistic, then, can be rewritten as t_M = (⟨d - b̂|v_M⟩)^2, and the covariance can be expressed as Σ_MN = ⟨v_M|v_N⟩. where we used the symmetry and projector properties of P. It should be noted that from the data fluctuations d⃗ - b⃗̂⃗ contributing to the covariance matrix in the form Fluct. ∝ E_d{|d - b̂⟩⟨d - b̂|}, a superposition principle, relied on in ref. <cit.>, can be derived: Σ_MN = ∑_f Σ^f_MN, where f enumerates independent fluctuations in different bins. In summary, we can estimate the autocovariance matrix of the significance field from the signal model and derivatives of the background model: Σ_MN = ⟨s_M|/√(⟨s_M| P |s_M⟩) P |s_N⟩/√(⟨s_N| P |s_N⟩), M, N ∈ℳ P = 1 - Δ(Δ^⊺Δ)^-1Δ^⊺, Δ_i α = ∂ b_i(θ⃗)/∂θ^α|_θ⃗ = 0⃗. § JUSTIFICATION OF THE SET OF ASIMOV BACKGROUND SAMPLES In this section we would like to compare the derived expression eq. (<ref>) for the linear approximation of the significance covariance matrix to the empirical study <cit.> and the H →γγ inspired model introduced there. To carry out the calculations we used the SigCorr package that we developed specifically for trials factor studies, which now includes functionality for the linear approximation <cit.>. We estimate the linear approximation using eq. (<ref>) with the true parameters of the model, which were predefined in the paper. The resulting matrix shown in figure <ref> clearly resembles the one presented in the empirical study. We also show, in figure <ref>, the difference between the linear approximation computed on the model's true parameters (figure <ref>) and the empirical estimate. We confirm that the empirical covariance matrix is compatible with the linear approximation suggested in this paper within the accuracy of the empirical estimate. On the one hand, the compatibility of the linear approximation and the empirical study allows us to refer to the validations conducted in the empirical study, including those regarding trials factor estimation, and to re-apply them to the method suggested in this paper. The direct calculation of the up-crossings from the covariance matrix, described in <cit.>, becomes particularly appealing now, since it requires only a single fit of the statistical model to the data. The linear approximation, on the other hand, serves as the theoretical basis for the empirical set of Asimov background samples used to estimate the covariance matrix in the aforementioned work. § CONCLUSION In this work we proposed a novel method for the estimation of the covariance matrix of statistical significance in new particle searches using a linear expansion of the statistical model around its background-only best fit to the data. In addition to the closed form expression for the linear approximation of the significance covariance matrix, we also presented elegant expressions for the best fitted signal strength and statistical significance in this approximation. 
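For concreteness, the summary recipe above can be turned into a few lines of NumPy. The sketch below is an illustration of the formula, not the SigCorr implementation referenced later; the toy exponential background, Gaussian signal templates and per-bin uncertainties are invented purely for the example.

```python
import numpy as np

def significance_covariance(jacobian, signals, sigma):
    """Linear-approximation covariance of the significance field.

    jacobian : (n_bins, n_params) derivatives d b_i / d theta_alpha at the best fit
    signals  : (n_points, n_bins) signal templates s_i(M) for each scan point M
    sigma    : (n_bins,) per-bin uncertainties used to normalize b, s and d
    """
    D = jacobian / sigma[:, None]            # Delta with the 1/sigma_i normalization
    S = signals / sigma[None, :]             # normalized signal templates
    # projector P = 1 - Delta (Delta^T Delta)^{-1} Delta^T
    P = np.eye(D.shape[0]) - D @ np.linalg.solve(D.T @ D, D.T)
    G = S @ P @ S.T                          # <s_M| P |s_N> for all pairs (M, N)
    norm = np.sqrt(np.diag(G))
    return G / np.outer(norm, norm)          # Sigma_MN

# toy example: exponential background with 2 parameters, Gaussian signal templates
x = np.linspace(0, 1, 50)
b0, tau = 100.0, 2.0
bkg = b0 * np.exp(-tau * x)
jac = np.stack([bkg / b0, -x * bkg], axis=1)            # d b/d b0, d b/d tau
masses = np.linspace(0.1, 0.9, 30)
sig = np.exp(-0.5 * ((x[None, :] - masses[:, None]) / 0.03) ** 2)
Sigma = significance_covariance(jac, sig, sigma=np.sqrt(bkg))
print(Sigma.shape, Sigma[0, 0])                         # (30, 30) 1.0
```

In a realistic application, the Jacobian would be taken from the background-only fit (analytic or numerical derivatives provided by the fitter) and the signal templates evaluated at each scan point M.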
We proved that the suggested covariance matrix satisfies the superposition principle with regard to the fluctuations of the data, which makes it a good proxy to the covariance matrix constructed with the set of Asimov background samples<cit.>. Finally, we compared these two approaches with the example of a H →γγ inspired model and showed that the deviations are compatible with the error of the set of Asimov background samples. We, therefore, claim that all the validations conducted in the empirical study, including those regarding trials factor estimation, hold for the linear approximation suggested in this paper, and the linear approximation serves as a theoretical basis for the empirical set of Asimov background samples construction. We would like to thank Elliot Reynolds for the encouraging discussion at the HDBS Workshop at Uppsala. This research was supported by the European Union Framework Programme for Research and Innovation Horizon 2020 (2014–2021) under the Marie Sklodowska-Curie Grant Agreement No.765710. JHEP
http://arxiv.org/abs/2307.05328v1
20230711151947
ProgGP: From GuitarPro Tablature Neural Generation To Progressive Metal Production
[ "Jackson Loth", "Pedro Sarmento", "CJ Carr", "Zack Zukowski", "Mathieu Barthet" ]
cs.SD
[ "cs.SD", "cs.AI", "eess.AS" ]
ProgGP: From GuitarPro Tablature Neural Generation To Progressive Metal Production Jackson Loth, Pedro Sarmento, CJ Carr, Zack Zukowski and Mathieu Barthet Queen Mary University of London, United Kingdom Dadabots, <https://dadabots.com/> [email protected] This work is supported by the EPSRC UKRI Centre for Doctoral Training in Artificial Intelligence and Music (Grant no. EP/S022694/1). The first and second authors contributed equally. ========================================================================================== Recent work in the field of symbolic music generation has shown value in using a tokenization based on the GuitarPro format, a symbolic representation supporting guitar expressive attributes, as an input and output representation. We extend this work by fine-tuning a pre-trained Transformer model on ProgGP, a custom dataset of 173 progressive metal songs, for the purposes of creating compositions from that genre through a human-AI partnership. Our model is able to generate multiple guitar, bass guitar, drums, piano and orchestral parts. We examine the validity of the generated music using a mixed methods approach by combining quantitative analyses following a computational musicology paradigm and qualitative analyses following a practice-based research paradigm. Finally, we demonstrate the value of the model by using it as a tool to create a progressive metal song, fully produced and mixed by a human metal producer based on AI-generated music. § INTRODUCTION With advancements in computing power, new approaches to music generation have emerged. In recent years, deep learning has become a popular approach for automatic music generation, with research focusing on both the audio domain and the symbolic domain. This work extends previous work by Sarmento et al. <cit.> using a symbolic music generation model trained on DadaGP, a symbolic music dataset consisting of 26k songs of various genres <cit.>. We follow here a practice-based research approach in which a human expert music producer and music AI researchers collaborate to produce music based on machine-generated outputs. We fine-tuned the DadaGP-based model with a custom dataset of 173 progressive metal songs, which we refer to in this paper as ProgGP, with the intent of using the model to generate songs that can be recorded and turned into a fully produced progressive metal song. The model used in this work generates music in the GuitarPro format, rather than formats such as MIDI, MusicXML and ABC seen in other symbolic music generation works <cit.>. For guitar parts, GuitarPro not only encodes the pitch of each note, but also the location on the guitar fretboard where the note is meant to be played, as well as various expressive techniques (e.g. vibrato and string bending). We suggest that for certain musical genres, this format is very advantageous for a practice-based approach, as it provides much more information to an artist on how to perform the music that is generated, while still leaving room for creative interpretation. This paper presents the work that went into creating a brand new progressive metal song using neurally generated riffs and ideas that are relevant to the progressive metal genre.
As per its main contributions, we highlight: (1) ProgGP, a manually curated progressive metal GuitarPro dataset made available to the community for research purposes; (2) a fine-tuned guitar tablature generative model for the creation of progressive metal tablatures; (3) heuristics for assessing whether generated music holds traits of the desired genre; (4) a practice-based research approach relying on a human-AI partnership where neurally-generated music is selected, edited, and integrated into a composition by a human producer. We also critically examine how to use neurally-generated music to foster creativity, inspire new ideas and improve the writing workflow of artists. We hope that this work will stir more research into human-AI interaction in the musical domain. § BACKGROUND §.§ Symbolic Music Generation Using Deep Learning Recent advances in deep learning have led to promising results in the field of music generation <cit.>, with techniques such as Variational Autoencoders (VAEs) <cit.>, Generative Adversarial Networks (GANs) <cit.>, Recurrent Neural Networks (RNNs) <cit.> <cit.>, and Transformers <cit.> being increasingly used. The Transformer model <cit.> has enabled steep improvements in natural language processing (NLP) tasks and has been adapted for generating symbolic piano music in Huang et al.'s Music Transformer <cit.>. Other notable works, such as Musenet <cit.> and Pop Music Transformer <cit.>, have further built on this approach to generate multi-instrument music and improve the generated music's rhythmic structure. However, the task of guitar tablature music generation has received limited research attention until the recent release of the DadaGP <cit.> dataset, comprising songs in both GuitarPro format, a tablature edition software, and a dedicated textual token format. An initial example of guitar tablature generation work is Chen et al.'s fingerstyle guitar generator <cit.>, despite not being based on the GuitarPro format. More recent works that explore the DadaGP dataset include GTR-CTRL <cit.>, proposing a method for guitar tablature generation with control over instrumentation and musical genre, as well as LooperGP <cit.>, enabling to generate loopable music excerpts with applications for live coding performance. §.§ Practice-Based Research and Computer Music Many works deal with the notion of `practice' in research. Practice-based research is generally concerned with the knowledge gained through practice and the outcomes of that practice, while practice-led research leads to new understandings about practice itself <cit.>. Benford et al. describe this kind of research as consisting of three interconnected activities which inform each other in different ways: practice, theory and studies <cit.>. However, they note challenges in conducting this research with balancing potentially different researcher and artist goals, as well as ethical concerns that can arise through artistic use of new technologies. Artistic uses of new technologies involving AI can be difficult due to the difficulty of prototyping new AI systems and the number of ways that AI can respond to users in different contexts <cit.>. Amershi et al. <cit.> provide guidelines on dealing with such unpredictable AI systems, mostly focusing on keeping the user informed on the system's capabilities and understanding its outputs. AI systems have seen use in musical practice-based research <cit.> <cit.> with the Folk-RNN model by Sturm et al. 
being noted to have a number of impacts on musical creation such as a way to inspire ideas, break habits, and a sense of creating something that could not have been created otherwise. § PRACTICE-BASED RESEARCH METHODOLOGY §.§ Human-AI Partnership In this work, the first author, a music AI researcher and progressive metal producer, adopted the practice-based research approach described below: * Use a deep learning model to generate music in the style of the producer's preferred genre, progressive metal; * Evaluate the outputs of the model using a mixed method evaluation approach, combining objective metrics with subjective evaluation; * Craft a song using generated outputs based on outcomes from the evaluation; * Learn and record the song; * Analyse and reflect on the overall music production process. The work aims to better understand the successes and issues of the deep learning model in order to help the research community use and improve the model. We also publicly release the dataset used to fine-tune the deep learning model to support similar kinds of research. Finally, we develop a music production process which can be used to efficiently integrate neurally-generated content within a human composition. The artistic content that was recorded can be listened to online and could lead to public performances. For the neural music generation, we use a model pre-trained on the DadaGP <cit.> dataset, a dataset consisting of over 26k songs of various genres. The model is trained to produce songs in a tokenized symbolic format, which can be converted to the more commonly used GuitarPro format. This model is further fine-tuned on ProgGP, a curated dataset of progressive metal songs. This fine-tuned model can then be used to generate new songs in the style of progressive metal. For clarification, we do not assess timbre quality aspects of progressive metal since we are working in the symbolic domain, despite timbre playing an important role in the genre (e.g. heavily distorted guitars, loud and punchy snare and kick drums, etc). However, we do take into account timbre identity through a distinction between distorted and clean guitars in our model. §.§ Fine-Tuning Dataset ProgGP, the fine-tuning dataset used in our experiments, consists of 173 songs largely from the progressive metal genre[Some songs included in the dataset are from adjacent genres (e.g. technical death metal). ]. The songs were obtained using Songsterr[<https://www.songsterr.com/>], a website that hosts GuitarPro files and allows playback using an web-based GuitarPro player. The tablatures (tabs) obtained from this website were not official tabs created by the artists of the songs, but rather created and maintained by the online community. Due to this, there is no guarantee that the tabs used in the dataset are perfectly accurate to the songs they are based on. However, each was verified to at least mostly capture the spirit of the original performance during the construction of the dataset. We limited the dataset to only songs in which the bass guitar and drums have also been transcribed, since the pre-trained model was trained on fully transcribed songs. This however limited the scope of the dataset, as many songs were only available with guitar transcriptions, rather than the full band. Additionally, the model only supports a few common guitar tunings, and only 6 and 7 string guitars. 
Many bands in this genre use more unique guitar tunings and/or 8 string guitars, so some artists that might be important in the genre of progressive metal may have limited songs or be absent entirely from the dataset. All this led to some artists dominating the dataset more than others. A word cloud representation of the artists used in the ProgGP dataset can be seen in Figure <ref>. We made ProgGP[<https://github.com/otnemrasordep/ProgGP>] available upon request, together with a list of songs per artist. §.§ Model Fine-Tuning The pre-trained model is based on the Transformer-XL <cit.> architecture, a modified version of the original Transformer <cit.> that is more capable of learning longer-term dependency. The pre-trained model used in our experiments was trained for 200 epochs on the DadaGP <cit.> dataset. We trained the model on the fine-tuning dataset for an additional 65 epochs, at which the loss dropped low enough to trigger early stopping. Checkpoints were saved at every five epochs or training, resulting in 13 models at various stages of fine tuning. §.§ Neural Generation A new song can be generated by feeding the model a prompt (set of instructions) in the form of a tokenized GuitarPro file. This will be the starting point of the generation, and the model will attempt to continue the song after the prompt. The tempo (in BPM) used for the generated song is taken from the prompt and the number of tokens to be generated is used as a parameter during inference. In DadaGP token format, a token can be a single note, rest, or expressive technique. Prompts used in the generation experiments ranged from a single note, a few measures from songs in the training set, and a few measures of songs not in the training set. The number of generated songs and the model from which to generate the songs can also be specified. Empirical analysis of the generated songs have allowed us to identify common structural patterns in generated songs, which we refer to as `sections', typically consisting of a riff that is repeated one or more times with slight variations. The songs will typically start by repeating the notes from the prompt, with minor changes. It will then generate two or three sections afterward, each somewhat changing the feel of the song. While progressive metal songs can contain a large number of different riffs, they tend to build on one another and use references to musical motifs found throughout the song and throughout other songs by the same artist. Between The Buried And Me, a band with a large presence in ProgGP, is particularly well known for this <cit.>. This is a difficult thing to capture within a model however, as while the different sections seem to fit together naturally, they do not necessarily reference one another. Together with this submission, we release all the generated compositions on the undertaken experiments, cherry-picking some examples [Available at: <https://drive.google.com/drive/folders/1xaejTcUrPncE4hoyONhSzgS0a5TRo6G_?usp=share_link>]. § ANALYSING AI-GENERATED MUSIC We used a mixed method approach to better understand the outputs of the fine-tuned models, their strengths and weaknesses, and to help the producer select a model for further music production use. This was done by analysing the generated music from each model objectively through the use of common symbolic music metrics, as well as listening through many generated examples and analysing them subjectively in the context of the author's own knowledge of progressive metal. 
§.§ Objective Metrics Given the difficulties in assessing the quality of neurally-generated music without using a listening test, specially in the symbolic domain, we resorted on commonly used metrics from the literature, implemented in the MusPy package <cit.>. For this evaluation, 173 songs were generated from each of the thirteen fine-tuned models, the same number of songs present within ProgGP, in order to maintain consistency when comparing the songs generated to the songs present in ProgGP. The prompt used in this analysis was a single low E note on guitar and bass guitar, and a kick and cymbal hit on drums. This was chosen in order to minimize the influence of the prompt as much as possible, as per the findings in <cit.>. In previous work, Sarmento et al. <cit.> used pitch class entropy (PCE), a measure of the entropy of pitch classes used within a song, to evaluate their model. The PCE of the fine tuned models can be seen in Figure <ref> (to ease visualization, we omit plots from models after epoch 30). The models fine-tuned for 15 and 20 epochs seem to have a distribution closer to ProgGP. The models fine-tuned for 5 and 10 epochs and beyond 20 epochs generally have a lower mean than the 15 and 20 epoch models. We hypothesize that this could be due to overfitting, causing the model to get stuck on certain sections or notes and repeating them, something seen in the generated songs by the more fine-tuned models. This would lower the pitch class entropy of a model's outputs rather than push it closer to that of the training data which is higher. The rest of the metrics can be seen in Figure <ref>. They include drum pattern consistency (DPC), number of pitch classes (NPC), number of pitches (NP), pitch entropy (PE), pitch range (PR), scale consistency (SC), polyphony (Pol) and polyphony rate (PolR). These metrics, while not necessarily giving a definitive idea of the performance of a model, help us understand how the output of certain models matches the training data. They also give an idea of certain characteristics of the music that each model tends to generate. An in-depth definition of each can be found in MusPy's package documentation[<https://salu133445.github.io/muspy/metrics.html>]. The Kullback-Leibler divergence (KLD), a measure of relative entropy between the true probability distribution and a sample probability distribution, was calculated for each of the fine-tuned models (ProgGP is used as groundtruth to compared against generated songs). The KLD results can be seen in Table <ref>. The model fine-tuned for 15 epochs scores the lowest for most metrics. The only exceptions are polyphony and polyphony rate, in which the model fine-tuned for 20 epochs scores the lowest. This is expected given that the model trained for 15 epochs seems to be more similar to ProgGP for most of the metrics than the other models. §.§ Subjective Analysis Subjectively evaluating generated progressive metal songs first requires a definition of progressive metal. This definition is hard to specify, as music genres are not always straightforward. Nevertheless, there are a number of tropes that progressive metal songs tend to have. Robinson <cit.> describes several of these such as polyrhythms, syncopated chugging on low notes and uncommon time signatures. These can be seen in many generated songs, particularly uncommon time signatures and syncopated rhythms. 
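To make the objective part of this evaluation concrete, the sketch below shows one way the per-song MusPy metrics listed above and the KL divergence against ProgGP could be computed. The folder names, the assumption that the tablatures are exported to MIDI, and the histogram binning are illustrative choices, not the exact evaluation script used here.

```python
import numpy as np
import muspy
from pathlib import Path
from scipy.stats import entropy

# MusPy metric functions corresponding to the metrics named above.
METRICS = {
    "pitch_class_entropy": muspy.pitch_class_entropy,
    "pitch_entropy": muspy.pitch_entropy,
    "n_pitches": muspy.n_pitches_used,
    "n_pitch_classes": muspy.n_pitch_classes_used,
    "pitch_range": muspy.pitch_range,
    "scale_consistency": muspy.scale_consistency,
    "polyphony": muspy.polyphony,
    "polyphony_rate": muspy.polyphony_rate,
    "drum_pattern_consistency": muspy.drum_pattern_consistency,
}

def song_metrics(folder):
    """Compute every metric for each song in a folder (assumed MIDI exports)."""
    rows = []
    for path in sorted(Path(folder).glob("*.mid")):
        music = muspy.read_midi(str(path))
        rows.append({name: fn(music) for name, fn in METRICS.items()})
    return rows

def kld(reference, generated, n_bins=30):
    """KL divergence between histograms of one metric (reference = ProgGP)."""
    lo, hi = min(reference + generated), max(reference + generated)
    bins = np.linspace(lo, hi, n_bins + 1)
    p, _ = np.histogram(reference, bins=bins, density=True)
    q, _ = np.histogram(generated, bins=bins, density=True)
    eps = 1e-8  # avoid division by zero in empty bins
    return float(entropy(p + eps, q + eps))

# Hypothetical folder names for the ground-truth and generated songs.
ref = song_metrics("proggp_midi")
gen = song_metrics("generated_epoch15_midi")
for name in METRICS:
    print(name, kld([r[name] for r in ref], [g[name] for g in gen]))
```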
Similarly to the conclusions from GTR-CTRL <cit.>, we empirically found that the prompt has a reasonably large amount of influence over the generated song, but this varies between songs. The model tends to only generate notes for instruments contained in the prompt (e.g if there exists two guitars, one bass guitar and drums within the prompt, the model will only generate new notes for those instruments). It does however occasionally generate an extra guitar or keyboard track (https://drive.google.com/file/d/1x-9MJg5UK5zWNm5CJns730T0tlBBiTXv/view?usp=share_linkid-00)[Song ids are hyperlinked to facilitate listening.], but these scenarios were found to be rare. Generated guitar parts for multiple guitar tracks tend to be mostly identical, mirroring the recording technique of two guitars playing identical parts in order to create width in a song mix. Interestingly however, the model will sometimes generate a harmony for a particular guitar line where one guitar plays some kind of melodic line and the other playing the same line with the pitch shifted (https://drive.google.com/file/d/1e0c9X-X8im9LKlbicVTOMygCj-1Lg-4K/view?usp=share_linkid-01). It also occasionally generates guitar solos and rhythmic accompaniment (https://drive.google.com/file/d/18m38SUPTeHysIwgGuiVxO3WhK6iYGT4C/view?usp=share_linkid-002), with one guitar playing low-pitched chords while the other plays fast single high-pitched notes. The model generates very impressive drum parts in addition to the guitar and bass guitar (https://drive.google.com/file/d/1xJGQZNHNaGU18uBEQ_9O01yV1oq7BPHr/view?usp=share_linkid-03). The timing of the kick drum consistently lines up with the notes of the bass guitar (https://drive.google.com/file/d/1sFeJTB5Gei9GYnTmeLoRrKmRi1TEjgZR/view?usp=share_linkid-04). Additionally, several common drum beats heard in many metal songs can be generated (e.g. blast beats (https://drive.google.com/file/d/1ScBriV67HH-KTdrYebSjyWlnl7GUDSt1/view?usp=share_link(id-05)). Many songs also feature drum fills at the end of a section before transitioning into a new section. It is possible that the model excels at generating drum parts due to the limited number of possible notes compared to pitch-based instruments such as guitar and bass guitar. This being said, the generated drum parts would likely need further editing if used in an actual song in order to convey more of the nuance heard in progressive metal drumming. § SONG PRODUCTION A short progressive metal song was recorded, produced and mixed using one of the fine-tuned models to generate the initial musical ideas and song structure. This was done by the first author, himself a progressive metal producer and music AI researcher. The intention with this production was to utilize the generated songs as a way to bolster creativity and inspire ideas for music in a way in which the artist's creativity can still be applied to integrate the generated content into a song of their own. Section <ref> describes a high level overview of the song creation process using the AI system in collaboration with a music producer, while Section <ref> presents a detailed analysis of the generated song and what was changed in order to suit the production. §.§ Process The process of creating the song can be broken into the following steps: * A prompt is selected and songs are generated using one a fine-tuned model. One is chosen to be the starting point of the song based on how it inspires the producer. * The generated song is loaded into a guitar tab reader software (e.g. GuitarPro). 
* Drums and bass are exported to MIDI format and loaded into a digital audio workstation (DAW), along with appropriate virtual instruments. * The guitar parts are learned by the guitarist producer from the generated guitar tab and subsequently recording in the DAW. During the recording of the guitar, changes can be made to suit the producer's idea of the direction of the song. * The drum and bass guitar MIDI are edited to suit any changes made to the guitar, or to better serve the song. This may be done in conjunction with the previous step and may require some back and forth in order to fully develop the song. These steps can be repeated as many times as desired to build out a complete song. They may even be skipped if the producer is inspired by the ideas to create their own parts based on what was already generated. Virtual instruments for the bass guitar and drums are not strictly needed, but can assist in speeding up the workflow. It was found that this strategy allowed for a song to be developed quickly and minimized any extra work that may distract from creativity (e.g. having to record bass guitar parts in addition to the guitar parts or manually programming drum parts). In the next section we focus on a particular example generated using the first two measures of “Stabwound" by Necrophagist as the prompt. The song was generated using the model fine-tuned on ProgGP for 15 epochs. The structure of the generated song was not changed, as we felt that it had many interesting qualities. The guitar, drums and bass were changed slightly to better fit the vision that the generated song inspired. Additional sounds such as synths, organs and impact samples were also added to flesh out the song and increase interest in the production. The final mix and the original generated song in both PDF and GuitarPro format are available online[Available at: <https://drive.google.com/drive/folders/1y2xX3WIQeOz6Z8FoN2VP3kzWvOqYk8QI?usp=sharing>]. §.§ Song and Production Analysis The first section of the song is made up of an idea which takes up 4 measures. This idea is repeated with the second repetition skipping the first measure of the motif and adding on a new lick in the final measure which helps transition the section into the next one. Each repetition has a similar structure: three measures of 4/4 and a final measure with an odd time signature. The first repetition adds a 5/4 time signature to the end, while the second section uses a 6/4 time signature. Time signature changes are common in progressive metal <cit.>, and it is interesting to see the model generate this time signature change in both repetitions of the initial idea without simply repeating the idea. The changes in the second repetition of the idea feel like something a real songwriter might intentionally write, as if the model is building on the initial idea to create more excitement before the next section. The second section shows off a major flaw of the model: it does not always generate tabs or ideas that can be reasonably played by a human. Since a specific pitch can be played at multiple different areas of the guitar fretboard, tabs specify exactly which fret and string a note should be played on. However, the model will sometimes generate fretboard locations that are very unnatural to play by a guitarist. The tabs had to be slightly modified in order to record this section, however keeping the same notes. 
The main idea in this section is a repeated line of seven 8th notes followed by a chromatic note run and a lick that changes the modality from major to minor halfway through. It is difficult to know if this is something the model learned through training or if this note selection was more random. The section ends with four simple chords to transition into the next one. These were changed to be more dissonant chords in the recorded version. The final section is another repeated riff of seven notes used in a slightly more musical way than the previous section. Each repetition uses the same relative intervals between notes to outline two different chords, F# minor and G# minor. It then ends the section with two measures of 4/4, helping the song end in a slightly more familiar and natural way. A lick from the previous section is used in this ending in the tab, which helps tying the two sections together and increases cohesion. While the structures and guitar riffs remained largely unchanged, the drums did not support the rest of the song as well as they could. While many generated songs have impressive sounding drums, the drum parts generated in this particular song did not quite hold up to professional standards. The first section mostly had a snare fill which did not enhance the interesting aspects of the guitar and bass parts. This was changed to use a more steady snare hit and cymbals on the downbeats of the measure. A stack cymbal was used in the first repetition, but was changed to a china cymbal in the second repetition to add excitement to the changes between the two repetitions. A drum fill was also added in during the last few beats of the section to help highlight the transition between the two sections. The drums for the second section were mostly the same as the generated drums. The generated snare drum placement in this section accents the 7/4 time signature. However, the ride cymbals in the second repetition were changed to china cymbals which hit on the downbeats of the measure, and the kick drum was changed to be constant eight notes. This was done to push the energy up as the section finishes. The drums in the final section were kept mostly unchanged, with a small change to the drum fill at the end. A comparison from a section of the song of the originally generated MIDI and the edited MIDI can be seen in Figure <ref>. The process showed that while the model can excel at generating inspiring progressive metal ideas, a decent amount of work is still needed to make the ideas playable and professional sounding. Drums in particular, while containing good initial ideas, need a lot of editing to make them sound natural and support the ideas in the guitar and bass guitar parts. It is not as simple as directly importing the drum and bass MIDI from the generated song, a human producer is still required to make the ideas into something that is satisfying to listen to and convey emotion properly. That being said, the entire writing and production process only took three to four hours over two sessions, with most of the time being spent practicing the guitar parts in order to play them to a sufficient level for recording. The producer felt that the AI system helps inspiring new ideas and producing a good sounding demo extremely quickly, with an amazing level of detail in both the kinds of notes generated and song structure. It is easy to imagine combining multiple generated ideas together in this way to produce a full length song. 
§ CONCLUSION AND FUTURE WORK We have presented a deep learning model capable of generating songs in the style of progressive metal. We released ProgGP, a symbolic music dataset consisting of 173 progressive metal songs, which was constructed and used to fine-tune a pretrained transformer model. The models fine-tuned for only a relatively small number of epochs, such as 15 and 20 epochs, produce interesting results and are shown to exemplify traits of the fine-tuning data in nine different symbolic music metrics. This analysis was used to inform the selection of a generated song, which was then turned into a full progressive metal production. Finally, we presented an analysis of the generated song and how it was used to augment the producer's own creativity. We hope to continue this collaboration between human musicians and the AI system in a possible professionally recorded album and live performance of AI-assisted progressive metal songs.
http://arxiv.org/abs/2307.04725v1
20230710173416
AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning
[ "Yuwei Guo", "Ceyuan Yang", "Anyi Rao", "Yaohui Wang", "Yu Qiao", "Dahua Lin", "Bo Dai" ]
cs.CV
[ "cs.CV", "cs.GR", "cs.LG" ]
AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning Yuwei Guo^1,2 Ceyuan Yang^1* Anyi Rao^3 Yaohui Wang^1 Yu Qiao^1 Dahua Lin^1,2 Bo Dai^1 ^1Shanghai AI Laboratory ^2The Chinese University of Hong Kong ^3Stanford University <https://animatediff.github.io/>

[Teaser figure: We present AnimateDiff, an effective framework for extending personalized text-to-image (T2I) models into an animation generator without model-specific tuning. Once it has learned motion priors from large video datasets, AnimateDiff can be inserted into personalized T2I models, either trained by the user or downloaded directly from platforms like CivitAI <cit.> or Huggingface <cit.>, and can generate animation clips with proper motions.]

^*Corresponding Author. With the advance of text-to-image models (e.g., Stable Diffusion <cit.>) and corresponding personalization techniques such as DreamBooth <cit.> and LoRA <cit.>, everyone can manifest their imagination into high-quality images at an affordable cost. Subsequently, there is a great demand for image animation techniques to further combine generated static images with motion dynamics. In this report, we propose a practical framework to animate most of the existing personalized text-to-image models once and for all, saving the effort of model-specific tuning. At the core of the proposed framework is the insertion of a newly initialized motion modeling module into the frozen text-to-image model, which is then trained on video clips to distill reasonable motion priors. Once trained, by simply injecting this motion modeling module, all personalized versions derived from the same base T2I model readily become text-driven models that produce diverse and personalized animated images. We conduct our evaluation on several public representative personalized text-to-image models across anime pictures and realistic photographs, and demonstrate that our proposed framework helps these models generate temporally smooth animation clips while preserving the domain and diversity of their outputs. Code and pre-trained weights will be publicly available at our project page (<https://animatediff.github.io/>). § INTRODUCTION In recent years, text-to-image (T2I) generative models <cit.> have received unprecedented attention both within and beyond the research community, as they provide high visual quality and text-driven controllability, i.e., a low-barrier entry point for non-researcher users such as artists and amateurs to conduct AI-assisted content creation. To further stimulate the creativity of existing T2I generative models, several lightweight personalization methods, such as DreamBooth <cit.> and LoRA <cit.>, have been proposed to enable customized fine-tuning of these models on small datasets with a consumer-grade device such as a laptop with an RTX 3080, after which these models can produce customized content with significantly boosted quality. In this way, users can introduce new concepts or styles to a pre-trained T2I model at a very low cost, resulting in the numerous personalized models contributed by artists and amateurs on model-sharing platforms such as CivitAI <cit.> and Huggingface <cit.>.
While personalized text-to-image models trained with DreamBooth or LoRA have successfully drawn attention through their extraordinary visual quality, their outputs are static images. Namely, there is a lack of temporal degree of freedom. Considering the broad applications of animation, we want to know whether we can turn most of the existing personalized T2I models into models that produce animated images while preserving the original visual quality. Recent general text-to-video generation approaches <cit.> propose incorporating temporal modeling into the original T2I models and tuning the models on the video datasets. However, it becomes challenging for personalized T2I models since the users usually cannot afford the sensitive hyper-parameter tuning, personalized video collection, and intensive computational resources. In this work, we present a general method, , to enable the ability to generate animated images for any personalized T2I model, requiring no model-specific tuning efforts and achieving appealing content consistency over time. Given that most personalized T2I models are derived from the same base one (e.g. Stable Diffusion <cit.>) and collecting the corresponding videos for every personalized domain is outright infeasible, we turn to design a motion modeling module that could animate most of personalized T2I models once for all. Concretely, a motion modeling module is introduced into a base T2I model and then fine-tuned on large-scale video clips <cit.>, learning the reasonable motion priors. It is worth noting that the parameters of the base model remain untouched. After the fine-tuning, we demonstrate that the derived personalized T2I could also benefit from the well-learned motion priors, producing smooth and appealing animations. That is, the motion modeling module manages to animate all corresponding personalized T2I models without further efforts in additional data collecting or customized training. We evaluate our on several representative DreamBooth <cit.> and LoRA <cit.> models covering anime pictures and realistic photographs. Without specific tuning, most personalized T2I models could be directly animated by inserting the well-trained motion modeling module. In practice, we also figured out that vanilla attention along the temporal dimension is adequate for the motion modeling module to learn the proper motion priors. We also demonstrate that the motion priors can be generalized to domains such as 3D cartoons and 2D anime. To this end, our could lead to a simple yet effective baseline for personalized animation, where users could quickly obtain the personalized animations, merely bearing the cost of personalizing the image models. § RELATED WORKS Text-to-image diffusion models. In recent years, text-to-image (T2I) diffusion models have gained much popularity both in and beyond the research community, benefited by the large-scale text-image paired data <cit.> and the power of diffusion models <cit.>. Among them, GLIDE <cit.> introduced text conditions to the diffusion model and demonstrated that classifier guidance produces more visually pleasing results. DALLE-2 <cit.> improves text-image alignment via CLIP <cit.> joint feature space. Imagen <cit.> incorporates a large language model <cit.> pre-trained on text corpora and a cascade of diffusion model to achieve photorealistic image generation. 
Latent diffusion model <cit.>, i.e., Stable Diffusion, proposed to perform the denoising process in an auto-encoder's latent space, effectively reducing the required computation resources while retaining generated images' quality and flexibility. Unlike the above works that share parameters during the generation process, eDiff-I <cit.> trained an ensemble of diffusion models specialized for different synthesis stages. Our method is built upon a pre-trained text-to-image model and can be adapted to any tuning-based personalized version. Personalize text-to-image model. While there have been many powerful T2I generative algorithms, it's still unacceptable for individual users to train their models due to the requirements for large-scale data and computational resources, which are only accessible to large companies and research organizations. Therefore, several methods have been proposed to enable users to introduce new domains (new concepts or styles, which are represented mainly by a small number of images collected by users) into pre-trained T2I models <cit.>. Textual Inversion <cit.> proposed to optimize a word embedding for each concept and freeze the original networks during training. DreamBooth <cit.> is another approach that fine-tunes the whole network with preservation loss as regulation. Custom Diffusion <cit.> improves fine-tuning efficiency by updating only a small subset of parameters and allowing concept merging through closed-form optimization. At the same time, DreamArtist <cit.> reduces the input to a single image. Recently, LoRA <cit.>, a technique designed for language model adaptation, has been utilized for text-to-image model fine-tuning and achieved good visual quality. While these methods are mainly based on parameter tuning, several works have also tried to learn a more general encoder for concept personalization <cit.>. With all these personalization approaches in the research community, our work only focuses on tuning-based methods, i.e., DreamBooth <cit.> and LoRA <cit.>, since they maintain an unchanged feature space of the base model. Personalized T2I animation. Since the setting in this report is newly proposed, there is currently little work targeting it. Though it is a common practice to extend an existing T2I model with temporal structures for video generation, existing works <cit.> update whole parameters in the networks, hurting the domain knowledge of the original T2I model. Recently, several works have reported their application in animating a personalized T2I model. For instance, Tune-a-Video <cit.> solves the one-shot video generation task via slight architecture modifications and sub-network tuning. Text2Video-Zero <cit.> introduces a training-free method to animate a pre-trained T2I model via latent wrapping given a predefined affine matrix. A recent work close to our method is Align-Your-Latents <cit.>, a text-to-video (T2V) model which trains separate temporal layers in a T2I model. Our method adopts a simplified network design and verifies the effectiveness of this line of approach in animating personalized T2I models via extensive evaluation on many personalized models. § METHOD In this section, <ref> first introduces preliminary knowledge about the general text-to-image model and its personalized variants. Next, <ref> presents the formulation of personalized animation and the motivation of our method. 
Finally, <ref> describes the practical implementation of the motion modeling module in , which animates various personalized models to produce appealing synthesis. §.§ Preliminaries General text-to-image generator. We chose Stable Diffusion (SD), a widely-used text-to-image model, as the general T2I generator in this work. SD is based on the Latent Diffusion Model (LDM) <cit.>, which executes the denoising process in the latent space of an autoencoder, namely ℰ(·) and 𝒟(·), implemented as VQ-GAN <cit.> or VQ-VAE <cit.> pre-trained on large image datasets. This design confers an advantage in reducing computational costs while preserving high visual quality. During the training of the latent diffusion networks, an input image x_0 is initially mapped to the latent space by the frozen encoder, yielding z_0 = ℰ(x_0), then perturbed by a pre-defined Markov process: q(z_t | z_t-1) = 𝒩(z_t; √(1-β_t)z_t-1, β_t𝐼) for t = 1,…, T, with T being the number of steps in the forward diffusion process. The sequence of hyper-parameters β_t determines the noise strength at each step. The above iterative process can be reformulated in a closed-form manner as follows: z_t = √(α̅_̅t̅)z_0 + √(1-α̅_̅t̅)ϵ, ϵ∼𝒩(0, 𝐼) where α̅_̅t̅ = ∏_i=1^tα_t, α_t = 1 - β_t. Stable Diffusion adopts the vanilla training objective as proposed in DDPM <cit.>, which can be expressed as: ℒ = 𝔼_ℰ(x_0), y, ϵ∼𝒩(0, 𝐼), t‖ϵ - ϵ_θ(z_t, t, τ_θ(y)) ‖_2^2 where y is the corresponding textual description, τ_θ(·) is a text encoder mapping the string to a sequence of vectors. In SD, ϵ_θ(·) is implemented with a modified UNet <cit.> that incorporates four downsample/upsample blocks and one middle block, resulting in four resolution levels within the networks' latent space. Each resolution level integrates 2D convolution layers as well as self- and cross-attention mechanisms. Text model τ_θ(·) is implemented using the CLIP <cit.> ViT-L/14 text encoder. Personalized image generation. As general image generation continues to advance, increasing attention has been paid to personalized image generation. DreamBooth <cit.> and LoRA <cit.> are two representative and widely used personalization approaches. To introduce a new domain (new concepts, styles, etc.) to a pre-trained T2I model, a straightforward approach is fine-tuning it on images of that specific domain. However, directly tuning the model without regularization often leads to overfitting or catastrophic forgetting, especially when the dataset is small. To overcome this problem, DreamBooth <cit.> uses a rare string as the indicator to represent the target domain and augments the dataset by adding images generated by the original T2I model. These regularization images are generated without the indicator, thus allowing the model to learn to associate the rare string with the expected domain during fine-tuning. LoRA <cit.>, on the other hand, takes a different approach by attempting to fine-tune the model weights' residual, that is, training Δ W instead of W. The weight after fine-tuning is calculated as W' = W + αΔ W, where α is a hyper-parameter that adjusts the impact of the tuning process, thus providing more freedom for users to control the generated results. To further avoid overfitting and reduce computational costs, Δ W ∈ℝ^m × n is decomposed into two low-rank matrices, namely Δ W = AB^T, where A ∈ℝ^m × r, B ∈ℝ^n × r, r ≪ m, n. In practice, only the projection matrices in the transformer blocks are tuned, further reducing the training and storage costs of a LoRA model. 
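To make the low-rank update concrete, below is a minimal sketch of a LoRA-style wrapper around a frozen linear layer in PyTorch. It follows the ΔW = AB^T parameterization described above; the class name, the initialization, and the decision to materialize ΔW explicitly are illustrative assumptions rather than the implementation used by the cited works.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen pretrained linear layer W and learns W' = W + alpha * A @ B^T."""

    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # pretrained weights stay frozen
            p.requires_grad_(False)
        m, n = base.out_features, base.in_features
        # Low-rank factors: delta_W = A @ B^T with A in R^{m x r}, B in R^{n x r}.
        # A is zero-initialized so the wrapped layer starts identical to the base layer.
        self.A = nn.Parameter(torch.zeros(m, rank))
        self.B = nn.Parameter(torch.randn(n, rank) * 0.01)
        self.alpha = alpha

    def forward(self, x):
        delta_w = self.A @ self.B.t()             # (m, n), rank at most r
        return self.base(x) + self.alpha * (x @ delta_w.t())
```

In a T2I backbone, only the attention projection layers would typically be wrapped this way, matching the practice noted above, which keeps the number of trainable parameters and the size of the shared checkpoint small.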
Compared to DreamBooth which stores the whole model parameters once trained, a LoRA model is much more efficient to train and share between users. §.§ Personalized Animation Animating a personalized image model usually requires additional tuning with a corresponding video collection, making it much more challenging. In this section, we target personalized animation, which is formally formulated as: given a personalized T2I moded, e.g., a DreamBooth <cit.> or LoRA <cit.> checkpoint trained by users or downloaded from CivitAI <cit.> or Huggingface <cit.>), the goal is to transform it into an animation generator with little or no training cost while preserving its original domain knowledge and quality. For example, suppose a T2I model is personalized for a specific 2D anime style. In that case, the corresponding animation generator should be capable of generating animation clips of that style with proper motions, such as foreground/background segmentation, character body movements, etc. To achieve this, one naive approach is to inflate a T2I model <cit.> by adding temporal-aware structures and learning reasonable motion priors from large-scale video datasets. However, for the personalized domains, collecting sufficient personalized videos is costly. Meanwhile, limited data would lead to the knowledge loss of the source domain. Therefore, we choose to separately train a generalizable motion modeling module and plug it into the personalized T2I at inference time. By doing so, we avoid specific tuning for each personalized model and retain their knowledge by keeping the pre-trained weights unchanged. Another crucial advantage of such an approach is that once the module is trained, it can be inserted into any personalized T2I upon the same base model with no need for specific tuning, as validated in the following experiments. This is because the personalizing process scarcely modifies the feature space of the base T2I model, which is also demonstrated in ControlNet <cit.>. §.§ Motion Modeling Module Network Inflation. Since the original SD can only process image data batches, model inflation is necessary to make it compatible with our motion modeling module, which takes a 5D video tensor in the shape of batch × channels × frames × height × width as input. To achieve this, we adopt a solution similar to the Video Diffusion Model <cit.>. Specifically, we transform each 2D convolution and attention layer in the original image model into spatial-only pseudo-3D layers by reshaping the frame axis into the batch axis and allowing the network to process each frame independently. Unlike the above, our newly inserted motion module operates across frames in each batch to achieve motion smoothness and content consistency in the animation clips. Details are demonstrated in the <ref>. Module Design. For the network design of our motion modeling module, we aim to enable efficient information exchange across frames. To achieve this, we chose vanilla temporal transformers as the design of our motion module. It is worth noting that we have also experimented with other network designs for the motion module and found that a vanilla temporal transformer is adequate for modeling the motion priors. We leave the search for better motion modules to future works. The vanilla temporal transformer consists of several self-attention blocks operating along the temporal axis (<ref>). 
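The next paragraph spells out how the module processes a feature map. As a rough companion sketch (not the released AnimateDiff code), a temporal self-attention block of this kind could look as follows: spatial positions are folded into the batch axis, self-attention runs along the frame axis with sinusoidal position encodings, and the output projection is zero-initialized. Shapes, module names, and hyper-parameters are assumptions for illustration; channels are assumed even and divisible by the number of heads.

```python
import math
import torch
import torch.nn as nn

class TemporalSelfAttention(nn.Module):
    """Vanilla self-attention over the frame axis of a (B, C, F, H, W) feature map."""

    def __init__(self, channels: int, num_heads: int = 8, max_frames: int = 32):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.proj_out = nn.Linear(channels, channels)
        nn.init.zeros_(self.proj_out.weight)   # zero-init so the module is a no-op at start
        nn.init.zeros_(self.proj_out.bias)
        # Fixed sinusoidal position encoding along the temporal axis (frames <= max_frames).
        pos = torch.arange(max_frames).unsqueeze(1)
        div = torch.exp(torch.arange(0, channels, 2) * (-math.log(10000.0) / channels))
        pe = torch.zeros(max_frames, channels)
        pe[:, 0::2] = torch.sin(pos * div)
        pe[:, 1::2] = torch.cos(pos * div)
        self.register_buffer("pe", pe)

    def forward(self, z):                      # z: (B, C, F, H, W)
        b, c, f, h, w = z.shape
        # Fold spatial positions into the batch axis -> sequences of length F.
        x = z.permute(0, 3, 4, 2, 1).reshape(b * h * w, f, c)
        y = self.norm(x) + self.pe[:f]
        y, _ = self.attn(y, y, y)
        x = x + self.proj_out(y)               # residual connection
        return x.reshape(b, h, w, f, c).permute(0, 4, 3, 1, 2)
```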
When passing through our motion module, the spatial dimensions height and width of the feature map z will first be reshaped to the batch dimension, resulting in batch × height × width sequences at the length of frames. The reshaped feature map will then be projected and go through several self-attention blocks, i.e., z = Attention(Q,K,V)=Softmax(QK^T/√(d))· V where Q = W^Qz, K = W^Kz, and V=W^Vz are three projections of the reshaped feature map. This operation enables the module to capture the temporal dependencies between features at the same location across the temporal axis. To enlarge the receptive field of our motion module, we insert it at every resolution level of the U-shaped diffusion network. Additionally, we add sinusoidal position encoding <cit.> to the self-attention blocks to let the network be aware of the temporal location of the current frame in the animation clip. To insert our module with no harmful effects during training, we zero initialize the output projection layer of the temporal transformer, which is an effective practice validated by ControlNet <cit.>. Training Objective. The training process of our motion modeling module is similar to Latent Diffusion Model <cit.>. Sampled video data x_0^1:N are first encoded into the latent code z_0^1:N frame by frame via the pre-trained autoencoder. Then, the latent codes are noised using the defined forward diffusion schedule: z_t^1:N = √(α̅_̅t̅)z_0^1:N + √(1-α̅_̅t̅)ϵ. The diffusion network inflated with our motion module takes the noised latent codes and corresponding text prompts as input and predicts the noise strength added to the latent code, encouraged by the L2 loss term. The final training objective of our motion modeling module is: ℒ = 𝔼_ℰ(x_0^1:N), y, ϵ∼𝒩(0, 𝐼), t‖ϵ - ϵ_θ(z_t^1:N, t, τ_θ(y)) ‖_2^2 Note that during optimization, the pre-trained weights of the base T2I model are frozen to keep its feature space unchanged. § EXPERIMENTS §.§ Implementation Details Training. We chose Stable Diffusion v1 as our base model to train the motion modeling module, considering most public personalized models are based on this version. We trained the motion module using the WebVid-10M <cit.>, a text-video pair dataset. The video clips in the dataset are first sampled at the stride of 4, then resized and center-cropped to the resolution of 256 × 256. Our experiments show that the module trained on 256 can be generalized to higher resolutions. Therefore we chose 256 as our training resolution since it maintains the balance of training efficiency and visual quality. The final length of the video clips for training was set to 16 frames. During experiments, we discovered that using a diffusion schedule slightly different from the original schedule where the base T2I model was trained helps achieve better visual quality and avoid artifacts such as low saturability and flickering. We hypothesize that slightly modifying the original schedule can help the model better adapt to new tasks (animation) and new data distribution. Thus, we used a linear beta schedule, where β_start = 0.00085 and β_end = 0.012, which is slightly different from that used to train the original SD. Evaluations. To verify the effectiveness and generalizability of our method, we collect several representative personalized Stable Diffusion models (<ref>) from CivitAI <cit.>, a public platform allowing artists to share their personalized models. 
The domains of these chosen models range from anime and 2D cartoon images to realistic photographs, providing a comprehensive benchmark to evaluate the capability of our method. Once our module is trained, we plug it into the target personalized models and generate animations with designed text prompts. We do not use common text prompts because the personalized models only generate expected content with specific text distribution, meaning the prompts must have certain formats or contain “trigger words". Therefore, we use example prompts provided at the model homepage in the following section to get the models' best performance. §.§ Qualitative Results We present several qualitative results across different models in <ref>. Due to space limitations, we only display four frames of each animation clip. We strongly recommend readers refer to our homepage for better visual quality. The figure shows that our method successfully animates personalized T2I models in diverse domains, from highly stylized anime (1st row) to realistic photographs (4th row), without compromising their domain knowledge. Thanks to the motion priors learned from the video datasets, the motion modeling module can understand the textual prompt and assign appropriate motions to each pixel, such as the motion of sea waves (3rd row) and the leg motion of the Pallas's cat (7th row). We also find that our method can distinguish major subjects from foreground and background in the picture, creating a feeling of vividness and realism. For instance, the character and background blossoms in the first animation move separately, at different speeds, and with different blurring strengths. Our qualitative results demonstrate the generalizability of our motion module for animating personalized T2I models within diverse domains. By inserting our motion module into the personalized model, can generate high-quality animations faithful to the personalized domain while being diverse and visually appealing. §.§ Comparison with Baselines We compare our method with Text2Video-Zero <cit.>, a training-free framework for extending a T2I model for video generation through network inflation and latent warping. Although Tune-a-Video can also be utilized for personalized T2I animation, it requires an additional input video and thus is not considered for comparison. Since T2V-Zero does not rely on any parameter tuning, it is straightforward to adopt it for animating personalized T2I models by replacing the model weights with personalized ones. We generate the animation clips of 16 frames at resolution 512 × 512, using the default hyperparameters provided by the authors. We qualitatively compare the cross-frame content consistency of the baseline and our method on the same personalized model and with the same prompt (“A forbidden castle high up in the mountains, pixel art, intricate details2, hdr, intricate details"). To more accurately demonstrate and compare the fine-grained details of our method and the baseline, we cropped the same subpart of each result and zoomed it in, as illustrated at the left/right bottom of each frame in <ref>. As shown in the figure, both methods retain the domain knowledge of the personalized model, and their frame-level qualities are comparable. However, the result of T2V-Zero, though visually similar, lacks fine-grained cross-frame consistency when compared carefully. For instance, the shape of the foreground rocks (1st row) and the cup on the table (3rd row) changes over time. 
This inconsistency is much more noticeable when the animation is played as a video clip. In contrast, our method generates temporally consistent content and maintains superior smoothness (2nd, 4th row). Moreover, our approach exhibits more appropriate content changes that align better with the underlying camera motion, further highlighting the effectiveness of our method. This result is reasonable since the baseline does not learn motion priors and achieves visual consistency via rule-based latent warping, while our method inherits knowledge from large video datasets and maintains temporal smoothness through efficient temporal attention. §.§ Ablative Study We conduct an ablative study to verify our choice of noise schedule in the forward diffusion process during training. In the previous section, we mentioned that using a slightly modified diffusion schedule helps achieve better visual quality. Here we experiment with three representative diffusion schedules (<ref>) adopted by previous works and visually compare their corresponding results in <ref>. Among the three diffusion schedules used in our experiments, Schedule A is the schedule for pre-training Stable Diffusion; Schedule B is our choice, which is different from the schedule of SD in how the beta sequence is computed; Schedule C is used in DDPM <cit.> and DiT <cit.> and differs more from SD's pre-training schedule. As demonstrated in <ref>, when using the original schedule of SD for training our motion modeling module (Schedule B), the animation results are with sallow color artifacts. This phenomenon is unusual since, intuitively, using the diffusion schedule aligned with pre-training should be beneficial for the model to retain its feature space already learned. As the schedules deviate more from the pre-training schedule (from Schedule A to Schedule C), the color saturation of the generated animations increases while the range of motion decreases. Among these three configurations, our choice achieves a balance of both visual quality and motion smoothness. Based on these observations, we hypothesize that a slightly modified diffusion schedule in the training stage helps the pre-trained model adapt to new tasks and domains. Our framework's new training objective is reconstructing noise sequences from a diffused video sequence. This can be frame-wisely done without considering the temporal structure of the video sequence, which is the image reconstruction task the T2I model was pre-trained on. Using the same diffusion schedule may mislead the model that it is still optimized for image reconstruction, which slower the training efficiency of our motion modeling module responsible for cross-frame motion modeling, resulting in more flickering animation and color aliasing. § LIMITATIONS AND FUTURE WORKS In our experiments, we observe that most failure cases appear when the domain of the personalized T2I model is far from realistic, e.g., 2D Disney cartoon (<ref>). In these cases, the animation results have apparent artifacts and cannot produce proper motion. We hypothesize this is due to the large distribution gap between the training video (realistic) and the personalized model. A possible solution to this problem is to manually collect several videos in the target domain and slightly fine-tune the motion modeling module, and we left this to future works. 
§ CONCLUSION In this report, we present AnimateDiff, a practical framework for enabling personalized text-to-image model animation, which aims to turn most of the existing personalized T2I models into animation generators once and for all. We demonstrate that our framework, which includes a simply designed motion modeling module trained on the base T2I model, can distill generalizable motion priors from large video datasets. Once trained, our motion module can be inserted into other personalized models to generate animated images with natural and proper motions while being faithful to the corresponding domain. Extensive evaluation on various personalized T2I models also validates the effectiveness and generalizability of our method. As such, AnimateDiff provides a simple yet effective baseline for personalized animation, potentially benefiting a wide range of applications. § ADDITIONAL RESULTS §.§ Model Diversity In <ref>, we show results using the same prompt with the same model, demonstrating that our method does not hurt the diversity of the original model. §.§ Qualitative Results In <ref> and <ref>, we show more results of our method on different personalized models.
http://arxiv.org/abs/2307.07512v1
20230714175953
Expressive Monotonic Neural Networks
[ "Ouail Kitouni", "Niklas Nolte", "Michael Williams" ]
cs.LG
[ "cs.LG" ]
The monotonic dependence of the outputs of a neural network on some of its inputs is a crucial inductive bias in many scenarios where domain knowledge dictates such behavior. This is especially important for interpretability and fairness considerations. In a broader context, scenarios in which monotonicity is important can be found in finance, medicine, physics, and other disciplines. It is thus desirable to build neural network architectures that implement this inductive bias provably. In this work, we propose a weight-constrained architecture[https://github.com/niklasnolte/MonotoneNorm] with a single residual connection to achieve exact monotonic dependence in any subset of the inputs. The weight constraint scheme directly controls the Lipschitz constant of the neural network and thus provides the additional benefit of robustness. Compared to currently existing techniques used for monotonicity, our method is simpler in implementation and in its theoretical foundations, has negligible computational overhead, is guaranteed to produce monotonic dependence, and is highly expressive. We show how the algorithm is used to train powerful, robust, and interpretable discriminators that achieve competitive performance compared to current state-of-the-art methods across various benchmarks, from social applications to the classification of the decays of subatomic particles produced at the CERN Large Hadron Collider. § INTRODUCTION The need to model functions that are monotonic in a subset of their inputs is prevalent in many ML applications. Enforcing monotonic behaviour can help improve generalization capabilities <cit.> and assist with interpretation of the decision-making process of the neural network <cit.>. Real-world scenarios include various applications with fairness, interpretability, and security aspects. Examples can be found in the natural sciences and in many social applications. Monotonic dependence of a model output on a certain feature in the input can be informative of how an algorithm works—and in some cases is essential for real-world usage. For instance, a good recommender engine will favor the product with a high number of reviews over another with fewer but otherwise identical reviews (ceteris paribus). The same applies to systems that assess health risk, evaluate the likelihood of recidivism, rank applicants, filter inappropriate content, etc. In addition, robustness to small perturbations in the input is a desirable property for models deployed in real-world applications. In particular, when they are used to inform decisions that directly affect human actors—or where the consequences of making an unexpected and unwanted decision could be extremely costly. The continued existence of adversarial methods is a good example of the possibility of malicious attacks on current algorithms <cit.>. A natural way of ensuring the robustness of a model is to constrain its Lipschitz constant. To this end, we recently developed an architecture whose Lipschitz constant is constrained by design using layer-wise normalization, which allows the architecture to be more expressive than the current state-of-the-art with stable and fast training <cit.>.
Our algorithm has been adopted to classify the decays of subatomic particles produced at the CERN Large Hadron Collider in the real-time data-processing system of the LHCb experiment, which was our original motivation for developing this novel architecture. In this paper, we present expressive monotonic Lipschitz networks. This new class of architectures employs the Lipschitz bounded networks from <cit.> along with residual connections to implement monotonic dependence in any subset of the inputs by construction. It also provides exact robustness guarantees while keeping the constraints minimal such that it remains a universal approximator of Lipschitz continuous monotonic functions. We show how the algorithm is used to train powerful, robust, and interpretable discriminators that achieve competitive performance compared to current state-of-the-art methods across various benchmarks, from social applications to its original target application: the classification of the decays of subatomic particles produced at the CERN Large Hadron Collider. § RELATED WORK Prior work in the field of monotonic models can be split into two major categories. * Built-in and constrained monotonic architectures: Examples of this category include Deep Lattice Networks <cit.> and networks in which all weights are constrained to have the same sign <cit.>. The major drawbacks of most implementations of constrained architectures are a lack of expressiveness or poor performance due to superfluous complexity. * Heuristic and regularized architectures (with or without certification): Examples of such methods include <cit.> and <cit.>, which penalizes point-wise negative gradients on the training sample. This method works on arbitrary architectures and retains much expressive power but offers no guarantees as to the monotonicity of the trained model. Another similar method is <cit.>, which relies on Mixed Integer Linear Programming to certify the monotonicity of piece-wise linear architectures. The method uses a heuristic regularization to penalize the non-monotonicty of the model on points sampled uniformly in the domain during training. The procedure is repeated with increasing regularization strength until the model passes the certification. This iteration can be expensive and while this method is more flexible than the constrained architectures (valid for MLPs with piece-wise linear activations), the computational overhead of the certification process can be prohibitively expensive. Similarly, <cit.> propose guaranteed monotonicity for standard ReLU networks by letting a Satisfiability Modulo Theories (SMT) solver find counterexamples to the monotonicity definition and adjust the prediction in the inference process such that monotonicity is guaranteed. However, this approach requires queries to the SMT solver during inference time for each monotonic feature, and the computation time scales harshly with the number of monotonic features and the model size (see Figure 3 and 4 in <cit.>). Our architecture falls into the first category. However, we overcome both main drawbacks: lack of expressiveness and impractical complexity. Other related works appear in the context of monotonic functions for normalizing flows, where monotonicity is a key ingredient to enforce invertibility <cit.>. § METHODS The goal is to develop a neural network architecture representing a vector-valued function f : ^d →^n, d,n ∈, that is provably monotonic in any subset of its inputs. We first define a few ingredients. 
Let x ∈ ℝ^d, and let x_S ≡ 1_S ⊙ x denote the Hadamard product of x with the indicator vector 1_S(i) = 1 if i ∈ S and 0 otherwise, for a subset S ⊆ {1, ⋯, d}. We say that outputs Q ⊆ {1, ⋯, n} of f are monotonically increasing in features S if f(x'_S + x_S̄)_i ≤ f(x_S + x_S̄)_i ∀ i ∈ Q and ∀ x'_S ≤ x_S, where S̄ denotes the complement of S and the inequality on the right uses the product (or component-wise) order. g: ℝ^d → ℝ^n is Lip^p if it is Lipschitz continuous with respect to the L^p norm in every output dimension, i.e., ‖g(x) - g(y)‖_∞ ≤ λ ‖x - y‖_p ∀ x, y ∈ ℝ^d. §.§ Lipschitz Monotonic Networks (LMN) We will henceforth and without loss of generality only consider scalar-valued functions (n=1). We start with a model g(x) that is Lip^1 with Lipschitz constant λ. Note that the choice of p=1 is crucial for decoupling the magnitudes of the directional derivatives in the monotonic features. More details on this can be found below and in Figure <ref>. The 1-norm has the convenient side effect that we can tune the robustness requirement for each input individually. With a model g(x) we can define an architecture with built-in monotonicity by adding a term that has directional derivative λ for each coordinate in S: f(x) = g(x) + λ(1_S · x) = g(x) + λ ∑_i∈S x_i. This residual connection λ(1_S · x) enforces monotonicity in the input subset x_S: ∂g/∂x_i ∈ [-λ, λ] ∀ i ⇒ ∂f/∂x_i = ∂g/∂x_i + λ ≥ 0 ∀ x ∈ ℝ^d, i ∈ S. The importance of the norm choice The construction presented here does not work with p≠1 constraints because dependencies between the partial derivatives may be introduced, see Figure <ref>. The p=1 norm is the only norm that bounds the gradient within the green square and, crucially, allows the directional derivatives to be as large as 2λ independently. When shifting the constraints by introducing the linear term, the green square allows for all possible gradient configurations, given that we can choose λ freely. As a counter example, the red circle, corresponding to p=2 constraints, prohibits important areas in the configuration space. To be able to represent all monotonic Lip^1 functions with 2λ Lipschitz constant, the construction of g(x) needs to be a universal approximator of Lip^1 functions. In the next section, we will discuss possible architectures for this task. §.§ Lip-p=1 approximators Our goal is to construct a universal approximator of Lip^1 functions, i.e., we would like the hypothesis class to have two properties: * It always satisfies (<ref>), i.e., be Lip^1. * It is able to fit all possible Lip^1 functions. In particular, the bound in (<ref>) needs to be attainable ∀ x, y. Lip-1 constrained models To satisfy the first requirement, fully connected networks can be Lipschitz bounded by constraining the matrix norm of all weight matrices <cit.>. We recursively define the layer l of the fully connected network of depth D with activation σ as z^l = σ(z^{l-1}) W^l + b^l, where z^0 = x is the input and f(x) = z^D is the output of the neural network. It follows that g(x) satisfies (<ref>) if ∏_i=1^D ‖W^i‖_1 ≤ λ, and σ has a Lipschitz constant less than or equal to 1. There are multiple ways to enforce (<ref>). Two existing possibilities that involve scaling by the operator norm of the weight matrix <cit.> are: W^i → W'^i = λ^{1/D} W^i / max(1, ‖W^i‖_1) or W^i → W'^i = W^i / max(1, λ^{-1/D} · ‖W^i‖_1). In our studies, the latter variant seems to train slightly better. However, in some cases it might be useful to use the former to avoid the scale imbalance between the neural network's output and the residual connection used to induce monotonicity.
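As an illustration of the construction above, here is a minimal sketch of a Lipschitz-constrained network with the monotonic residual term, using the W^i / max(1, λ^{-1/D}·‖W^i‖_1) normalization applied on the fly. The class names, the clamping placeholder activation, and the architecture details are assumptions for illustration; the authors' reference implementation lives in the repository linked in the abstract.

```python
import torch
import torch.nn as nn

def lipschitz_norm(weight: torch.Tensor, lam: float, depth: int) -> torch.Tensor:
    """Rescale W -> W / max(1, lambda^{-1/D} * ||W||_1) so the product of layer norms stays <= lambda."""
    one_norm = weight.abs().sum(dim=0).max()     # L1 operator norm: maximum absolute column sum
    scale = torch.clamp(one_norm * lam ** (-1.0 / depth), min=1.0)
    return weight / scale

class LipschitzMonotonicNet(nn.Module):
    """g(x) constrained to be Lip^1 with constant lambda, plus the residual lambda * sum_{i in S} x_i."""

    def __init__(self, d_in, hidden, monotonic_mask, lam=1.0):
        super().__init__()
        sizes = [d_in] + hidden + [1]
        self.layers = nn.ModuleList(nn.Linear(a, b) for a, b in zip(sizes[:-1], sizes[1:]))
        self.lam = lam
        self.register_buffer("mask", monotonic_mask.float())  # 1 for features in S, 0 otherwise

    def forward(self, x):
        depth = len(self.layers)
        h = x
        for i, layer in enumerate(self.layers):
            w = lipschitz_norm(layer.weight, self.lam, depth)
            h = nn.functional.linear(h, w, layer.bias)
            if i < depth - 1:
                h = torch.clamp(h, min=-1.0, max=1.0)  # placeholder 1-Lipschitz activation
        g = h.squeeze(-1)
        return g + self.lam * (x * self.mask).sum(dim=-1)   # monotonic residual term

# Example: a scalar model of 3 features, monotonically increasing in features 0 and 2.
net = LipschitzMonotonicNet(3, [64, 64], torch.tensor([1, 0, 1]), lam=2.0)
```

A gradient-norm-preserving activation such as the GroupSort operation introduced below would replace the clamping placeholder in a faithful implementation, since clamping alone cannot saturate the Lipschitz bound everywhere.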
We note that in order to satisfy (<ref>), it is not necessary to divide the entire matrix by its 1-norm. It is sufficient to ensure that the absolute sum over each column is constrained: W^i → W'^i = W^i diag(1/max(1, λ^-1/D·∑_j |W^i_jk|)). This novel normalization scheme tends to give even better training results in practice, because the constraint is applied to each column individually. This reduces correlations between constraints: in particular, if a column saturates the bound on the norm, the other columns are not impacted. While (<ref>) may not be suitable as a general-purpose scheme, e.g. it would not work in convolutional networks, its performance in training in our analysis motivates its use in fully connected architectures and further study of this approach in future work. In addition, the constraints in (<ref>) and (<ref>) can be applied in different ways. For example, one could normalize the weights directly before each call such that the induced gradients are propagated through the network like in <cit.>. While one could come up with toy examples for which propagating the gradients in this way hurts training, it appears that this is how spectral normalization is usually implemented in PyTorch and TensorFlow <cit.>. Alternatively, the constraint could be applied by projecting any infeasible parameter values back into the set of feasible matrices after each gradient update as in Algorithm 2 of <cit.>. Constraining according to (<ref>) is not the only way to enforce Lip^1. <cit.> provide an alternative normalization scheme: ||W^1||_1,∞ · ∏_i=2^D ||W^i||_∞ ≤ λ. Similarly to how the 1-norm of a matrix is a column-wise maximum, the ∞-norm of a matrix is determined by the maximum 1-norm of its rows, and ||W||_1,∞ simply equals the maximum absolute value of an element in the matrix. Therefore, normalization schemes similar to (<ref>) can be employed to enforce the constraints in (<ref>) by replacing the column-wise normalization with a row- or element-wise normalization where appropriate. Preserving expressive power Guaranteeing that the model is Lipschitz bounded is not sufficient; it must also be able to saturate the bound in order to model all possible Lip^1 functions. Some Lipschitz network architectures, e.g. <cit.>, tend to over-constrain the model such that it cannot fit all Lip^1 functions due to gradient attenuation. For many problems this is a rather theoretical issue. However, it becomes a practical problem for the monotonic architecture since it often works on the edges of its constraints, for instance when partial derivatives close to zero are required, see Figure <ref>. As a simple example, the authors of <cit.> showed that ReLU networks are unable to fit the function f(x) = |x| if the layers are norm-constrained with λ = 1. The reason lies in the fact that ReLU, and most other commonly used activations, do not have unit gradient with respect to the inputs over their entire domain. While monotonic element-wise activations like ReLU cannot have unit gradient almost everywhere without being exactly linear, the authors of <cit.> explore activations that introduce non-linearities by reordering elements of the input vector. They propose GroupSort as an alternative to point-wise activations, defined as follows: σ_G(z) = sort_1:G(z_1:G) + sort_G+1:2G(z_G+1:2G) + … = ∑_i=0^n/G-1 sort_iG+1:(i+1)G(z_iG+1:(i+1)G), where z ∈ ℝ^n, z_i:j = 1_i:j ⊙ z, and sort_i:j orders the elements of a vector from indices i to j and leaves the other elements in place.
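As a point of reference, the GroupSort operation just defined takes only a few lines; this sketch assumes the pre-activation is sorted in contiguous groups along the last dimension and is not taken from the authors' code.

```python
import torch

def group_sort(z: torch.Tensor, group_size: int) -> torch.Tensor:
    """GroupSort: sort the last dimension in contiguous chunks of size group_size.
    Since it only permutes entries, it has unit gradient with respect to every input."""
    n = z.shape[-1]
    assert n % group_size == 0, "feature dimension must be divisible by group_size"
    groups = z.reshape(*z.shape[:-1], n // group_size, group_size)
    return groups.sort(dim=-1).values.reshape(z.shape)

out = group_sort(torch.randn(8, 16), group_size=2)  # GroupSort-2 on a batch of pre-activations
```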
This activation sorts an input vector in chunks (groups) of a fixed size G. The GroupSort operation has a gradient of unity with respect to every input, giving architectures constrained with (<ref>) greatly increased expressive power. In fact, <cit.> prove that GroupSort networks with the normalization scheme in (<ref>) are universal approximators of Lip^1 functions. Therefore, these networks fulfill the two requirements outlined at the beginning of this section. For universal approximation to be possible, the activation function used needs to be gradient norm preserving (GNP), i.e., have gradient 1 almost everywhere. Householder activations are another instance of GNP activations, of which GroupSort-2 is a special case <cit.>. The Householder activation is defined as follows: σ(z) = z if zv > 0, and σ(z) = z(I - 2vv^T) if zv ≤ 0. Here, z is the preactivation row vector and v is any column unit vector. Householder Lipschitz Networks naturally inherit the universal approximation property. In summary, we have constructed a neural network architecture f(x) via (<ref>) that can provably approximate all monotonic Lipschitz bounded functions. The Lipschitz constant of the model can be increased arbitrarily by controlling the parameter λ in our construction. § EXPERIMENTS "Beware of bugs in the above code, I have only proved it correct, not tried it" <cit.>. In the spirit of Donald Knuth, in this section we test our algorithm on many different domains to show that it works well in practice and gives competitive results, as should be expected from a universal approximator. §.§ Toy Example Figure <ref> shows a toy example where both a monotonic and an unconstrained network are trained to regress on a noisy one-dimensional dataset. The true underlying model used here is monotonic, though an added heteroskedastic Gaussian noise term can obscure this in any realization. As can be seen in Figure <ref>, no matter how the data are distributed at the edge of the support, the monotonic Lipschitz network is always non-decreasing outside of the support, as guaranteed by our architecture. Such out-of-distribution guarantees can be extremely valuable in cases where domain knowledge dictates that monotonic behavior is required or desirable. §.§ Real-Time Decision-Making at 40 MHz at the LHC Because many physical systems are modeled with well-known theoretical frameworks that dictate the properties of the system, monotonicity can be a crucial inductive bias in the physical sciences. For instance, modeling enthalpy, a thermodynamic quantity measuring the total heat content of a system, in a simulator requires a monotonic function of temperature for fixed pressure (as is known from basic physical principles). In this section, we describe a real-world physics application which requires monotonicity in certain features, and robustness in all of them. The algorithm described here has, in fact, been implemented by a high-energy particle physics experiment at the European Center for Nuclear Research (CERN), and is actively being used to collect data at the Large Hadron Collider (LHC) in 2022, where high-energy proton-proton collisions occur at 40 MHz. The sensor arrays of the LHC experiments produce data at a rate of over 100 TB/s. Drastic data-reduction is performed by custom-built read-out electronics; however, the annual data volumes are still O(100) exabytes, which cannot be put into permanent storage.
Therefore, each LHC experiment processes its data in real time, deciding which proton-proton collision events should be kept and which should be discarded permanently; this is referred to as triggering in particle physics. To be suitable for use in trigger systems, classification algorithms must be robust against the impact of experimental instabilities that occur during data taking, as well as against deficiencies in simulated training samples. Our training samples cannot possibly account for the unknown new physics that we hope to learn by performing the experiments! A ubiquitous inductive bias at the LHC is that outlier collision events are more interesting, since we are looking for physics that has never been observed before. However, uninteresting outliers are frequently caused by experimental imperfections, many of which are included and labeled as background in training. Conversely, it is not possible to include the set of all possible interesting outliers a priori in the training. A solution to this problem is to implement the inductive bias that outliers are better directly into the model, using our expressive monotonic Lipschitz architecture from Section <ref>. Our architecture was originally developed for the task of classifying the decays of heavy-flavor particles produced at the LHC. These are bound states containing a beauty or charm quark that travel an observable distance 𝒪(1 cm) before decaying due to their (relatively) long lifetimes. This example uses a dataset of simulated proton-proton (pp) collisions in the LHCb detector. Charged particles recorded by LHCb are combined pairwise into decay-vertex (DV) candidates. The task concerns discriminating DV candidates corresponding to heavy-flavor decays from all other sources. Heavy-flavor DVs typically have substantial separation from the pp collision point, due to the relatively long heavy-flavor particle lifetimes, and large transverse momenta, p_T, of the component particles, due to the large heavy-flavor particle masses. The main sources of background DVs, described in <cit.>, mostly have small displacement and small p_T, though unfortunately they can also have extremely large values of both displacement and momentum. Figure <ref> shows a simplified version of this problem using only the two most powerful inputs. Our inductive bias requires a monotonically increasing response in both features (detailed discussion motivating this bias can be found in <cit.>). We see that an unconstrained neural network rejects DVs with increasingly large displacements (lower right corner), and that this leads to a decrease of the signal efficiency (true positive rate) for large lifetimes. The unconstrained model violates our inductive bias. Figures <ref> and <ref> show that a monotonic BDT <cit.> approach works here. However, the jagged decision boundary can cause problems in subsequent analysis of the data. Figure <ref> also shows that our novel approach from Section <ref> successfully produces a smooth and monotonic response, and Figure <ref> shows that this provides the monotonic lifetime dependence we desire in the efficiency. In addition, we note that the added benefit of guaranteed Lipschitz robustness is a major advantage for many real-world applications. Specifically for particle physicists, this kind of robustness directly translates to important guarantees when considering experimental instabilities. Due to the simplicity and practicality of our method, the LHCb experiment is now using the proposed architecture for real-time data selection at a data rate of about 40 Tbit/s.
§.§ Public datasets with monotonic dependence In this section, we follow as closely as possible the experiments done in <cit.>, and some experiments done in <cit.> to be able to directly compare to state-of-the-art monotonic architectures. <cit.> studied monotonic architectures on four different datasets: COMPAS <cit.>, BlogFeedback <cit.>, LoanDefaulter <cit.>, and ChestXRay <cit.>. From <cit.> we compare against one regression and one classification task: AutoMPG <cit.> and HeartDisease <cit.>. Results are shown in Table <ref>. COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) refers to a commercial algorithm used by judges and police officers to determine the likelihood of reoffense. <cit.> discusses that the algorithm is racially biased and provides a dataset from a two-year study of the real-world performance of COMPAS. The task here is to predict the reoffense probability within the next two years. The dataset has 13 features, 4 of which have a monotonic inductive bias, and contains a total of 6172 data points. BlogFeedBack This dataset contains 54270 data points with 276 dimensions describing blog posts. The task is to predict the number of comments following the post publication within 24 hours. 8 of the features have a monotonic inductive bias. Just like <cit.>, we also only consider the 90% of the data points with the smallest targets so as to not let the RMSE be dominated by outliers. LoanDefaulter The version of this dataset available on Kaggle was updated on a yearly basis up to 2015. <cit.> contains a link that is, we believe, a superset of the data used in <cit.>. Luckily, the authors have shared with us the exact version of the dataset they used in their studies for an appropriate comparison. The data is organized in 28 features and the task is to determine loan defaulters. The classification score should be monotonic in 5 features: non-decreasing in number of public record bankruptcies and Debt-to-Income ratio, non-increasing in credit score, length of employment and annual income. ChestXRay This dataset contains tabular data and images of patients with diseases that are visible in a chest x-ray. The task is to predict whether or not the patient has such a disease. Just like <cit.>, we send the image through an ImageNet-pretrained ResNet18 <cit.>. The penultimate layer output concatenated with tabular data acts as input to the monotonic architecture. Two of the four tabular features are monotonic. In the bottom right table in <ref>, there are two entries for our architecture. The E-E entry refers to end-to-end training with ResNet18, whereas the other experiment fixes the ResNet weights. AutoMPG <cit.> This is a dataset containing 398 examples of cars, described by 7 numerical features and the model name. The target, MPG, is monotonically decreasing with 3 of the features. The name is not used as a feature. HeartDisease <cit.> is a dataset of patients, described by 13 features. The task is to determine whether or not the patient has heart disease. As can be seen in Table <ref>, our Lipschitz monotonic networks perform competitively or better than the state-of-the-art on all benchmarks we tried. It is also immediately apparent that our architecture is highly expressive. We manage to train tiny networks with few parameters while still achieving competitive performance. Given that some of these datasets have a significant number of features compared to our chosen network width, most parameters are in the weights of the first layer. 
We manage to build and train even smaller networks with better generalization performance when taking only a few important features. These networks are denoted with mini in Table <ref>. Because all of the presented architectures are small in size, we show practical finite sample expressiveness for harder tasks and larger networks by achieving 100% training accuracy on MNIST, CIFAR-10, and CIFAR-100 with real and random labels, as well as an augmented version (i.e. with an additional monotonic feature added artificially) of CIFAR-100, in Appendix <ref>. § LIMITATIONS We are working on improving the architecture as follows: First, common initialization techniques are not optimal for weight-normed networks <cit.>. Simple modifications to the weight initialization might aid convergence, especially for large Lipschitz parameters. Secondly, we are currently constrained to activation functions that have a gradient norm of 1 over their entire domain, such as GroupSort, to ensure universal approximation, see <cit.>. We will explore other options in the future. Lastly, there is not yet a proof of universal approximation for the architecture described in (<ref>). However, it appears from empirical investigation that the networks do approximate universally, as we have yet to find a function that could not be approximated well enough with a deep enough network. We do not consider this a major drawback, as the construction in (<ref>) does approximate universally, see <cit.>. Note that none of these limitations have any visible impact on the performance of the experiments in Section <ref>. § CONCLUSION AND FUTURE WORK We presented an architecture that provably approximates Lipschitz continuous and partially monotonic functions. Monotonic dependence is enforced via an end-to-end residual connection to a minimally Lip^1 constrained fully connected neural network. This method is simple to implement, has negligible computational overhead, and gives stronger guarantees than regularized models. Our architecture achieves competitive results with respect to current state-of-the-art monotonic architectures, even when using a tiny number of parameters, and has the additional benefit of guaranteed robustness due to its known Lipschitz constant. For future directions of this line of research, we plan to tackle the problems outlined in the limitations section, especially improving the initialization of weight-normed networks. § REPRODUCIBILITY STATEMENT All experiments with public datasets are reproducible with the code provided at <https://github.com/niklasnolte/monotonic_tests>. This code uses the package available at <https://github.com/niklasnolte/MonotoneNorm>, which is meant to be a standalone PyTorch implementation of Lipschitz Monotonic Networks. The experiments in Section <ref> were made with data that is not publicly available. The code to reproduce those experiments can be found under <https://github.com/niklasnolte/HLT_2Track>, and the data will be made available in later years at the discretion of the LHCb collaboration. §.§.§ Acknowledgments This work was supported by NSF grant PHY-2019786 (The NSF AI Institute for Artificial Intelligence and Fundamental Interactions, http://iaifi.org/). § EXPRESSIVE POWER OF THE ARCHITECTURE Robust architectures like Lipschitz constrained networks are often believed to be much less expressive than their unconstrained counterparts <cit.>.
Here we show that our architecture is capable of (over)fitting complex decision boundaries even on random labels in a setup similar to <cit.>. We show the finite sample expressiveness of the architecture in <https://github.com/okitouni/Lipschitz-network-bench> by fitting MNIST, CIFAR10, and CIFAR100 with normal and random labels to 100% training accuracy. We also train on CIFAR100 with an additional “goodness” feature x ∈ [0,1] to showcase the monotonicity aspect of the architecture. This dataset is referred to as CIFAR101 below. The synthetic monotonicity problem is currently implemented such that samples with values above a critical threshold in the goodness feature x > x_crit are labeled 0. An alternative implementation is to take label 0 with probability x and keep the original label (or assign a random one) with probability 1-x. Table <ref> summarizes the setup used for training. We use Adam with default hyper-parameters in all experiments.
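The CIFAR101 construction described above can be sketched in a few lines; the threshold value, seed, and function name are illustrative, and the probabilistic variant from the text is included as an option.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_goodness_feature(labels, x_crit=0.8, probabilistic=False):
    """Append a 'goodness' feature x ~ U[0, 1] to each sample and relabel.

    Thresholded variant: samples with x > x_crit are labeled 0.
    Probabilistic variant: label 0 with probability x, keep the original label otherwise.
    """
    labels = np.asarray(labels).copy()
    x = rng.uniform(0.0, 1.0, size=labels.shape)
    if probabilistic:
        flip = rng.uniform(size=labels.shape) < x
    else:
        flip = x > x_crit
    labels[flip] = 0
    return x, labels

x, y_aug = add_goodness_feature(np.array([3, 7, 1, 9]), x_crit=0.8)
```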
http://arxiv.org/abs/2307.14348v1
20230708022041
Solving the inverse potential problem in the parabolic equation by the deep neural networks method
[ "Mengmeng Zhang", "Zhidong Zhang" ]
math.NA
[ "math.NA", "cs.NA", "math-ph", "math.MP" ]
1,2]Mengmeng [email protected] 3]Zhidong [email protected] [1]School of Science, Hebei University of Technology, Tianjin 300401, China [2]Nanjing Center for Applied Mathematics Nanjing, 211135, China [3]School of Mathematics (Zhuhai), Sun Yat-sen University, Zhuhai 519082, Guangdong, China Solving the inverse potential problem in the parabolic equation by the deep neural networks method [ ===================================================================================================== In this work, we consider an inverse potential problem in the parabolic equation, where the unknown potential is a space-dependent function and the used measurement is the final time data. The unknown potential in this inverse problem is parameterized by deep neural networks (DNNs) for the reconstruction scheme. First, the uniqueness of the inverse problem is proved under some regularities assumption on the input sources. Then we propose a new loss function with regularization terms depending on the derivatives of the residuals for partial differential equations (PDEs) and the measurements. These extra terms effectively induce higher regularity in solutions so that the ill-posedness of the inverse problem can be handled. Moreover, we establish the corresponding generalization error estimates rigorously. Our proofs exploit the conditional stability of the classical linear inverse source problems, and the mollification on the noisy measurement data which is set to reduce the perturbation errors. Finally, the numerical algorithm and some numerical results are provided. AMS subject classifications: 34K28, 35R30, 65N15, 62M45. Keywords: inverse potential problem, deep neural networks, uniqueness, generalization error estimates, numerical reconstruction. § INTRODUCTION. §.§ Mathematical model. The following parabolic system is considered in this work: (∂_t -Δ +q(x))u =F(x,t), (x,t)∈Ω_T, u(x,t) =b(x,t), (x,t)∈∂Ω_T, u(x,0) =u_0(x), x∈Ω. Here we write Ω_T=Ω×(0,T] and ∂Ω_T=∂Ω×(0,T] for short, and Ω⊂ℝ^d is an open bounded domain in ℝ^d with sufficiently smooth boundary. F(x,t), u_0(x), b(x,t) are the source term, initial status, boundary condition respectively, causing the heat propagation in the medium. The potential function q(x) ∈ L^∞(Ω), called the heat radiative coefficient of the material, is a crucial parameter for characterizing the heat conduction process. It describes the ability of the medium to propagate heat from internal sources or sinks. For known (F(x,t),u_0(x),b(x,t), q(x)) with suitable regularities, the forward problem (<ref>) is well-posed in appropriate function space <cit.>. In this work, we consider the inverse problem of recovering the unknown q(x), where the used measurement is the final time data u(x,T):=φ(x), x∈Ω. In practical applications of inverse problems, the contamination on inverse problems is unavoidable. So we will be given the noisy data φ^δ instead of the exact data φ(x) in (<ref>), which satisfies φ^δ-φ_L^∞(Ω)≤δ. To handle the effect caused by the perturbations, people need to develop effective methods to improve the accuracy and robustness in applications. In this study, we choose the deep neural networks (DNNs) to solve the inverse problem (<ref>)-(<ref>). Comparing to traditional methods for solving inverse potential problem, this approach demonstrates the superiority in high-dimensional space and has the advantage of breaking the curse of dimensionality. 
There are rare works on studying the inverse potential problem for parabolic equations using deep neural networks, especially the rigorous analysis of its convergence estimate. In this work, the authors will consider the solution of the inverse potential problem (<ref>)-(<ref>) parameterized by DNNs for the reconstruction scheme. We propose a new loss function with regularization terms depending on the derivatives of the residuals for PDEs and measurements. The mollification method has been employed to improve the regularity of the noisy data. Also, the generalization error estimates are rigorously derived from the conditional stability of the linear inverse source problem and the mollification error estimate on noisy data. §.§ Literature. The reconstructions of q(x) in (<ref>) from some inversion input data have been studied extensively. For zero initial status, the uniqueness for q(x) by (<ref>)-(<ref>) is established in <cit.>, while the unique reconstruction using final measurement data is studied in <cit.>. In the case of non-zero initial status, the existence and uniqueness of the generalized solution (u(x,t),q(x))∈ W_p^2,1(Ω_T) × L^p(Ω) with the time-average temperature measurement are given in <cit.> for (u_0,φ) with some regularities. Choulli and Yamamoto <cit.> prove the generic well-posedness of the inverse problem in Hölder spaces by final measurement data, and then the conditional stability result in a Hilbert space setting for sufficiently small T is studied in <cit.>. Chen et al <cit.> consider the inverse potential problem from a partial measurements over [T_0,T_1]×Ω with [T_0,T_1]⊂ [0,T], where the conditional stability estimates of the inverse problem in some Sobolev space and the reasonable convergence rates of the Tikhonov regularization are derived. Recently, Jin et al <cit.> uses the same observational data and shows a weighted L^2 stability in the standard L^2 norm under a positivity condition. They provide an error analysis of reconstruction scheme based on the standard output least-squares formulation with Tikhonov regularization (by an H^1-seminorm penalty). Zhang et al <cit.> prove the uniqueness of the identification from final time data for (sub)diffusion equation and show the conditional stability in Hilbert spaces under some suitable conditions on the problem data. The convergence and error analysis of the reconstruction discrete scheme are rigorously analyzed. The investigations in the inverse non-smooth potential problem are given in <cit.>, where the uniqueness for this nonlinear inverse problem is proved. Numerically, an iterative process called two-point gradient method is proposed by minimizing the data-fit term and the penalty term alternatively, with a convergence analysis in terms of the tangential condition. There also exists some works involving multiple coefficient identification. For example, Yamamoto and Zou <cit.> investigate the simultaneous reconstruction of the initial temperature and heat radiative coefficient in a heat conductive system, with stability of the inverse problem and the reconstruction scheme. Kaltenbacher and Rundell <cit.> consider the inverse problem of simultaneously recovering two unknowns, spatially dependent conductivity and the potential function from overposed data consisting of u(x,T). The uniqueness result and the convergence of an iteration scheme are established. We also refer to <cit.> and the references therein for the inverse potential problems in diffusion models from different types of observational data. 
Recently, deep learning methods for solving PDEs have been recognized as an effective approach, especially for high-dimensional PDEs. Such methods have the advantage of mitigating the curse of dimensionality. The basic idea is to use neural networks (nonlinear functions) to approximate the unknown solutions of PDEs by learning the parameters. For the forward problems, there exist many numerical works with deep neural networks, including the deep Ritz method (DRM) <cit.>, the deep Galerkin method (DGM) <cit.>, the DeepXDE method <cit.>, the deep operator network method (DeepONet) <cit.>, physics-informed neural networks (PINNs) <cit.>, the weak adversarial network (WAN) <cit.> and so on. Theoretically, there are some rigorous analysis works investigating the convergence and error estimates for the solution of PDEs via neural networks, but the results are still far from complete. For example, the convergence rates of DRM with two-layer networks and deep networks are studied in <cit.>; the convergence of PINNs is given in <cit.>. For the inverse problems, the PINNs frameworks can be employed to solve the so-called data assimilation or unique continuation problems, and rigorous estimates on the generalization error of PINNs are established in <cit.>. Bao et al. <cit.> develop the WAN to solve the electrical impedance tomography (EIT) problem. In <cit.>, the authors study a classical linear inverse source problem using the final time data under the framework of neural networks, where a rigorous generalization error estimate is proposed with a novel loss function including the Sobolev norm of some residuals. For more specific inverse problems applied in engineering and science, we refer to <cit.>. §.§ Outline. The rest of this article is organized as follows. In Section <ref> we introduce the necessary background on neural networks and the mollification setting. In Section <ref>, we first introduce a conditional stability result for the linear inverse source problem. Then the uniqueness theorem (Theorem <ref>) for this inverse potential problem is proved from the conditional stability. In Section <ref>, a novel loss function with specific regularization terms is introduced. Then we prove the generalization error estimates for the data-driven solution of the inverse problem, which is stated in Theorem <ref>. In Section <ref>, we propose the reconstruction algorithm and provide several experiments to show the validity of the proposed algorithm. § PRELIMINARIES. §.§ Neural network architecture. First we briefly introduce the basic setup of neural networks. Note that u_θ and q_η are two separate networks with different variables (x,t) and x. Thus, we use ξ to denote collectively the network parameters for a parametric function s_ξ(z) such that a general scheme can be applied for either u_θ(x,t) (with z=(x,t), ξ=θ) or q_η(x) (with z=x, ξ=η). For a positive integer K∈ℕ, a K-layer feed-forward neural network s_ξ(z) for z∈ℝ^d_0 is a function defined by s_ξ(z):=W_K l_K-1∘⋯∘ l_1(z)+b_K, where the k-th layer l_k: ℝ^d_k-1→ℝ^d_k is given by l_k(z)=σ(W_k z+b_k) with weights W_k∈ℝ^d_k× d_k-1 and biases b_k∈ℝ^d_k for k=1, ⋯, K-1. The activation function σ(·) includes sigmoid, tanh, ReLU (Rectified Linear Unit), softmax and so on <cit.>. These activation functions introduce non-linearities and enable the network to learn complex patterns and relationships in the data.
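As a minimal illustration of the feed-forward network s_ξ(z) just defined, the following NumPy sketch evaluates it with tanh hidden layers; the layer widths and initialization are placeholders rather than the settings used later in the experiments.

```python
import numpy as np

def init_params(dims, rng):
    """xi = {(W_k, b_k)} for dims = [d_0, d_1, ..., d_K]."""
    return [(rng.standard_normal((dims[k], dims[k - 1])) / np.sqrt(dims[k - 1]),
             np.zeros(dims[k])) for k in range(1, len(dims))]

def s_xi(z, params):
    """s_xi(z) = W_K l_{K-1}( ... l_1(z) ... ) + b_K with l_k(z) = sigma(W_k z + b_k)."""
    for W, b in params[:-1]:
        z = np.tanh(W @ z + b)   # hidden layers with activation sigma = tanh
    W_K, b_K = params[-1]
    return W_K @ z + b_K         # affine output layer

rng = np.random.default_rng(0)
params = init_params([3, 20, 20, 20, 1], rng)     # e.g. input (x, y, t) for u_theta
print(s_xi(np.array([0.5, 0.5, 0.1]), params))
```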
The neural network (<ref>) consists of an input layer with argument z, where d_0=d is the problem dimension (also known as the size of input layer), an output layer which has the weights W_K∈ℝ^d_K× d_K-1 and biases b_K∈ℝ^d_K, and K-1 hidden layers for some K∈ℕ. The network parameters of all layers are collectively denoted by ξ:=(W_K, b_K, W_K-1, b_K-1, ⋯, W_1, b_1). In Figure <ref>, we give a simple architectures of fully connected neural networks, where z=(x_1,x_2,⋯, x_d) is d-dimensional input variables, and the neural networks function is given as s_ξ(z)=y_NN. §.§ Mollification. In the practical applications of inverse problems, the noise of the measurements is unavoidable. The noisy data will make the residuals uncontrollable, which can be seen in the next section. Hence, we choose to mollify the measured data beforehand. The next is the introduction of mollification. Fix one function ρ∈ C^2(ℝ) as suppρ=(0,1), ρ(0)=ρ(1)=ρ'(0)=ρ'(1)=0, and ∫_0^∞ρ(t) t^d-1 dt=1/π_d, with π_d is the surface area of unit sphere B(0,1) in R^d. Set ρ_ϵ(x):=ϵ^-dρ(x/ϵ), and define the mollifier as G_ϵψ=∫_|x-y|≤ϵρ_ϵ(|x-y|)ψ(y) dy. Then we have ∫_ℝ^dρ_ϵ(|x-y|) dy=1. In the next lemma, we concern with the estimate of Δφ-Δ G_ϵ(φ^δ). Assume that the noisy data φ^δ∈ L^∞(Ω) and the exact data u(x,T):=φ(x)∈ H^2(Ω) satisfy φ-φ^δ_L^∞(Ω)≤δ. Also, the exact data imposes the high-order Lipschitz continuous condition. More precisely, we can find a positive constant C_φ such that |φ(x)-φ(y)| ≤ C_φ|y-x|, |Δφ(x)-Δφ(y)| ≤ C_φ|y-x|, for x,y∈Ω uniformly. For the mollification operator (<ref>), if we pick ϵ=O(δ^1/3), then we can achieve the following optimal error bound Δφ-Δ G_ϵ( φ^δ)_L^∞(Ω)≤ Cδ^1/3. We split the subtraction Δφ-Δ G_ϵ( φ^δ) as following: Δφ-Δ G_ϵ( φ^δ) = (Δφ-G_ϵ(Δφ))+(G_ϵ(Δφ)-Δ G_ϵ( φ^δ))=:I_1+I_2. For I_1, we have that |I_1|≤∫_|x-y|≤ϵρ_ϵ(|x-y|) |Δφ(x)-Δφ(y)| dy ≤ Cϵ. For I_2, Green's identities and the properties of the kernel function ρ give that Δ G_ϵ( φ)=G_ϵ( Δφ). Hence, I_2 =Δ[ ∫_R^dρ_ϵ(|x-y|) (φ(y)-φ^δ(y)) dy] =∫_R^dΔρ_ϵ(|x-y|) (φ(y)-φ^δ(y)) dy. From the straightforward calculation, we can deduce that Δρ_ϵ(|x-y|)=ϵ^-d-2ρ”(|x-y|/ϵ)+(d-1)ϵ^-d-1|x-y|^-1ρ'(|x-y|/ϵ), which gives |I_2|≤δ∫_|x-y|≤ϵ|Δρ_ϵ(|x-y|)| dy ≤ C δϵ^-2. So we have | Δφ-Δ G_ϵ(φ^δ)|≤ Cϵ(1+δϵ^-3). By picking ϵ=O(δ^1/3), we can achieve the desired estimate and complete the proof. § UNIQUENESS. The uniqueness of this inverse potential problem is one of our main results. In this section, we will prove the uniqueness and the proof relies on the conditional stability of the inverse source problem of equation (<ref>). The conditional stability will be stated in the next subsection. §.§ Conditional stability of the inverse source problem. Under the framework of DNNs, the total error of the reconstructed solution depends on the training error and the measurement error. This connection relies on the conditional stability of linear inverse source problem, i.e., the quantitative dependence of the unknown source on the measurement data. Sequentially, here we will introduce some known results for the linear inverse source problem. The mathematical statement of inverse source problem in parabolic equations with final time data is given below. For the parabolic equation (∂_t-Δ +q(x))v(x,t) =p(x)h(x,t), (x,t)∈Ω_T, v(x,t) =0, (x,t)∈∂Ω_T, v(x,0) =0, x∈Ω, we set q≥ 0 and q∈ L^∞ (Ω), and h(x,t) is given. Then the inverse source problem is to use the measurement φ(x):=v[p](x,T) to recover the unknown p(x) in the source term. 
Recalling the norm of the classical Sobolev space W^2,1_2(Ω_T) as u_W^2,1_2(Ω_T)=√(∑_|α|≤ 2D^αu^2_L^2(Ω_T)+u_t^2_L^2(Ω_T)), the following classical result on the inverse source problem (<ref>)-(<ref>) can be found in <cit.>. For equation (<ref>), we assume that h∈ L^∞(Ω_T), h_t∈ L^∞(Ω_T), p(x)∈ L^2(Ω), ph∈ L^2(Ω_T), ph_t ∈ L^2(Ω_T), and h(x,t)≥ 0, h_t(x,t)≥ 0 on Ω_T, |h(x,T)|≥ν >0 on Ω. Here ν is a fixed positive number. Then, for known q∈ L^∞(Ω) and input data φ∈ H^2(Ω), there exists a unique solution (v(x,t), p(x)) ∈ W^2,1_2(Ω_T)× L^2(Ω) to (<ref>)-(<ref>), following the estimate p_L^2(Ω)+v_W^2,1_2(Ω_T)≤ C (-Δ+q)φ_L^2(Ω). The constant C depends on q_L^∞(Ω), ν, Ω and T. §.§ Uniqueness theorem. Now it is time to show the uniqueness theorem. First we introduce the admissible set for the unknown potential q(x) as 𝒜:={ψ∈ L^∞(Ω): 0≤ψ(x)≤ M a.e. on Ω}⊂ L^2(Ω). The constant M is the given upper bound of the admissible set. Next, recalling equation (<ref>), we collect some restrictions on the controllable source F(x,t), initial status u_0(x) and boundary condition b(x,t). The assumptions on F(x,t), u_0(x) and b(x,t) are given as follows. * u_0(x)∈ H^2(Ω), u_0(x)=b(x,0) on ∂Ω, ∃ν>0 such that u_0(x)≥ν >0 on Ω; * b∈ H^2(∂Ω), b≥ν>0 on ∂Ω, b_t ≥ 0 on ∂Ω; * F∈ L^2(Ω_T), F_t∈ L^2(Ω_T), F≥ 0 on Ω_T, F_t≥ 0 on Ω_T; * Δ u_0(x)-Mu_0(x)+F(x,0)≥ 0 on Ω. Under Assumption <ref>, the inverse problem (<ref>)-(<ref>) has at most one solution in W^2,1_2(Ω_T)×𝒜. Assume that there are two distinct pairs (u[q_1], q_1) and (u[q_2], q_2) satisfying (<ref>)-(<ref>) with same data u[q_1](x,T)=u[q_2](x,T)=φ(x). Setting w(x,t):=u[q_1](x,t)-u[q_2](x,t), q(x):=q_2(x)-q_1(x), then w(x,t) meets the system (∂_t-Δ +q_1(x))w(x,t) =q(x)u[q_2](x,t), (x,t)∈Ω_T, w(x,t) =0, (x,t)∈∂Ω_T, w(x,0) =0, x∈Ω, with w(x,T) = 0, x∈Ω. We need to prove that (w(x,t), q(x))=(0,0) in W^2,1_2(Ω_T)× L^∞ (Ω). Obviously q(x)∈ L^2(Ω). Also there holds u[q_2]∈ L^∞(Ω_T) and u_t[q_2]∈ L^∞(Ω_T) by <cit.>. Then we have q u[q_2]∈ L^2(Ω_T), q u_t[q_2]∈ L^2(Ω_T). Under Assumption <ref> and the maximum principle, we can see that u[q_2]≥ 0 on Ω_T. For u_t[q_2], with Assumption <ref> and equation (<ref>), it satisfies (∂_t -Δ +q_2(x)) (u_t[q_2]) =F_t(x,t)≥0, (x,t)∈Ω_T, u_t[q_2](x,t) =b_t(x,t)≥ 0, (x,t)∈∂Ω_T, u_t[q_2](x,0) =Δ u_0(x)-q_2u_0(x)+F(x,0)≥ 0, x∈Ω. Then the maximum principle leads to u_t[q_2]≥ 0 straightforwardly. With the positivity of u_t[q_2], we derive that u[q_2](x,t)=u_0(x)+∫_0^t ∂_s u[q_2](x,s) ds ≥ u[q_2](x,0)≥ν >0, (x,t)∈Ω_T, which yields u[q_2](x,T)≥ν>0. Now the conditions of Lemma <ref> are satisfied, and we conclude (w(x,t),q(x))=(0,0) by applying Lemma <ref> on (<ref>)-(<ref>). The proof is complete. § GENERALIZATION ERROR ESTIMATES. In this section, we will discuss the error estimate of our approach for the inverse potential problem. Firstly, we introduce the corresponding residuals and define the loss function. §.§ Loss function and training errors. We propose a formulation of loss function for data-driven solutions of inverse problems, which can ensure the accuracy with the conditional stability of the given linear inverse source problem. To achieve it, we define suitable residuals that measure the errors of the governed system and the input data. Assume that the activation function is of C^2 regularity for the neural network u_θ defined by (<ref>), which leads to u_θ∈ H^2(Ω× [0, T]). 
For the network parameters θ∈Θ:={(W_k, b_k)}_k=1^K :W_k∈ℝ^d_k× d_k-1, b_k∈ℝ^d_k}, the set of all possible trainable parameters u_θ(x,t) up to its second order weak derivatives are bounded in Ω× [0,T] for any specific θ. Similarly, noticing that q_η(x) is the parametric neural network to approximate the potential function q(x), we assume the activation function for the neural network q_η(x) is of L^∞ regularity such that q_η(x)∈ L^∞(Ω). We define * Interior PDE residual ℛ_int,θ,η(x, t):=∂_t u_θ(x, t)-Δ u_θ(x, t)+ q_η(x)u_θ(x, t)-F(x,t), (x,t) ∈Ω_T. * Spatial boundary residual ℛ_sb,θ(x, t):=u_θ(x, t)-b(x,t), (x,t) ∈∂Ω_T. * Initial status residual ℛ_tb, θ(x):=u_θ(x, 0)- u_0(x), x ∈Ω. * Data residual ℛ_d, θ(x):=u_θ(x, T)- G_ϵφ^δ(x), x ∈Ω. Note that in the data residual (<ref>), we use the mollified data G_ϵφ^δ(x) instead of the noisy data φ^δ(x). A loss function minimization scheme for data-driven inverse problems seeks to minimize these residuals comprehensively with some weights balancing different residuals. The loss function is defined as follows: J_λ(θ,η) = q_ηℛ_d,θ^2_L^2(Ω) +Δℛ_d,θ^2_L^2(Ω) + λℛ_int,θ,η^2_H^1(0,T;L^2(Ω)) +ℛ_tb,θ^2_L^2(Ω) +q_ηℛ_tb,θ^2_L^2(Ω) +Δℛ_tb,θ^2_L^2(Ω)+ℛ_sb,θ^2_H^2(0,T;L^2(∂Ω)), where λ is a hyper-parameter to balance the residuals between the knowledge of PDE and the measurements. The proposed loss function (<ref>) includes derivative penalties on the residuals. This is motivated by the conditional stability result for linear inverse source problem, which requires higher regularity on the measurement data u(·,T) (see Lemma <ref>). To improve the regularity of the noisy measurement data, we employ the mollification method by applying the mollification operator G_ε on the noisy data φ^δ. The design of the loss function for inverse problems distinguishes itself from that for forward problems such as physics-informed neural networks. The smoothness requirements not only ensure the existence of forward problem solutions, but also ensure the well-posedness of the inverse problem within the optimization framework. The following standard loss function J^s(θ,η) = ℛ_d,θ_L^2(Ω)^2 +λℛ_int,θ,η_L^2(Ω_T)^2+ ℛ_tb,θ_L^2(Ω)^2 +ℛ_sb,θ_L^2(∂Ω_T)^2 has often been used in the literature. For example, the DGM workflow adopts this form of loss function and minimizes it by least squares scheme <cit.>. To determine (θ,η) from the discrete training set, accurate numerical evaluation of the integrals in (<ref>) is essential. We introduce the following training sets that facilitate efficient computation of the integrals, leading to better performance: 𝒮_d :={(x_n,T): x_n∈Ω, n=1,2,⋯,N_d}, 𝒮_int :={(x_n,t_n): (x_n,t_n)∈Ω_T, n=1,2,⋯,N_int}, 𝒮_tb :={(x_n,0): x_n∈Ω, n=1,2,⋯,N_tb}, 𝒮_sb :={(x_n,t_n): (x_n,t_n)∈∂Ω_T, n=1,2,⋯,N_sb}. Applying these sets and the numerical quadrature rules <cit.>, we get the following empirical loss function J_λ^N(θ,η) =∑_n=1^N_dω_n^d,0|q_η(x_n)ℛ_d,θ(x_n)|^2+ ∑_n=1^N_dω_n^d,1|Δℛ_d,θ(x_n)|^2 +λ∑_n=1^N_intω_n^int,0|ℛ_int,θ,η(x_n, t_n)|^2 +λ∑_n=1^N_intω_n^int,1|∂_tℛ_int,θ,η(x_n, t_n)|^2 +∑_n=1^N_tbω_n^tb,0|ℛ_tb,θ(x_n)|^2 + ∑_n=1^N_tbω_n^tb,1|q_η(x_n)ℛ_tb,θ(x_n)|^2 +∑_n=1^N_tbω_n^tb,2|Δℛ_tb,θ(x_n)|^2 +∑_n=1^N_sbω_n^sb,0|ℛ_sb,θ(x_n, t_n)|^2 +∑_n=1^N_sbω_n^sb,1|∂_tℛ_sb,θ(x_n, t_n)|^2 ∑_n=1^N_sbω_n^sb,2|∂_t^2ℛ_sb,θ(x_n, t_n)|^2, where the coefficients ω^d,k_n, ω^int,k_n, ω^tb,j_n, ω^sb,j_n, k=0,1, j=0,1,2 are the quadrature weights. 
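For concreteness, the following PyTorch sketch shows how the interior residual ℛ_int, its time derivative, and the measurement term can be assembled with automatic differentiation at collocation points. The boundary, initial, and Δℛ_d terms are omitted for brevity, the quadrature weights are taken uniform, and all function and variable names are our own placeholders rather than part of the paper's code.

```python
import torch

def interior_residual(u_net, q_net, x, t, F):
    """R_int(x,t) = u_t - Laplace_x(u) + q(x) * u - F(x,t) at collocation points (x, t)."""
    x.requires_grad_(True)
    t.requires_grad_(True)
    u = u_net(torch.cat([x, t], dim=1))
    grad = lambda out, inp: torch.autograd.grad(
        out, inp, grad_outputs=torch.ones_like(out), create_graph=True)[0]
    u_t = grad(u, t)
    u_x = grad(u, x)                                              # shape (N, d)
    lap = sum(grad(u_x[:, i:i + 1], x)[:, i:i + 1] for i in range(x.shape[1]))
    return u_t - lap + q_net(x) * u - F(x, t)

def empirical_loss(u_net, q_net, x_int, t_int, F, x_d, phi_eps, T, lam=1e-2):
    r_int = interior_residual(u_net, q_net, x_int, t_int, F)
    # d/dt of R_int via one more autograd pass
    r_int_t = torch.autograd.grad(r_int, t_int, grad_outputs=torch.ones_like(r_int),
                                  create_graph=True)[0]
    t_T = torch.full_like(x_d[:, :1], T)
    r_d = u_net(torch.cat([x_d, t_T], dim=1)) - phi_eps          # u_theta(x, T) - G_eps(phi^delta)
    return (lam * (r_int.pow(2).mean() + r_int_t.pow(2).mean())
            + (q_net(x_d) * r_d).pow(2).mean())
```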
It is easy to see that the error for the loss function is |J_λ(θ,η)-J_λ^N(θ,η)| ≤ Cmin{ N_d^-α_d,k,N_int^-α_int,k,N_tb^-α_tb,j, N_sb^-α_sb,j:k=0,1, j=0,1,2}, where C depends on the continuous norm ·_C(Ω) of the integrals, the rate α^d,k, α^int,k, α^tb,j, α^sb,j (k=0,1, j=0,1,2) are positive and depend on the regularity of the underlying integrand i.e, on the space C(Ω). Therefore, the underlying solutions and neural networks should be sufficiently regular such that the residuals can be approximated to a high accuracy by the quadrature rule. Now, we define the generalization errors as ℰ_G,q:=q-q^*_L^2(Ω), ℰ_G,u:=u-u^*_C([0, T] ; L^2(Ω)), where u^*:=u_θ^*, q^*:=q_η^* with (θ^*,η^*) is the minimizer of the functional (<ref>). Also, we estimate generalization errors in terms of the following training errors: * The measurement data training errors: ℰ_T,d:=ℰ_T,d,0+ℰ_T,d,1, where ℰ_T,d,0:=(∑_n=1^N_dω_j^d,0| q_η(x_n)ℛ_d,θ^*(x_n)|^2)^1/2, ℰ_T,d,1:=(∑_n=1^N_dω_j^d,1|Δℛ_d,θ^*(x_n)|^2)^1/2. * The interior PDE training errors: ℰ_T,int:=ℰ_T,int,0+ℰ_T,int,1, where ℰ_T,int,0:=(∑_n=1^N_intω_n^int,0|ℛ_int,θ^*,η^*(x_n, t_n)|^2)^1/2, ℰ_T,int,1:=(∑_n=1^N_intω_n^int,1|∂_tℛ_int,θ^*,η^*(x_n, t_n)|^2)^1/2. * The initial condition training errors: ℰ_T,tb:=ℰ_T,tb,0+ℰ_T,tb,1+ℰ_T,tb,2, where ℰ_T,tb,0 :=(∑_n=1^N_tbω_n^tb,0|ℛ_tb,θ^*(x_n)|^2)^1/2, ℰ_T,tb,1 :=(∑_n=1^N_tbω_n^tb,1|q_η(x_n)ℛ_tb,θ^*(x_n)|^2)^1/2, ℰ_T,tb,2 :=(∑_n=1^N_tbω_n^tb,2|Δℛ_tb,θ^*(x_n)|^2)^1/2. * The spatial boundary condition training errors: ℰ_T,sb:=ℰ_T,sb,0+ℰ_T,sb,1+ℰ_T,sb,2, where ℰ_T,sb,0 :=(∑_n=1^N_sbω_n^sb,0|ℛ_sb,θ^*(x_n, t_n)|^2)^1/2, ℰ_T,sb,1 :=(∑_n=1^N_sbω_n^sb,1|∂_tℛ_sb,θ^*(x_n, t_n)|^2)^1/2, ℰ_T,sb,2 :=(∑_n=1^N_sbω_n^sb,2|∂_t^2ℛ_sb,θ^*(x_n, t_n)|^2)^1/2. §.§ Proofs of the estimates. Now we can state the theorem about the generalization error estimates. Recall the errors defined in (<ref>)-(<ref>). Under Assumption <ref>, there exists a unique solution to the inverse problem (<ref>)-(<ref>). Moreover, for the approximate solution (u^*,q^*) of the inverse problem with (θ^*,η^*) being a global minimizer of the loss function J_λ^N(θ,η), we have the following generalization error estimates ℰ_G,q ≤ C( ℰ_T,d+ ℰ_T,int + ℰ_T,sb,1 + ℰ_T,sb,2 + ℰ_T,tb,1 + ℰ_T,tb,2 +C_q^1/2 N^-α/2 +O(δ^1/3)), ℰ_G,u ≤ C( ℰ_T,d+ ℰ_T,int + ℰ_T,sb + ℰ_T,tb +C_q^1/2 N^-α/2 +O(δ^1/3)), where N =min{N_d, N_int,N_sb,N_tb}, α =min{α_int,0,α_int,1,α_sb,0,α_sb,1,α_sb,2,α_tb,0,α_tb,1,α_d}, in (<ref>), and C_q=max{C_q,0,C_q,1, C_qs,0,C_qs,1,C_qs,2,C_qt,0,C_qt,1, C_qd}, with C_qd=C_qd(ℒ^*ℛ_d,θ^*_C(Ω)), C_q,0=C_q,0(ℛ_int,θ^*,η^*_C(Ω_T)), C_q,1=C_q,1(∂_tℛ_int,θ^*,η^*_C(Ω_T)), C_qs,0=C_qs,0(ℛ_sb,θ^*_C(∂Ω_T)), C_qs,1=C_qs,1(∂_tℛ_sb,θ^*_C(∂Ω_T)), C_qs,2=C_qs,2(∂_t^2ℛ_sb,θ^*_C(∂Ω_T)), C_qt,0=C_qt,0(ℛ_tb,θ^*_C(Ω)), C_qt,1=C_qt,1(ℒ^*ℛ_tb,θ^*_C(Ω)). The constant C depends on q^*_L^∞(Ω), Ω and T. First, we introduce û:=u^*-u and realize that (∂_t -Δ +q^*(x))û(x,t) =ℛ_int,θ^*,η^*(x, t)+(q-q^*)u[q](x,t), (x,t)∈Ω_T, û(x,t) =ℛ_sb,θ^*(x,t), (x,t)∈∂Ω_T, û(x,0) =ℛ_tb,θ^*(x), x∈Ω, with the final condition û(x,T)=u[q^*](x,T)-u[q](x,T)=ℛ_d,θ^*(x)-(φ-G_ϵφ^δ). We make the decomposition û:=û_1+û_2, where û_1, û_2 satisfy (∂_t -Δ +q^*(x))û_1(x,t) =(q^*-q)(x)u[q](x,t), (x,t)∈Ω_T, û_1(x,t) =0, (x,t)∈∂Ω_T, û_1(x,0) =0, x∈Ω, with û_1(x,T)=ℛ_d,θ^*(x)-(φ(x)-G_ϵφ^δ(x))-û_2(x,T), and (∂_t -Δ + q^*(x))û_2(x,t) =ℛ_int,θ^*,η^*(x, t), (x,t)∈Ω_T, û_2(x,t) =ℛ_sb,θ^*(x,t), (x,t)∈∂Ω_T, û_2(x,0) =ℛ_tb,θ^*(x), x∈Ω, respectively. Define the operator ℒ^* as ℒ^* ψ= (-Δ+q^*)ψ. 
With Assumption <ref>, we can apply Lemma <ref> to (<ref>)-(<ref>) and deduce that q^*-q_L^2(Ω) ≤ Cℒ^*û_1(·,T)_L^2(Ω) = Cℒ^* ℛ_d,θ^*-ℒ^*(φ-G_ϵφ^δ)-ℒ^*û_2(·,T)_L^2(Ω) = Cℒ^* ℛ_d,θ^*+Δφ-Δ G_ϵφ^δ-q^*(φ-G_ϵφ^δ)-ℒ^*û_2(·,T)_L^2(Ω) ≤ C (ℒ^*ℛ_d,θ^*_L^2(Ω)+Δφ-Δ G_ϵφ^δ_L^2(Ω)+q^*(φ-G_ϵφ^δ)_L^2(Ω)+ ℒ^*û_2(·,T)_L^2(Ω)), with C=C(q^*_L^∞(Ω),Ω,T). Using Lemma <ref>, we get Δφ-Δ G_ϵφ^δ_L^2(Ω) ≤ Cϵ(1+δϵ^-3). Also, we have |(φ-G_ϵφ^δ)| ≤∫_|x-y|≤ϵρ_ϵ(|x-y|) |φ(x)-φ^δ(y)| dy ≤∫_|x-y|≤ϵρ_ϵ(|x-y|) |φ(x)-φ(y)|dy + ∫_|x-y|≤ϵρ_ϵ(|x-y|) |φ(y)-φ^δ(y)| dy ≤ Cϵ + δ . Thus, there holds q^*(φ-G_ϵφ^δ)_L^2(Ω)≤ Cϵ + δ. By straightforward computations, we have ℒ^*û_2(·,T)_L^2(Ω) ≤∂_tû_2(·,T)_L^2(Ω)+ ℛ_int,θ^*,η^*(·, T)_L^2(Ω) ≤∂_tû_2_L^∞(0,T;L^2(Ω))+ ℛ_int,θ^*,η^*(·, T)_L^2(Ω). Setting w(x,t):=∂_t û_2(x,t), it satisfies (∂_t -Δ +q^*)w(x,t) =∂_tℛ_int,θ^*,η^*(x, t), (x,t)∈Ω_T, w(x,t) =∂_t ℛ_sb,θ^*(x,t), (x,t)∈∂Ω_T, w(x,0) =ℛ_int,θ^*,η^*(x, 0)-ℒ^*ℛ_tb,θ^*(x), x∈Ω. Using the regularity theory for the direct problem (<ref>), we obtain ∂_tû_2_L^∞(0,T;L^2(Ω)) =w_L^∞ (0,T;L^2(Ω)) ≤ C( ∂_tℛ_int,θ^*,η^*_L^2(Ω_T) +∂_tℛ_sb,θ^*_H^1(0,T;L^2(∂Ω)) +ℛ_int,θ^*,η^*(·,0)_L^2(Ω) +ℒ^*ℛ_tb,θ^*_L^2(Ω)). Combining (<ref>)-(<ref>) together and using the Sobolev embedding theorem, we get ℰ_G,q =q^*-q_L^2(Ω) ≤ C(ℛ_int,θ^*,η^*_H^1(0,T;L^2(Ω)) +ℒ^*ℛ_d,θ^*_L^2(Ω)+ℒ^*ℛ_tb,θ^*_L^2(Ω) +∂_tℛ_sb,θ^*_H^1(0,T;L^2(∂Ω)) +ϵ(1+δϵ^-3)+ϵ+δ) ≤ C (ℛ_int,θ^*,η^*_H^1(0,T;L^2(Ω)) +q^*ℛ_d,θ^*_L^2(Ω) +Δℛ_d,θ^*_L^2(Ω)+q^*ℛ_tb,θ^*_L^2(Ω) +Δℛ_tb,θ^*_L^2(Ω) +∂_tℛ_sb,θ^*_H^1(0,T;L^2(∂Ω)) +ϵ(1+δϵ^-3)+ϵ+δ), with C=C(q^*_L^∞(Ω),Ω,T)>0. Picking ϵ=O(δ^1/3), we achieve the estimate ℰ_G,q =q^*-q_L^2(Ω) ≤ C ( ℛ_int,θ^*,η^*_H^1(0,T;L^2(Ω)) +q^*ℛ_d,θ^*_L^2(Ω) +Δℛ_d,θ^*_L^2(Ω) +q^*ℛ_tb,θ^*_L^2(Ω) +Δℛ_tb,θ^*_L^2(Ω) +∂_tℛ_sb,θ^*_H^1(0,T;L^2(∂Ω)) +O(δ^1/3)). Finally, we will evaluate the generalization error for the unknown u(x,t) employing the obtained generalization error (<ref>) of the potential function. From the classical regularity theory for PDE, if F(x,t) is sufficiently smooth, then it holds that u∈L^2(0,T;L^∞(Ω)). Consequently, ℰ_G,u =û_C([0,T];L^2(Ω)) ≤ C(ℛ_int,θ^*,η^*+(q^*-q)u[q]_L^2(Ω_T)+ℛ_sb,θ^*_H^1(0,T;L^2(∂Ω)) + ℛ_tb,θ^*_L^2(Ω)) ≤ C(ℛ_int,θ^*,η^*_L^2(Ω_T) +q^*-q_L^2(Ω) u_L^2(0,T;L^∞(Ω))+ℛ_sb,θ^*_H^1(0,T;L^2(∂Ω)) +ℛ_tb,θ^*_L^2(Ω)) ≤ C (ℛ_int,θ^*,η^*_H^1(0,T;L^2(Ω)) +q^*ℛ_d,θ^*_L^2(Ω) +Δℛ_d,θ^*_L^2(Ω)+ℛ_tb,θ^*_L^2(Ω) +q^*ℛ_tb,θ^*_L^2(Ω)+ Δℛ_tb,θ^*_L^2(Ω)+ℛ_sb,θ^*_H^2(0,T;L^2(∂Ω))+O(δ^1/3)). The proof is complete. The estimate (<ref>) demonstrates that well-trained neural networks will produce small generalization errors for the inverse problem. Specifically, when all components of the training errors, including the interior PDE errors, measurement errors as well as the initial and boundary value ones, are sufficiently small, and the training sampling is large enough, the generalization errors for inverse problem using neural networks can be limited well. This differs from classical stability results that rely solely on the knowledge of data. In this work, the generalization error estimates reflect stability due to both the model itself and the reconstruction algorithm. From Theorem <ref>, we see that we can limit the errors of both the inverse and forward problems by controlling the residuals and the mollified parameter. This provides important insights into the mathematical properties of our approach and plays an important role on the construction of algorithms. § NUMERICAL RECONSTRUCTIONS. §.§ Reconstruction algorithm. 
The neural networks u_θ(x,t) and q_η(x) depend on the parameters θ and η describing the networks information for specific activation functions. Within the standard paradigm of deep learning, one trains the networks by finding the optimal parameters (θ^*,η^*) such that the loss function (<ref>) is minimized. Our target is the unknown solution of the inverse problem (<ref>)-(<ref>) and we wish to find the trainable parameters (θ^*, η^*) such that the corresponding neural networks (u_θ^*, q_η^*) approximate (u,q) well. More precisely, to solve (<ref>)-(<ref>) we first parameterize u and q by deep neural networks u_θ and q_η with network parameters (θ,η) respectively. Then, we design an appropriate loss function, which is minimized to determine the parameters (θ,η). Finally, a gradient-based method is applied to alternately update the network parameters so that (u_θ, q_η) gradually approximates (u,q) for our inverse problem. We provide a schematic of the neural networks in Figure <ref>. The left part visualizes two unknowns as two standard neural networks parameterized by θ and η, respectively. The right part applies the given physical laws to the networks. B.C., I.C. and D are the boundary condition, initial status and the measurement data obtained from random sample points in training sets 𝒮_sb, 𝒮_tb and 𝒮_d respectively. The training points in 𝒮_int are randomly sampled as the PDE residuals points in interior spatio-temporal domain. The loss function with some Sobolev norm is computed on the sample points, which can be done efficiently through automatic differentiation (AD) in case of derivative information. Minimizing the loss with respect to the parameters (θ, η) alternately produces u_θ^∗ and q_η^∗, which serves as the approximation to the solution of the inverse problem. With the support of Theorem <ref>, we can construct the proposed Algorithm <ref> for solving the inverse problem (<ref>)-(<ref>). The above minimization problem is to search a minimizer of a possibly non-convex function J_λ^N(θ,η) over Θ⊂ℝ^ℳ for possibly very large ℳ. The hyper-parameters (τ_η,τ_θ) are learning rates and (λ_η,λ_θ) are balance hyper-parameters between PDE and measurement data residuals. The robust analysis for hyper-parameters λ and the architectures of neural networks are studied in the next subsection. The optimizer in Algorithm <ref> is Adam (Adaptive Moment Estimation), which is an optimization algorithm commonly used in deep learning for training neural networks. The key idea of Adam is to adaptively adjust the learning rate for each parameter based on estimates of both the first-order moment (the mean) and the second-order moment (the uncentered variance) of the gradients. This adaptation helps Adam to perform well in different types of optimization problems. The algorithm maintains an exponentially moving average of gradients (m_t) and squared gradients (V_t) for each parameter. At each iteration, Adam updates the parameters using a combination of these moving average estimates. It incorporates bias correction to account for the fact that the estimates are biased towards zero at the beginning of training. We set g_t be gradients w.r.t. stochastic objective at timestep t, β_1, β_2∈[0,1) be the exponential decay rates for the moment estimates and τ be the initial learning rate. Good default settings for the tested machine learning problems are τ=0.001, β_1=0.9, β_2=0.999 and fuzzy factor ϵ=10^-8. 
The updates are calculated as follows: (1) Initialize the first moment vector m and the second moment vector V with zeros for each parameter: m_0=V_0=0. (2) Update the first moment estimate m using a weighted average of the current gradient g_t and the previous first moment estimate m_t-1: m_t=β_1m_t-1+(1-β_1)g_t. (3) Update the second moment estimate V using a weighted average of the squared gradients and the previous second moment estimate V_t-1: V_t=β_2V_t-1+(1-β_2)g_t^2. (4) Calculate the bias-corrected first and second moment estimate to correct for their initialization bias: m̂_t=m_t/1-(β_1)^t, V̂_t=V_t/1-(β_2)^t. (5) Update the parameters ξ by moving in the direction of the first moment estimate, where the learning rate is τ divided by the square root of the second moment estimate: ξ_t=ξ_t-1-τm̂_t/√(V̂_t)+ϵ. The hyper-parameters in Adam include the learning rate and the decay rates for the moving averages. These hyper-parameters need to be tuned based on the specific problem and dataset to achieve optimal performance. Adam has several advantages that make it popular in deep learning: (a) Adaptive learning rate: Adam automatically adapts the learning rate for each parameter based on the estimated first and second moments. This adaptive behavior helps in effectively navigating the optimization landscape and can lead to faster convergence. (b) Efficiency: Adam uses the moving averages to maintain a history of gradients, which eliminates the need to store and compute gradients for each iteration separately. This makes Adam memory-efficient and allows for efficient parallelization during training. (c) Robustness: Adam performs well across a wide range of optimization problems and is less sensitive to hyper-parameter tuning compared to some other optimizers. It can handle sparse gradients and noisy data effectively. The proposed algorithm, which utilizes (<ref>) as the loss function, exhibits superior performance in recovering smooth solutions due to the high regularity of the PDEs residual term. This regularity term promotes smoother solutions and is an important factor in achieving higher accuracy. Furthermore, the use of automatic differentiation (AD) implementations enables the efficient calculation of the necessary derivatives. This feature is a significant advantage of our approach as it allows for the accurate optimization of the objective function, which is crucial for effective solution of inverse problems. To validate the effectiveness of the proposed algorithm and to substantiate our claims, we conduct a series of numerical experiments. §.§ Numerical experiments. In this subsection, we will present several numerical examples for the spatial domain Ω⊂ℝ^d with d=2,3. We define the following relative errors for exact solutions (u,q) and numerical approximations (u^*,q^*) as Re_u:=u-u^*_L^2(Ω_T)/u_L^2(Ω_T), Re_q:=q-q^*_L^2(Ω)/q_L^2(Ω), Re_Δ u:=Δ u-Δ u^*_L^2(Ω_T)/Δ u_L^2(Ω_T). Example 1 (two-dimensional experiment): For equation (<ref>), we set the exact solution u and the domain Ω_T as u(x,y,t)=(x^2+y^2+1)exp(t), (t,x,y)∈Ω_T=[0,1]^3. The exact potential q is given as q(x,y)=sin(π x)sin(π y). The initial and boundary conditions can be calculated from the representation of u straightforwardly. The exact measurement will be u(x,y,1)=φ(x,y)=(x^2+y^2+1)exp(1), and in our experiments the noisy data is set as φ^δ(x,y):=φ(x,y)+δ· (2 rand(shape(φ(x,y)))-1), where rand(shape(φ)) is a random variable generated by uniform distribution in [0,1]. 
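A minimal sketch of how the Example 1 measurement and its noisy version in the formula above can be generated on a uniform grid; the grid size and random seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def exact_measurement(x, y):
    # Example 1: phi(x, y) = u(x, y, 1) = (x^2 + y^2 + 1) * exp(1)
    return (x**2 + y**2 + 1.0) * np.exp(1.0)

def add_noise(phi, delta):
    # phi^delta = phi + delta * (2 * rand(shape(phi)) - 1) with rand ~ U[0, 1]
    return phi + delta * (2.0 * rng.uniform(size=phi.shape) - 1.0)

xs = ys = np.linspace(0.0, 1.0, 50)
X, Y = np.meshgrid(xs, ys, indexing="ij")
phi_delta = add_noise(exact_measurement(X, Y), delta=0.01)  # delta = 1% noise level
```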
For the implementation details, we use a fully connected neural network for u_θ and q_η with 3 hidden layers, each with a width of 20. We take N=N_int+N_sb+N_tb+N_d=256+256×4+256+256=1792 as the number of collocation points, which are randomly sampled in four different domains, i.e., interior spatio-temporal domain, spatial and temporal boundary domain, and additional measurement domain. The activation function is tanh(x) and the hyper-parameter is λ=0.01. The number of training epochs is set to be 5×10^4, and the initial learning rates τ_θ, τ_η both start with 0.001 and shrink 10 times every 2×10^4 iteration. The test sets are chosen by a uniform mesh 𝒯:={(t_k,x_i,y_j): k,i,j=0,1,⋯,49}⊂Ω_T. Since the noisy level of the measurement data affects the reconstruction accuracy, in this simulation, we test the training performance for various noisy levels. Figure <ref> records the training process, i.e., the training loss, the relative error for the reconstruction of q and the relative error for the recovery of u with respect to the iterations for different noise levels δ=0.1%, 1%, 5%, 10% by the proposed scheme. After training, we test the reconstruction result on test sets 𝒯. The distribution of the temperature field u(x,t) also depends on the time t, Figure <ref> shows the time-series relative error of the recovered u with various noise levels after logarithmic re-scaling. As shown in these figures, the training performance deteriorates as the noise level of the measurement data increasing. Figure <ref> shows the exact solution of the ptential term. Figure <ref> shows the reconstruction results for q(x) by optimizing the proposed loss function (first line) and the corresponding absolute pointwise error for various noisy level δ=0.1%,5%,10% (second line). Meanwhile, Figure <ref> presents the reconstruction solution u (first line) and corresponding absolute pointwise error (second line) for various noisy level measurement data at t=1/7. We can see that the reconstruction accuracy for q deteriorates as the noise level of the measurement data increasing, but the performance for u is still satisfactory. Table <ref> presents the recovery results solved by two schemes: (I) the proposed frameworks with the loss function (<ref>), (II) the DGM frameworks with the loss function (<ref>). We record the generalization error of q, u and Δ u in L^2-error from the noisy input data with δ=0.01. Due to the random sampling of the training data points, the inversion results have some stochasticity. Thus, we perform Algorithm <ref> with the loss function in the formulation (<ref>) and formulation (<ref>) five times, respectively. The relative errors (mean and standard deviation) for the recovery of q, u and Δ u are shown in Table <ref>. As observed, optimizing the loss function proposed in this paper leads to more accurate recovery results, especially for the reconstruction of q compared with DGM frameworks. Moreover, although the reconstruction accuracy of u in L^2-error for both two frameworks are relatively close, the accuracy of Δ u in L^2-error for proposed scheme in this paper performs better. This suggests that the proposed frameworks are better able to capture smooth solutions. Example 2 (two-dimensional experiment): For equation (<ref>), we set the exact solution u and the domain Ω_T as u(x,y,t)=texp(x+y), (t,x,y)∈Ω_T=[0,1]×[0,2]^2. The exact potential q is given as q(r) = 15(cos r-√(3)/2)+2, 0≤ r ≤π/6, 2, otherwise , r(x,y) =√((x-1)^2+(y-1)^2). 
The exact measurement will be u(x,y,1)=φ(x,y)=exp(x+y), and the noisy data φ^δ is generated by (<ref>). The network architectures and hyper-parameters such as activation function, balance hyper-parameter λ are all the same as Example 1. The number of training epochs is set to be 1×10^5, and the initial learning rates τ_θ, τ_η both start with 0.001 and shrink 10 times every 2×10^4 iteration. The test sets are chosen by a uniform mesh as (<ref>) In this simulation, we evaluate the training performance under various levels of measurement noise. The training process under Algorithm <ref> is recorded in Figure <ref>, which includes the training loss, and the relative errors for the reconstruction of q and u during training process for different noise levels (δ=0, 1%, 5%, 10%). Figure <ref> displays the exact potential function q, while the approximated q^* under different noise level measurements (δ=1%, 5%, 10%) are shown in Figure <ref>. We can see that the numerical reconstructions still satisfy the theoretical results even with zero initial condition and nonsmooth exact potential. This means that in numerical reconstructions we may release the conditions of Assumption <ref> to some extent. Now, we start to verify the convergence of the iteration in Theorem <ref> with different neural network architectures. In the experiments, for a fixed number of per-layer neurons NN=20, we compute the reconstruction errors for q versus the noise level δ using logarithmic re-scaling with various hidden layers NL=3, 4, 6. The results of these experiments are presented in the left of Figure <ref>. The theoretical estimate O(δ^1/3) is shown by the black line. Similarly, fixing hidden layer NL=6, the reconstruction errors for q under various per-layer neurons NN=10,15,20 are given in the right of Figure <ref>. From this figure, we see that the error could be bounded by the rate δ^1/3 to some extent, which supports the theoretical analysis in Theorem <ref>. In order to evaluate the effectiveness of the proposed scheme in terms of hyper-parameters and network structure, a series of experiments are conducted. Specifically, we examine the impact of the balance hyper-parameter λ in (<ref>) and the network structure, including the number of hidden layers and neurons. For a fixed number of hidden layers (NL=3) and a fixed number of neurons per-layer (NN=20), we compute the reconstruction errors (mean and standard deviation) for q and u using various values of λ, such as λ=10^j, -4≤ j≤ 1. The results of these experiments are presented in Table <ref>, which indicates that the performance of the inverse problem is highly dependent on the balance hyper-parameter λ. Specifically, we find that the relative reconstruction errors are optimized when λ is set to 10^-2. Furthermore, we observe that the reconstruction errors increase significantly as λ exceeds this optimal value. These results suggest that the selection of the balance hyper-parameter is critical to achieving good performance in this inverse problem. Next we experiment with various combinations of hidden layers and neuron numbers for the inverse problem using Algorithm <ref>. We set the dimension to d=2 and try a total of 16 combinations of hidden layers (NL) and per-layer neuron numbers (NN), with NL=3,6,9,14, NN=10,15,20,25. For each combination, we run Algorithm <ref> for 1× 10^5 iterations and record the relative errors (mean and standard deviation) for (q,u) in Table <ref>. 
The results in Table <ref> indicate that deeper (larger NL) and/or wider (larger NN) neural networks tend to yield lower reconstruction errors, although at a higher computational cost. However, we also observe that for a fixed neuron number NN, increasing the number of hidden layers NL too far, for example NL≥ 15, causes the algorithm to fail to converge as the number of iterations increases. This suggests that increasing the number of layers and/or neurons can enhance the representation capacity of neural networks, but it also introduces more parameters to train, leading to longer training times and potential overfitting of the representation. Example 3 (three-dimensional experiment): We also consider the following 3-dimensional experiment. We set the exact solution and the domain of equation (<ref>) as u(x,y,z,t)=t exp(x+y+z), (t,x,y,z)∈Ω_T=[0,1]^4. The exact potential q is given as q(x,y,z)=x+y+z. We again employ fully connected neural networks with NL=4, NN=20 for both u_θ and q_η. The number of training points is N=N_int+N_sb+N_tb+N_d=256+256×6+256+256=2304, randomly sampled from the four different training sets. The other network settings and hyper-parameters, such as the activation function, balance hyper-parameter λ, number of training epochs, and initial learning rate, are the same as in Example 1. The test set is chosen as a uniform mesh 𝒯:={(t_k,x_i,y_j,z_l): k,i,j,l=0,1,⋯,49}⊂Ω_T. Figure <ref> shows the exact potential function q and the relative errors versus iterations during training for different noise scales δ=0, 1%, 5%, 10%. Figure <ref> presents the potential functions q_η recovered from different noise levels and the corresponding pointwise absolute errors on the test set. The inversion results are satisfactory and reasonable overall. Finally, we conduct experiments to evaluate the robustness of the proposed scheme with respect to the network structure (number of hidden layers and per-layer neurons). More specifically, we run Algorithm <ref> with per-layer neuron numbers NN=5,10,20,25 for a fixed number of hidden layers NL=6, and with hidden layers NL=4,6,8,10 for a fixed per-layer neuron number NN=25. The reconstruction errors are presented in Figure <ref>. Again, larger NL and/or larger NN tend to yield lower reconstruction errors. In this example, for fixed NL=6, we also test per-layer neuron numbers NN≥ 30 and find that the reconstruction deteriorates: the relative error for q with NL=6, NN=30 is larger than that for NN=10,20,25 as the number of iterations increases. Therefore, more layers and/or neurons entail many more parameters to train, yield longer training times, and may result in overfitting of the reconstruction. § CONCLUDING REMARKS. In this work, a deep neural network-based reconstruction scheme has been proposed to solve an inverse potential problem in a parabolic equation. The proposed method has shown superior performance in high-dimensional settings. We prove the uniqueness of the inverse potential problem. A new loss function has been introduced, which includes regularization terms that depend on the derivatives of the residuals for both the partial differential equation and the measurement data. These regularization terms aim to address the ill-posedness of the inverse problem and enhance the regularity of the solution. Additionally, the mollification method has been employed to improve the regularity of the noisy data, as it reduces the perturbation errors caused by numerical differentiation of the noisy data.
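As an illustration of this mollification step, one common choice is to convolve the noisy measurement with a smooth kernel before any derivatives are taken. The sketch below uses a Gaussian filter as a stand-in; it is not necessarily the specific mollifier analysed in this work, and the 1% multiplicative noise model is only an example.

import numpy as np
from scipy.ndimage import gaussian_filter

def mollify(phi_noisy, grid_spacing, eps):
    """Smooth noisy measurement data by convolution with a Gaussian kernel.

    eps is the mollification radius; the Gaussian width is eps expressed in
    grid cells. Larger eps removes more noise but blurs the data more.
    """
    return gaussian_filter(phi_noisy, sigma=eps / grid_spacing, mode="nearest")

# Example: noisy measurement on a 50 x 50 mesh (Example 2 geometry).
x = np.linspace(0.0, 2.0, 50)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.exp(X + Y)                                   # exact measurement
phi_noisy = phi * (1 + 0.01 * np.random.randn(*phi.shape))
phi_smooth = mollify(phi_noisy, grid_spacing=h, eps=3 * h)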
Generalization estimates based on the conditional stability of linear inverse source problems and on the mollification error estimate for noisy data have been established, which provide a measure of the stability and accuracy of the proposed method for solving the inverse potential problem. Numerical experiments have been conducted to evaluate the performance of the proposed method and indicate the efficiency of the approach presented in this work. § ACKNOWLEDGMENTS Mengmeng Zhang is supported by the Foundation of Hebei University of Technology (Grant No. 282022550) and the Foundation of the Tianjin Education Commission Research Program (Grant No. 2022KJ102). Zhidong Zhang is supported by the National Natural Science Foundation of China (Grant No. 12101627).
http://arxiv.org/abs/2307.04394v3
20230710075623
Relieving the $S_8$ Tension: Exploring the Surface-type DBI Model as a Dark Matter Paradigm
[ "Xingpao Suo", "Xi Kang", "Huanyuan Shan" ]
astro-ph.CO
[ "astro-ph.CO" ]
APS/123-QED [email protected] Institute for Astronomy, School of Physics, Zhejiang University, Hangzhou 310027, China [email protected] Institute for Astronomy, School of Physics, Zhejiang University, Hangzhou 310027, China Purple Mountain Observatory, 10 Yuan Hua Road, Nanjing 210034, China [email protected] Shanghai Astronomical Observatory (SHAO), Nandan Road 80, Shanghai 200030, China Recent observations of weak gravitational lensing surveys indicate a smoother Universe compared to the predictions of the Cosmic Microwave Background (CMB). This is known as σ_8 tension or S_8 tension, where σ_8 represents the present root-mean-square matter fluctuation averaged over a sphere of radius 8 h^-1Mpc and S_8 ≡σ_8√(Ω_m/0.3). In this Letter, we investigate a kind of general Dirac-Born-Infeld (DBI) Lagrangian referred as surface-type DBI (s-DBI) model. We have found that, up to the linear order, the constraints on the s-DBI model with CMB from Planck2018 and low-redshift probes (WL and GC) yield S_8= 0.7685_-0.0066^+0.0077 and S_8=0.766_-0.0376^+0.0471, respectively, which are not only self-consistent but also consistent with the values derived from most low-redshift probes. Furthermore, we provide an outlook for searching the non-linear effects of this model, which could be helpful to resolve other issues by Cold Dark Matter on small scales. Relieving the S_8 Tension: Exploring the Surface-type DBI Model as a Dark Matter Paradigm Huanyuan Shan August 12, 2023 ========================================================================================= Introduction. –The ΛCDM model stands as the most widely accepted cosmological model, serving as the standard framework for Big Bang cosmology. It offers a simple yet effective description that agrees with most observations. However, with the development of theoretical and observational studies, some disagreement between different observations or between theory and observations have emerged, challenging the ΛCDM model and suggesting the need for new extended model or physics<cit.>. Among these challenges, σ_8, or S_8 tension is one of the most significant<cit.>. It shows that the low-redshift probes such as weak gravitational lensing (WL) <cit.>, galaxy clustering (GC) <cit.> as well as their combined analyses <cit.>, indicate a smoother Universe than the constraint by cosmic microwave background (CMB)<cit.>. Quantitatively, the structure growth parameter S_8 ≡σ_8 √(Ω_m/0.3) derived from low-redshift probes is systematically 2-3σ lower than the value obtained from the CMB<cit.>. Recently, a joint cosmological analysis of cosmic shear + galaxy-galaxy lensing + GC yielded a constraint of (Ω_m, S_8) = (0.305^+0.010_-0.015,0.766^+0.02_-0.014)(see <cit.>, hereafter referred as K1K-3×2pt). This result is deviated by 8.3 ± 2.6% relative to (Ω_m, S_8) = (0.3166±0.0084, 0.834±0.016) given by of Planck2018<cit.>. In this Letter, we present a novel dark matter model which offers a solution to the S_8 tension. Referred as the surface-type Dirac-Born-Infeld (s-DBI) model, it adopts an area functional form as the dark matter Lagrangian, which presents a special case within the broader class of general DBI models. Our study demonstrates that this model effectively addresses the S_8 tension by smoothing out the low-redshift structure while preserving the perturbation evolution at high redshifts. The surface-type DBI as a dark matter model. 
–Here we consider the Lagrangian ℒ ≡R/2κ + Λ_I + Λ_II√(1 + ∂_μϕ∂^μϕ) + ℒ_m and its corresponding action S = ∫ d^4x √(-g)ℒ, where g ≡(g_μν) represents the determinant of the space-time metric g_μν with signature [-1,1,1,1], R denotes the scalar curvature of the Levi-Civita connection, κ≡ 8πG with gravitational constant G, Λ_I is the vacuum energy or, equivalently, the cosmological constant, ℒ_m is the Lagrangian of normal matter including radiation and baryons, and Λ_II√(1 + ∂_μϕ∂^μϕ), with a constant Λ_II and scalar field ϕ, is the Lagrangian that we introduce to represent dark matter, which we refer to as the surface-type Dirac-Born-Infeld (s-DBI) model. It is important to note that our consideration of the s-DBI model is primarily from a mathematical standpoint: the terms ∫ d^4x √(-g) and ∫ d^4x √(-g)√(1 + ∂_μϕ∂^μϕ) can be viewed as formal area or volume functionals. Meanwhile, it is worth mentioning that the s-DBI model also possesses strong physical motivations. It can be interpreted as a general DBI model with a constant warp factor <cit.> or as a low-dimensional equivalent deduced in membrane theory <cit.>. For the Lagrangian given in Eq. (<ref>), applying the principle of least action leads to the Einstein field equation: R_μν - 1/2R g_μν = -κ( T_μν^(Λ_I) + T_μν^(Λ_II) + T_μν^(m)), where R_μν is the Ricci tensor, and T_μν^(Λ_I) = - Λ_I g_μν and T_μν^(Λ_II) = Λ_II(∂_μϕ∂_νϕ/√(1+∂_ρϕ∂^ρϕ) - g_μν√(1+∂_ρϕ∂^ρϕ)) represent the energy-stress tensors of dark energy and dark matter in this model, respectively. Now our focus turns to the s-DBI field. In a homogeneous Universe, according to Eq. (<ref>), this field can be treated as a perfect fluid characterized by the Equation of State (EoS) w = - Λ_II^2/ρ^2 = - 1/(1+(a_d/a)^6), where w ≡ P/ρ, with P and ρ denoting the pressure and mass density of the s-DBI field, respectively. Here, a is the scale factor normalized to unity at the present time, and a_d is a free parameter. As the Universe evolves from a=0 to a=∞, the s-DBI field transforms from the dark-matter phase (w=0) to the dark-energy phase (w=-1). The parameter a_d characterizes the scale at which this phase transition occurs and can be interpreted as the decay scale factor or decay parameter. Notably, this phase transition is rapid, with a power index of six. Using Eq. (<ref>), we can derive the density evolution of ρ with respect to a as follows: ρ(a) = [ρ_today/√(1+a_d^-6)]·√(a_d^-6+a^-6) ≡ ρ_s √(a_d^-6+a^-6). Moreover, considering a linear perturbation in the homogeneous Universe, the sound speed of the s-DBI field is given by c_s^2 = c_a^2 = -w, where c_s and c_a are the rest-frame and adiabatic sound speeds, respectively. The EoS and sound speed provide sufficient information to complete the scalar linear evolution equations of the Universe <cit.>. Dark matter with the above EoS and sound speed behaves, during the early stages (a≪ a_d), similarly to pressure-less standard cold dark matter, but at late stages (a close to a_d) it exhibits a non-negligible sound speed and pressure, which smooths out the structures that formed during the early stages. This may provide an explanation for the observed smoother Universe compared to the predictions from the CMB. In Fig. <ref>, we present the linear matter spectra at different redshifts with a_d = 3.8 as a reference. It is evident that the suppression relative to ΛCDM increases with time.
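For readers who wish to explore these background relations numerically, a small sketch of w(a), ρ(a), and c_s^2(a) is given below. It uses plain numpy; the values of a_d and ρ_s in the example are illustrative only.

import numpy as np

def w_sdbi(a, a_d):
    """Equation of state of the s-DBI field: w = -1 / (1 + (a_d / a)^6)."""
    return -1.0 / (1.0 + (a_d / a) ** 6)

def rho_sdbi(a, a_d, rho_s=1.0):
    """Density evolution: rho(a) = rho_s * sqrt(a_d^-6 + a^-6)."""
    return rho_s * np.sqrt(a_d ** -6 + a ** -6)

def cs2_sdbi(a, a_d):
    """Rest-frame (= adiabatic) sound speed squared: c_s^2 = -w."""
    return -w_sdbi(a, a_d)

a = np.logspace(-3, 0, 200)   # scale factor from a = 1e-3 to today
a_d = 3.8                     # reference decay parameter used in the text
print(w_sdbi(1.0, a_d))       # EoS today; close to 0 only when a_d >> 1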
Fig. <ref> shows the power spectra for different a_d values at z=0, along with the matter power spectrum of the ΛCDM model for comparison. As a_d tends towards infinity, the s-DBI model degenerates to ΛCDM. An initial estimate for a_d can be made based on the following considerations: if a_d≤ a_today = 1, the dark matter would have already decayed to the dark-energy phase, which contradicts observations. Therefore, a_d should be larger than one. However, a_d should not be so large that the model becomes indistinguishable from standard cold dark matter. According to <cit.> and <cit.>, any solution to the S_8 tension must be effective after z≈ 1. Hence, a_d should not exceed approximately ten. In summary, if the constraint yields a value outside the range [1, 10], it should be considered as providing insufficient support for this model. Note that the non-relativistic approximation of the s-DBI field is equivalent to the Chaplygin gas<cit.>, which has the EoS P= -A/ρ with a constant A>0. However, in the relativistic regime, we need to consider Eq. (<ref>) and the evolution equation for ϕ, (1/2∂_μlog( -g ) + ∂_μ) ∂^μϕ/√(1+∂_νϕ∂^νϕ)=0, which is a general minimal surface equation. Since the perturbation evolution of dark matter, particularly on large scales and in the early stages of our Universe, is dominated by the non-relativistic and linear part, we can ignore the non-linear and relativistic aspects of the theory. Constraints by the observations. –To demonstrate that the s-DBI model can alleviate the S_8 tension, we perform a series of constraints using different observational datasets. We begin with the of Planck2018, which combines the TT, TE, EE and low-E angular power spectra of the CMB to constrain the cosmological parameters<cit.>. This baseline analysis is advantageous as it avoids model-dependent non-linear effects that may introduce uncertainties <cit.>. For the low-redshift probes, we employ the WL shear catalog from KiDS1000<cit.> and the GC data from SDSS-III BOSS<cit.>. In our analysis, we treat the high-redshift probe (CMB) and the low-redshift probes (WL and GC) separately, instead of combining them, since consistent results from the two independent data sets provide a stronger test of the model. Additionally, we use the same data sets to constrain the ΛCDM model in parallel, serving as a control group for comparison. We modified the Boltzmann code <cit.> [<https://lesgourg.github.io/class_public/class.html>] to perform the perturbation calculations. On top of it, a public Markov Chain Monte Carlo (MCMC) sampler <cit.>[<https://baudren.github.io/montepython.html>] was used. All the MCMC samplings in our constraints are done with the Metropolis-Hastings algorithm coded in . To constrain this model with Planck2018 , we assume flat priors on some nuisance parameters in the Planck likelihood <cit.> and on the cosmological parameters {ω_b, Ω_s, h, A_s, n_s, τ_reio, a_d}, where Ω_s ≡ρ_s/ρ_cr≡ (8πG/3H_0^2)ρ_s is the reduced dark matter density in our model. The names and priors of the base cosmological parameters are listed in Table <ref>. For comparison, we also conducted a parallel ΛCDM constraint using a similar setup. Note that in all the analyses we assume zero spatial curvature (Ω_K=0), and our neutrino model is the same as in Planck2018, with two massless species and one massive species of 0.06 eV. The posterior distributions with Planck2018 are presented in Table <ref>.
The Markov chain used for the analysis satisfies the Gelman-Rubin convergence criterion with R-1 ≈ 10^-3, indicating good convergence. The posterior distributions for all parameters are approximately Gaussian, and the acceptance rate of the chain is around 0.22, indicating reliable convergence. Furthermore, our constraints on the ΛCDM model are consistent with the results reported by the Planck2018 collaboration <cit.>, validating the accuracy of our analysis. The results reveal slight differences in the mean values or best fits of common cosmological parameters between the s-DBI and ΛCDM. However, significant discrepancies are observed in the total matter density Ω_m and the structure growth parameter S_8. The s-DBI model yields values of (Ω_m, S_8) = (0.3072_-0.0055^+0.0071,0.7685_-0.0066^+0.0077) , which are in strong agreement with the results from K1K-3×2pt and clearly deviate from the result given by Planck2018 <cit.>. To assess the goodness of fit, we present the CMB temperature power spectrum with the best-fit model in Fig. <ref>. It is evident that the discrepancy between the two models is significantly smaller than the discrepancy between the theoretical predictions and observational data, giving χ^2_obs,LCDM=4.51× 10^-12, χ^2_obs,s-DBI= 4.38 × 10^-12 and χ^2_s-DBI,LCDM = 1.19× 10^-13, where χ^2_i,j is defined as χ^2_i,j≡∑_k=0^N-1(f_k^(i) - f_k^(j))^2/f_k^(j) with f_k^(i) the k-th entry of data set i with total length N. These results suggest that both the s-DBI and the ΛCDM model are strongly favored by Planck2018 data. Due to the similarity of the results, we did not include the plots of other components of the power spectra. It's also worth noting that the s-DBI model does not exacerbate the Hubble tension<cit.>. On the contrary, it relieves the Hubble tension by increasing the Hubble constant slightly higher to h≈ 0.68, compared with the result from ΛCDM with h≈0.67. After constraining the model with Planck2018 CMB power spectrum, we proceed with the combined constraint using low-redshift probes, i.e., WL and GC. We perform parallel constraints for both the s-DBI and ΛCDM models. The non-linear scale evolution of our model is not available, so we eliminate the non-linear effect reliably. For WL, we adopt the correlation function ξ_+(θ) and truncate the small scale portion (θ<10) using the KiDS cosmology analysis pipeline <cit.>. The validity of this truncation is ensured through a comparison between the correlation function data vector ξ⃗_+^NL and ξ⃗_+^L, which include the non-linear and linear effects, respectively. By increasing the angular variable θ, we verify that the relative distance between the two vectors ||Δξ⃗|| / ||ξ⃗_+^NL|| reaches a level of 10^-2, where Δξ⃗≡ξ⃗_+^L - ξ⃗_+^NL and ||· || ≡√(⟨·, ·⟩). Note that we discard the correlation function ξ_- since the effect of non-linear on ξ_- can hardly be removed. For GC, we focus only on the measurements of the baryon acoustic oscillations (BAO) and discard the redshift-space distortions. Due to the strict elimination of the non-linear effect, the constraint capacity on the five common base parameters becomes weaker. Hence, for both the s-DBI and ΛCDM models, we fix these parameters according to their respective best-fit values in Table <ref>. However, for the s-DBI model, we allow the decay parameter a_d to have a prior within the interval [2,6], as the constraint capability of WL + GC on a_d is unknown. For the ΛCDM model, the low-redshift data still prefer a lower value of S_8 compared to the Planck2018 . 
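Referring back to the goodness-of-fit comparison of the temperature power spectra above, the χ²_{i,j} statistic used there can be written compactly as below. This is a sketch assuming numpy arrays of binned spectra on a common binning; the variable names are illustrative.

import numpy as np

def chi2_between(f_i, f_j):
    """chi^2_{i,j} = sum_k (f_k^(i) - f_k^(j))^2 / f_k^(j) for two binned spectra."""
    f_i, f_j = np.asarray(f_i, float), np.asarray(f_j, float)
    return np.sum((f_i - f_j) ** 2 / f_j)

# Hypothetical usage with binned TT spectra (same binning for all three):
# chi2_obs_lcdm  = chi2_between(cl_tt_obs,  cl_tt_lcdm)
# chi2_obs_sdbi  = chi2_between(cl_tt_obs,  cl_tt_sdbi)
# chi2_sdbi_lcdm = chi2_between(cl_tt_sdbi, cl_tt_lcdm)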
The WL+GC constraint for ΛCDM yields (Ω_m, S_8) = (0.299_-0.0105^+0.011, 0.770_-0.035^+0.0371), which is consistent with the results from K1K-3×2pt, with differences of about 0.6σ for Ω_m and 0.1σ for S_8. However, as shown in Fig. <ref>, the tension between the low-redshift probes and the CMB still persists. For the s-DBI model, on the other hand, the S_8 tension does not appear. As depicted in Fig. <ref>, the constraint on the WL+GC data gives (Ω_m, S_8) = (0.305_-0.0127^+0.0107, 0.766_-0.0376^+0.0471), which is highly consistent with our constraint using the Planck2018 . Note that the area of the credible region is larger than that of ΛCDM due to the degeneracy between Ω_s and a_d. In conclusion, our analysis reveals that the S_8 tension persists in the ΛCDM model even when only non-linear-free data are considered. This suggests that modifying the non-linear model, such as <cit.> or <cit.>, is unlikely to resolve the tension effectively. On the other hand, the s-DBI model, within the scope of the data sets we have considered, successfully alleviates the S_8 tension. Non-linear effect and outlook. –A key issue is whether small-scale structures such as dark matter halos can form in the s-DBI model. To answer this question, we note that the non-relativistic approximation of the s-DBI field, the Chaplygin gas, is barotropic, so we can introduce an effective potential h ≡ - ∫_ρ^∞ dP(ρ')/ρ' = - (1/2)Λ_II^2/ρ^2 to substitute for the effect of pressure. We include this external potential in the N-body simulation software <cit.> by modifying the implementation of the PM algorithm. Setting the cosmological parameters Ω_m, Ω_vac and h to the best-fit values of the s-DBI model from Table <ref>, we carry out the simulation with 512^3 particles in a cubic box with an edge length of 100 Mpc. A parallel simulation for ΛCDM is also performed. The simulations reveal that dark matter halos can indeed form in the s-DBI model. Furthermore, we find that the differences between the s-DBI and ΛCDM models are tiny at redshifts z>1. However, as the redshift z approaches zero, the s-DBI model predicts a lower non-linear power spectrum than ΛCDM. The "bias" between the power spectra of the two models, defined as b_M ≡√(P_s-DBI/P_Λ CDM), is shown in Fig. <ref>. Note that the leading-order bias between the observed and simulated power spectra, denoted as b_1 ≡√(P_gg/P_mm), can range from about 1.4 to 3.5<cit.>. In comparison, the bias b_M ≈ 0.9 is close enough to unity, suggesting that our model can fit the observed galaxy power spectrum with a minor adjustment of b_1. Besides, in the s-DBI simulation we found that some small dark matter halos are dissolved by the external pressure, suggesting that our model may hold promise in addressing other inconsistencies related to cold dark matter, such as galaxies lacking dark matter<cit.>, the cuspy halo problem<cit.>, and the missing dwarf galaxy problem<cit.>. However, a rigorous numerical analysis is necessary to fully investigate these issues. Moreover, a more comprehensive understanding of the non-linear effects is crucial for further constraints using various cosmological probes. Given the complexity of this topic, we leave it to future work.
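For reference, the bias b_M defined above can be estimated directly from the matter power spectra measured in the two simulation runs. The sketch below assumes the binned spectra have already been measured on a common set of k bins; the variable names are illustrative.

import numpy as np

def power_spectrum_bias(pk_sdbi, pk_lcdm):
    """b_M(k) = sqrt(P_sDBI(k) / P_LCDM(k)) on a common set of k bins."""
    return np.sqrt(np.asarray(pk_sdbi) / np.asarray(pk_lcdm))

# Hypothetical usage at z = 0, with spectra measured from the two 100 Mpc boxes:
# b_M = power_spectrum_bias(pk_sdbi_z0, pk_lcdm_z0)
# print(b_M.mean())   # ~0.9 according to the text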
Xingpao Suo and Xi Kang acknowledge the support from the National Key Research and Development Program of China (No. 2022YFA1602903), the NSFC (No. 11825303, 11861131006), the science research grants from the China Manned Space project with No. CMS-CSST-2021-A03, CMS-CSST-2021-A04, the Fundamental Research Funds for the Central Universities of China (226-2022-00216), and the start-up funding of Zhejiang University. Huanyuan Shan acknowledges the support from the NSFC of China under grant 11973070, the Key Research Program of Frontier Sciences, CAS, Grant No. ZDBS-LY-7013, the Program of Shanghai Academic/Technology Research Leader, and the science research grants from the China Manned Space Project with No. CMS-CSST-2021-A01, CMS-CSST-2021-A04. We thank Joe Zuntz and Benjamin Stölzner for helpful discussions.
http://arxiv.org/abs/2307.06237v1
20230712152709
Physical parameters of stellar population in star formation regions of galaxies
[ "A. S. Gusev", "F. Sakhibov", "O. V. Egorov", "V. S. Kostiuk", "E. V. Shimanovskaya" ]
astro-ph.GA
[ "astro-ph.GA" ]
Astrometric Calibration and Performance of the Dark Energy Spectroscopic Instrument Focal Plane H. Zou August 12, 2023 =============================================================================================== We present the results of a study of young unresolved stellar groupings (clusters, OB associations, and their complexes) associated with H ii regions, based on the coupling of spectroscopic, photometric and Hα spectrophotometric observations of star formation regions. Along with our own observations, we use a part of the spectroscopic and Hα data from the literature and open databases. The study is based on the catalogue of 1510 star formation regions with ages ∼10-20 Myr in 19 spiral galaxies, compiled by us earlier. We study the morphology of stellar groupings and their relation with the associated Hα emission region. Extinctions, gas chemical abundances, and sizes of star formation regions are measured. Using numerical SSP models computed for metallicities fixed from observations to intrinsic colours of the studied star formation regions, we estimated ages and masses of stellar population of 400 young stellar groupings. Different relations between observational and physical parameters of the young stellar population in star formation regions are discussed. H ii regions – galaxies: ISM – galaxies: star clusters: general – galaxies: star formation § INTRODUCTION To understand processes of modern star formation in galaxies and to study the early evolution of star clusters, OB associations and complexes, one needs to estimate physical and chemical parameters of young stellar groupings in star formation regions, including their age, mass, size, and metallicity. A galaxy star-forming region is a single mixture of newly formed star clusters, ionized gas, and clouds of molecular gas and dust. Star formation regions form a hierarchical structure on scales from a few to several hundreds of parsecs. The largest star formation regions are star complexes with typical sizes of about 300-700 pc <cit.>. The diameters of the largest complexes reach 2 kpc <cit.>. These complexes are the largest coherent groupings of stars, clusters, and associations which are connected by the unity of the origin from the same H_2 supercloud <cit.>. On small scales, there are star clusters with sizes from a few parsecs which have been formed within dense cores of giant molecular clouds (GMCs). OB associations and stellar aggregates with sizes from ∼40 to ∼200 pc occupy intermediate scales of star formation. This paper focuses on studying the stellar groupings in star formation regions of rather distant galaxies (see Table <ref>). The angular resolution of our observations ∼1-1.5 arcsec corresponds to the linear resolution 30-40 pc for the nearest galaxies NGC 628, NGC 5585, and NGC 6946, and 350-400 pc in the faraway galaxies NGC 783 and IC 1525. It does not allow us to separate the young star clusters and OB associations even in the nearest galaxies: smaller star clusters are observed as star-like objects with diameters of 30-40 pc. In more distant galaxies, we can observe star formation regions with sizes of 200–300 pc and larger, i.e. star complexes. Star clusters, embedded in star formation regions, are dense aggregates of young stars, formed at essentially the same time in the same region of space <cit.>. In our previous paper <cit.>, we found that the minimal masses of the studied star clusters in the nearest galaxies NGC 628 and NGC 6946 are ≈1·10^4 M_⊙. 
According to <cit.>, star clusters that are more massive than ∼10^4 M_⊙ are determined as young massive clusters. <cit.> showed that the youngest (age ≤10 Myr) clusters and associations are poorly separated. Thus, most of the objects studied here are young massive clusters (associations) or complexes of young star clusters. Hereinafter, we will call the studied stellar populations in star formation regions the 'stellar groupings'. It shall be understood that this common term encompasses different types of young objects, from giant complexes of clusters and stars to OB associations and star clusters. A star formation region goes through several stages of evolution during first tens Myr of its life, from the stage when young stars are completely obscured by their dusty gas cocoons to the stage of a young star cluster with no evidence of the ionized gas <cit.>. <cit.> developed an evolutionary classification scheme of star clusters based on Hubble Space Telescope (HST) observations of M83. Star clusters become visible in optical bands since the age of ∼2.5 Myr <cit.>. The authors showed that in clusters with ages between 2 and 4 Myr, the ionized gas is observed in the same place as the cluster stars. Clusters of ages ≈4-5 Myr are surrounded with small H ii bubbles whose radii are equal to 7-20 pc <cit.>. The phase of the partially embedded cluster blowing a bubble of gas is rather short, it lasts for about 1-3 Myr <cit.>. Star clusters with ages of > 5 Myr are surrounded by a large ionized gas bubble. The radii of the bubbles are larger than 20 pc. The ionized gas is not detected around star clusters of ages > 10 Myr. Figure <ref> illustrates this evolutionary sequence on the sample of our young unresolved objects. A study of the earliest stages of star clusters, OB associations and their complexes and estimation of physical parameters therein are difficult tasks because of the impact of gas and dust on the observations. Perhaps the most difficult task is to estimate ages of stellar populations. Usually, 2D or 3D spectroscopic or photometric data, or their combination are used for estimating ages of unresolved extragalactic star clusters. The spectroscopic method involves both estimation of spectral age indicators (e.g., equivalent widths EW(Hα) and EW(Hβ), [O iii]/Hβ ratio, He ii emission lines, etc.) and a direct comparison of spectra with synthetic spectra of different ages <cit.>. The photometric method involves comparison of multicolour photometry data for clusters with predictions of evolutionary synthesis models <cit.>. Comparison shows that ages, evaluated for the same star clusters using data of spectral and photometric observations are in a fairly good agreement <cit.>. However, <cit.> who studied resolved stellar populations in star formation regions of M83 found that correlation between the ages of star clusters determined from individual stars in the region and the ages obtained via integrated colours using a standard photometric method is not very strong. This discrepancy, in addition to the reasons indicated by <cit.> (Hα emission impact, selection effects for stars, overlay of isochrones for 1 and 3 Myr), we tend to explain also by the use of a continuously populated IMF and the presence of a small but significant range of ages of stars in the clusters. Spectroscopic techniques usually provide age estimates <cit.>, however the method allows determination of ages for a limited number of objects. 
One of the main challenges in photometric age estimation is accounting for the effect of gas and dust on observations. A lack of independent data on the chemical abundance and extinction in the clusters leads to 'age–extinction' and 'age–metallicity' degeneracies in the comparative analysis with the theoretical evolutionary models of star clusters <cit.>. Moreover, continuum and line emissions from the ionized gas are strong enough to affect the integrated broad-band photometry <cit.>. Age and mass estimates based on long-slit spectroscopic observations are correct only if the radiation of stars, which form the continuum of the spectrum, spatially coincides with the ionized gas emission. This situation is observed in star formation regions younger than ≈5 Myr <cit.>. The combination of optical photometric, Hα spectrophotometric and spectroscopic observations provides us with the necessary data to separate spatially the radiation from gas and stellar components. This makes it possible to take into account the contribution of the gas to the optical photometric bands and to find objects in which the light extinction for the stars is equal to the light extinction for the emission of ionized gas <cit.>. Note that for stellar clusters (simple stellar population (SSP) systems) with masses less than 5·10^3-10^4 M_⊙, stochastic effects in the discrete random population IMF start playing a key role and applying the 'standard' mode of continuous IMF models will not be correct <cit.>. According to <cit.>, IMF discreteness significantly affects the luminosity and colour of the cluster. The strength of this effect depends on the wavelength and is particularly strong in IR wavelengths, where the effect of discreteness is noticeable up to M_ cl∼ 10^7 M_⊙ masses, rather than 10^4 M_⊙ as at optical wavelengths in the V band. Thus, the estimation of physical parameters of stellar groupings, using photometric methods with a continuously populated IMF is correct only for massive star clusters. This paper presents the conclusive part of our project of comprehensive study of star formation regions in the selected 19 spiral galaxies. The results of our own spectroscopic observations of 103 H ii regions in eight galaxies were presented in the previous papers <cit.>. In <cit.> we estimated physical parameters of stellar population in H ii regions using a combination of spectroscopic <cit.> and photometric observations (see Table <ref>). We derived properties of extracted emission spectra of H ii regions and estimated their extinctions, chemical abundances, and the relative contributions of nebular continuum and emission lines to the total observed flux. These data were used to obtain the luminosities and colour indices of stellar groupings, corrected for extinction and nebular emission contribution, i.e. 'true' (intrinsic) colours and luminosities of stellar population. As a result, we were able to estimate ages and masses for ≈60% of clusters (complexes) of our sample. Extinctions for ≈35% of objects were overestimated. This is due to the fact that the key assumption of our method, the equality of the light extinction for stars, A( stars), and the light extinction for ionized gas, A( gas), is not satisfied for a significant number of young clusters. The fact that the extinction in a gaseous medium is up to 2 times higher than the stellar one is a well-known fact for a long time <cit.>. 
<cit.> empirically investigated the discrepancy between the extinction of gas emission and the extinction of stellar light in the giant H ii regions, i.e. in star formation complexes in galaxies M33, LMC and NGC 2403. They found that, in most cases, A( Balmer)≡ A( gas) is higher than A( stars). Such a result can be explained in terms of the quite uneven distribution of obscuring material <cit.>. So, using correction of the observed colours of stars in the star formation complexes for extinction in the Balmer lines can result in a bias in the colours of the stars in the complexes toward the blue part of the spectrum, thus distorting the parameters of star formation derived from these colours. Objects with A( gas) A( stars) have a high nebular emission contribution in the U, B, V bands (>40%) and an extremely high EW(Hα)>1500Å. Visually, in these regions a spatial displacement between the photometric centres of the stars in the broad bands and of the gas emissions in Hα line is observed (see the third image from left in Fig. <ref>). Later, we presented the catalogue of 1510 young stellar groupings associated with H ii regions in 19 galaxies in <cit.> using multicolour photometric and Hα (Hα+[N ii]) spectrophotometric observations. This catalogue is available in electronic form[<http://lnfm1.sai.msu.ru/g̃usev/sfr_cat.html>]. In the same paper we modified our extinction and age estimation techniques using Hα morphology as an additional indicator. This method was developed in <cit.>. The goal of this study is to estimate the physical parameters, such as mass and age, of the stellar population in the star formation regions of the galaxies of our sample, using additional spectral data for H ii regions taken from the literature and open databases. Most of the stellar groupings studied in this paper have an age of ∼1-10 Myr, i.e. they are objects with Hα emission, visible at optical wavelengths. In addition, we studied young star clusters with colour indices typical for stellar populations younger than 10 Myr without visible Hα emission including the cases for which Hα data are absent for the galaxy. These young objects may be older than 10 Myr (see Section <ref> for details). The sample of selected galaxies is based on our UBVRI photometric survey of 26 galaxies <cit.>. Numerous star formation regions are observed visually in 19 of them. The sample is presented in Table <ref>, where data on the Galactic extinction, A(B)_ Gal, are taken from the NED[<http://ned.ipac.caltech.edu/>] database, and the other parameters are taken from the LEDA[<http://leda.univ-lyon1.fr/>] database <cit.>. The morphological type of the galaxy is listed in column (2). The apparent and absolute B magnitudes are presented in columns (3) and (4). The inclination and position angles are given in columns (5) and (6). The isophotal radii in units of arcmin are shown in column (7). The adopted distances are given in column (8). The Galactic extinction and the dust extinction due to the inclination of a galaxy are listed in columns (9) and (10). The number of identified star formation regions in the galaxy is shown in column (11). A presence of photometric (Ph) and Hα spectrophotometric observations of the galaxies, as well as spectrophotometric and spectroscopic (Sp) data for the star formation regions and the references to them are given in column (12). The adopted value of the Hubble constant in the study is equal to H_0 = 75 km s^-1Mpc^-1. 
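Several catalogue quantities used later in this work, such as the deprojected galactocentric distances of the star formation regions, follow from the inclination and position angles listed in Table <ref>. A sketch of the standard deprojection is given below; it assumes the offset and angle conventions stated here, and the exact sign conventions adopted in the catalogue may differ.

import numpy as np

ARCSEC_TO_RAD = np.pi / (180.0 * 3600.0)

def deprojected_radius(dx_w, dy_n, pa_deg, incl_deg, dist_mpc):
    """Deprojected galactocentric distance in kpc.

    dx_w, dy_n : offsets from the galaxy centre in arcsec (west and north
                 positive, as in the catalogue).
    pa_deg     : position angle of the major axis, degrees (north through east).
    incl_deg   : inclination of the galactic disc, degrees.
    dist_mpc   : adopted distance, Mpc.
    """
    pa, incl = np.radians(pa_deg), np.radians(incl_deg)
    # Offsets in the (east, north) frame; east is the negative of west.
    dx_e, dy = -np.asarray(dx_w), np.asarray(dy_n)
    # Rotate so that x_maj runs along the major axis of the disc.
    x_maj = dx_e * np.sin(pa) + dy * np.cos(pa)
    x_min = dx_e * np.cos(pa) - dy * np.sin(pa)
    r_arcsec = np.hypot(x_maj, x_min / np.cos(incl))
    return r_arcsec * ARCSEC_TO_RAD * dist_mpc * 1.0e3   # kpc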
§ DATA AND METHODS USED The algorithm and techniques for data reduction, criteria for selecting star formations regions, and evolutionary synthesis models used were described in detail in our previous paper <cit.>. In this paper, we describe only the new data and models used, as well as the data considered earlier very briefly. §.§ Observational data Most part of our own observations was published earlier (see Table <ref>). Additionally, we used FITS images of the galaxies which were taken from the NED database, as well as spectroscopic data from the Sloan Digital Sky Survey DR13[<http://www.sdss.org/dr13/>] <cit.> and from the literature (see references in Table <ref>). §.§.§ Photometric and spectrophotometric Hα images Earlier we carried out photometric observations of 19 galaxies, studied here, and published the analysis of the photometric data <cit.>. Spectrophotometric Hα observations of NGC 3184 and Hα+[N ii] observations of NGC 628, NGC 6946, and NGC 7331 were described in our previous papers <cit.>. FITS images, obtained with narrow-band interference Hα+[N ii] or Hα filters for another five galaxies from our sample, were found in the NED database (see references in Table <ref>). We used the Hα+[N ii] FITS image of NGC 3726 obtained by <cit.>. Parameters for absolute calibration of Hα+[N ii] flux to units of erg s^-1cm^-2 were found in descriptors of the FITS file. Absolute calibration of the FITS image of NGC 5585 from <cit.> was done according to the data in descriptors of the FITS file. Additionally, we checked the calibration using integrated Hα+[N ii] fluxes of NGC 5585 measured in <cit.> and <cit.>. For a study of H ii parameters in NGC 2336, we used the Hα+[N ii] FITS image obtained in <cit.>. FITS file descriptors and results of integrated Hα+[N ii] photometry from <cit.> were used for absolute flux calibration. For two galaxies (NGC 266 and NGC 6217) we used Hα FITS images published in <cit.>. Their absolute calibrations were carried out using the results of integrated Hα photometry of <cit.> for NGC 266 and NGC 6217, and integrated spectrophotometry of <cit.> for NGC 6217. Note that the absolute calibration uncertainty of NGC 266 and NGC 6217 can reach ≈20-25%. This accuracy, however, is sufficient for estimates of nebular emission contributions in total fluxes from star formation regions in galaxies <cit.>. §.§.§ Spectroscopic data for star formation regions The results of spectroscopic observations of 103 H ii regions in eight galaxies have already been published in our previous papers <cit.>. An explicit description of the observational data reduction is given in <cit.>. Additionally, in this paper we used data of emission-line spectrophotometry, integral field spectroscopy, and long-slit spectroscopy for H ii regions in the galaxies in our sample from the literature and SDSS (see notes in Table <ref>). Among 19 galaxies of our sample, we found spectral data for H ii regions in NGC 628, NGC 3184, NGC 6946, and NGC 7331 in the literature. Some H ii regions in NGC 3184, NGC 3726, NGC 4136, NGC 5351, and NGC 5585 were observed in the SDSS project. At the first stage, we cross-identified the H ii regions observed by different authors. Then these regions were identified with the objects from our catalogue (see Sections <ref>, <ref>). A description of calculation of the extinction coefficient, c(Hβ), from the measured Balmer decrement was presented in <cit.>. 
Oxygen abundances, O/H, in H ii regions were obtained using the reddening-corrected fluxes in the main emission lines [O ii]λ3727+3729, [O iii]λ4959+5007, [N ii]λ6548+6584, and [S ii]λ6717+6731. Because different sets of emission lines were measured in different studies, we used four different empirical calibration methods. In order of priority, these are S-calibration <cit.>, R-calibration <cit.>, O3N2 calibration <cit.>, and H ii-ChiMistry method <cit.>. We also took our O/H data from <cit.> measured using NS-calibration <cit.>. For objects, observed in several studies, we took weighted averages of the measured abundances, extinctions, and Hα equivalent widths with weights inversely proportional to their relative measurement uncertainties. We used all available data (our, SDSS, and from the literature) to find the mean values, with one exception. Spectrophotometric measurements of <cit.> for NGC 628 and NGC 6946 were used only for objects which were not observed by any other authors. This is because the spectrophotometry results of <cit.> are not as accurate as spectroscopic ones for individual objects <cit.>. We also do not include recently published measurements made for H ii regions in NGC 628 within inner 1.5 effective radii based on IFU spectroscopy with MUSE/VLT <cit.>, but compare the results in Section <ref>. §.§ Sample selection The procedure of selection of young stellar groupings was described in detail in <cit.>. We note briefly that the preliminary selection of bright star formation sources from B and Hα images of galaxies was carried out with the use of the SExtractor[<http://sextractor.sourceforge.net/>] program. We searched stellar groupings associated with H ii regions and young star clusters with colour indices corresponding to stellar populations younger than 10 Myr. The final selection criteria for the objects, included in our catalogue, have been explained in <cit.>. The selected young stellar groupings must satisfy one of the following conditions: (i) those, which form close pairs with the nearest H ii regions: the angular separation between photometric centres of the stars in B band and the gas emission in Hα is less than 1.5 arcsec (in 9 galaxies with obtained Hα+[N ii] or Hα images), (ii) those, for which the emission spectra are measured (in 13 galaxies with obtained spectroscopic data), (iii) those, which have corrected for the Galactic extinction and inclination effects (U-B)_0^i<-0.537 mag (in 16 galaxies for which U images have been obtained), (iv) those, which have (B-V)_0^i<-0.043 mag (in NGC 4136, NGC 5605, NGC 5665, and outer part of NGC 7331). Remark that the ambiguity of age from U-B and B-V values exists for stellar systems with ages between 6 and 40 Myr. Stellar groupings with (U-B)_0^i colour indexes ranging from -0.75 to -0.5 can be both older and younger than 10 Myr <cit.>. §.§ Young stellar groupings and gas-to-stars morphology As we noted in <cit.>, the technique, that we use to determine the age and mass of the stellar component of star formation regions, has some limitations. The spatial displacement between photometric centres of stars and of gas emissions in star formation regions leads to incorrectly estimated extinction and overestimated contribution of nebular emission in optical broad-bands. Physical parameters (age and mass) of stellar population in star formation regions can be correctly retrieved only provided that the optical radiation from stars spatially coincides with ionized gas emission. 
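As an illustration of this coincidence requirement and of the association criterion (i) above, the check reduces to the angular separation between the B-band photometric centre and the Hα emission centre. A minimal sketch is given below; the function names and default thresholds are illustrative (the strict coincidence threshold for star-like profiles is specified in the next paragraph).

import numpy as np

def angular_separation(dx_b, dy_b, dx_ha, dy_ha):
    """Plane-of-sky separation (arcsec) between the B-band photometric centre
    and the Halpha emission centre, both given as offsets from the galaxy centre."""
    return np.hypot(dx_b - dx_ha, dy_b - dy_ha)

def is_associated(sep_arcsec, assoc_limit=1.5):
    """Condition (i): the stellar grouping and the H II region form a close pair."""
    return sep_arcsec < assoc_limit

def stars_gas_coincide(sep_arcsec, coincide_limit=0.5):
    """Stricter requirement used when retrieving physical parameters:
    the stellar and gas centres coincide (star-like profiles)."""
    return sep_arcsec <= coincide_limit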
A typical sample of such regions is shown on the second image from left in Fig. <ref>. If both optical broad-band and Hα images of a galaxy are available, it is possible to detect a presence or an absence of the spatial displacement by direct comparison of the positions of photometric centres in these bands. For galaxies, that were not observed in Hα line, we can suspect the displacement by spectral features, such as an extremely large EW(Hα) (>1500Å), an extremely high nebular emission contribution (>40%) in the shortwave optical bands, an extremely large Balmer decrement, giving unrealistically 'blue' colour indices <cit.>. For objects with a star-like profile, we accept that the photometric emission centers of stars and gas coincide if the distance between them in a plane does not exceed 0.5 arcsec <cit.>. In addition to the objects where the optical emission from stars coincides with the Hα-emission from ionized gas, we can obtain physical parameters of young star clusters without a visible Hα emission. These star clusters have the extinction that is close to zero <cit.>, thus we can assume A( stars)=A_ Gal+A_ in, where A_ Gal is the Galactic extinction and A_ in is the dust extinction due to the inclination of a galaxy (see columns (9) and (10) in Table <ref>). A sample of such regions is shown on the right image in Fig. <ref>. We have classified the young stellar groupings studied here as follows: class 2 – optical radiation from stars coincides with ionized gas emission (second image from left in Fig. <ref>); class 1 – photometric (stellar) radiation centre is displaced from gas emission centre (third image from left in Fig. <ref>); class 0 – no gas emission within the area of optical radiation from stars (forth image from left in Fig. <ref>); class –1 – no Hα data. §.§ Comparison with synthetic models We described in detail the algorithm for correction of observational photometric fluxes for contribution from nebular continuum and emission lines in <cit.>. To briefly summarize: we determined the relative contributions of the stellar and nebular continua, and gas emission lines to the total observed flux in the UBVRI bands following <cit.>. We used the emission line ratios for every star formation region in our sample (see Section <ref>) to derive electron temperatures and metallicities in the H ii regions. The fluxes for the non-measured emission lines were calculated based on the derived estimations of the emission measures, using the equations given in <cit.> and <cit.>. A total of 18 main emission lines were taken into account. The contribution from the gas line emission was computed through the summation of the emission line intensities in a given photometric band. The relative contribution of the nebular continuum was estimated using the equations for the continuum emission near the limits of the hydrogen series emission, two-photon and free-free emissions, given in <cit.>. We used spectrophotometric Hα (Hα+[N ii]) fluxes (see Section  <ref>) for the absolute calibration of the emission line spectroscopic fluxes. For two galaxies without Hα photometry, NGC 4136 and NGC 5351, we multiplied the absolute fluxes, obtained within the SDSS aperture, by a factor, calculated as the ratio of the flux in the R band within the aperture that we used for every H ii region to the flux within the area of the SDSS aperture. The reliability of this procedure was discussed in <cit.>. Obtained 'true' photometric parameters of the star groups, i.e. 
colours and magnitudes corrected for extinction and for the gas contribution, were compared with SSP evolutionary sequences computed for a Salpeter IMF with a mass range from 0.15 to 100 M_⊙. We used the database of stellar evolutionary tracks and isochrones provided by the Padova group <cit.> via the online server CMD[<http://stev.oapd.inaf.it/cgi-bin/cmd/>]. We used the sets of stellar evolutionary tracks from version 2.8 <cit.>. For young massive clusters with M>1·10^4 M_⊙, models in the Standard mode have been developed adopting the technique described in <cit.>. The Standard mode reproduces the properties of standard SSP models with a continuously populated IMF, while the Extended mode makes it possible to take into account the influence of a randomly populated IMF. As we discussed earlier in <cit.>, the multiple structure of unresolved star complexes does not affect their integrated colour indices and therefore the estimation of the age of a stellar population. Multicolour photometry provides a useful tool for constraining masses and ages of stellar populations in star formation regions. Here we use the method of minimisation of the 'observed minus computed' (O-C) parameter O-C=[[(U-B)_obs-(U-B)_model]^2 + [(B-V)_obs-(B-V)_model]^2 + [M(B)_obs-M(B)_model]^2]^1/2, described in <cit.>, where by 'observed parameters' we mean the 'true' colours U-B and B-V and the B luminosities. We did not use the V-R and V-I colour indices because, in the case of star formation regions, the R and I fluxes are weakly sensitive to changes in age, and actual observational errors lead to large uncertainties. Moreover, at these wavelengths the stochastic effects of the stellar luminosity function are noticeable up to masses of M_cl∼ 10^7 M_⊙, rather than 10^4 M_⊙ as at optical wavelengths in the U, B, and V bands <cit.>. The exception is a few objects without gas emission within the area of optical radiation from stars, for which observations in the U band are unavailable. For them, we used the 'true' colours B-V and V-R. The stellar population models, computed for Z obtained independently from observations, are presented in the form of a grid of models for a broad range of variation of the parameters t(i) and M(j), where the indices i, j are the numbers of rows and columns in a two-dimensional grid of physical parameters. The table step h of the log t parameter variation is 0.05 dex. The initial table step of log M for the first iteration depends on the range of luminosity variations of star formation regions in a given galaxy: h_log M=(log M_max-log M_min)/N, where N is the number of evolutionary sequences simulated for N values of cluster masses within a given mass interval. For every node (i,j), the value of the (O-C)_i,j parameter was calculated. The second step is the search for the grid node in which the (O-C)_i,j parameter has its minimum value. Note that the value of the parameter (O-C)_i,j corresponds to the distance of the investigated stellar cluster from the grid node (i,j) in the three-dimensional photometric space of (U-B, B-V, M_B); the minimum value of (O-C)_i,j corresponds to the distance between the object under study and the nearest node in this space. This lowest parameter value (O-C)^min_i,j had to be less than the errors of the observed colours and luminosities. Otherwise, the next iteration was carried out, in which the mass interval was halved and centred according to the results of the previous iteration.
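A schematic implementation of one iteration of this grid search might look as follows. The ssp_model function standing in for the Padova SSP grid lookup is a hypothetical placeholder, and the refinement loop is only sketched in a comment.

import numpy as np

def oc_distance(obs, model):
    """O-C parameter: distance in the (U-B, B-V, M_B) space, as defined above."""
    return np.sqrt((obs["UB"] - model["UB"]) ** 2
                   + (obs["BV"] - model["BV"]) ** 2
                   + (obs["MB"] - model["MB"]) ** 2)

def best_grid_node(obs, log_t_grid, log_m_grid, ssp_model):
    """Scan the (age, mass) grid and return the node with the minimum O-C.

    ssp_model(log_t, log_m, Z) -> dict with keys 'UB', 'BV', 'MB' is a
    placeholder for the SSP grid evaluated at the observed metallicity.
    """
    best = (None, None, np.inf)
    for log_t in log_t_grid:
        for log_m in log_m_grid:
            oc = oc_distance(obs, ssp_model(log_t, log_m, obs["Z"]))
            if oc < best[2]:
                best = (log_t, log_m, oc)
    return best

# Iterative refinement (schematic): if the minimum O-C still exceeds the
# photometric errors, halve the mass interval around the current best node,
# rebuild log_m_grid, and repeat (2-4 iterations in practice).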
Thus, beginning with the second iteration, for each stellar grouping under study, a particular grid of models was simulated according to the input observational data (M(B)_ obs, (U-B)_ obs, (B-V)_ obs, Z). With each successive iteration, the density of the model grid was increased. The number of iterations per object needed to achieve the requirement (O-C)^ min_i,j less than the observational error of photometric quantities, Δ_U-B, Δ_B-V, Δ_M_B, ranged from 2 to 4. The values of age t(i) and mass M(j) corresponding to the selected node (i,j) with the minimum value of parameter (O-C)^ min_i,j, were taken as the age t_ cl and mass M_ cl for the stellar grouping under study. Simultaneous constraint using Eq. <ref> and true colours and luminosities in the U, B, V bands, which are most sensitive to changes of age and mass, with a grid of models simulated for metallicity, which is fixed from independent observations, helps to avoid ambiguities associated with degenerations of 'metallicity-age', 'absorption-age', 'luminosity-mass' and ambiguity in the estimates of physical properties, age t_ cl and mass M_ cl, within the adopted model. The model grid of Extended mode was constructed using Monte Carlo simulations by which random variations of the discrete IMF depending on a given model star cluster mass were generated. For every given cluster mass, a discrete IMF was generated using a pseudo-random number generator. Note that with this random sampling of the discretely populated IMF for a fixed mass value of the stellar grouping, the number of stars, N_ stars in that grouping is also fixed. Then, using this randomly chosen discrete IMF, we have calculated an evolutionary sequence of 68 models of Extended mode for every given cluster mass with a log t-step of 0.05 in the interval log t = 5.9 - 9.3 and metallicity Z fixed from the observations. For each calculation of a randomly sampled discrete IMF, a random seed was used, also obtained using a pseudo-random number generator. When comparing the observed colours and luminosity of a given object with that of a model, each iteration uses N=50 evolutionary sequences of 'discrete' models of Extended mode. The number of iterations per object ranged from 2 to 4. So the number of simulations of a randomly sampled discrete IMF per object varies from 50 to 200. Number of simulated Extended mode models for each pair of mass M_ cl and age t_ cl estimates ranged from 6800 to 13600. The errors of the age and mass estimates for the case of continuous IMF models have been calculated as follows. Using the evolutionary sequences for the star cluster colour indices in the U, B, V bands, simulated for a fixed model grid node (i,j), the coefficients of the third- or fourth-degree interpolation polynom were calculated for a time interval corresponding to the selected node (i,j) with a minimum value of the parameter (O-C)_i,j. Knowing the functional (polynomial) correlation between colour and age, and the observational error of the colours used, we applied the Gauss law of error propagation to determine the accuracy of the age estimates. Similarly, using the functional correlation between the model luminosity and the cluster mass, as well as the observational errors of the integrated luminosities, the accuracies of the mass estimates were defined. 
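One simple way to realize such a randomly populated IMF is to draw individual stellar masses from the Salpeter law by inverse-transform sampling until the target cluster mass is reached. The sketch below is one possible implementation under that assumption, not necessarily the procedure used here; the seed value is illustrative.

import numpy as np

def sample_salpeter_imf(m_cluster, m_lo=0.15, m_hi=100.0, alpha=2.35, seed=None):
    """Draw stellar masses from a Salpeter IMF (dN/dm ~ m^-alpha) until the
    summed mass reaches the target cluster mass m_cluster (all in M_sun)."""
    rng = np.random.default_rng(seed)
    a = 1.0 - alpha
    masses, total = [], 0.0
    while total < m_cluster:
        u = rng.random()
        m = (m_lo ** a + u * (m_hi ** a - m_lo ** a)) ** (1.0 / a)
        masses.append(m)
        total += m
    return np.array(masses)

# Example: one random realization of a 10^4 M_sun cluster.
stars = sample_salpeter_imf(1.0e4, seed=42)
print(stars.size, stars.sum())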
The IMF discreteness significantly affects the luminosity and colours of the cluster, as manifested by flashes and fluctuations in the evolutionary path of the cluster's photometric parameters, caused by the appearance of red giants. There is also a systematic bias between luminosities and colours of main sequence clusters and the predictions by standard SSP models <cit.>. The luminosity evolution curve of the discrete cluster model has the form of tilted oscillations and consists of relatively short time scale intervals of recurrent events. During a single time interval, there are a slow, gradual increase in cluster luminosity and an almost instantaneous outburst, caused by the evolution of the brightest star in the main sequence and its eventual transformation into a bright, short-lived red supergiant. After the supergiant's decline, the process is repeated by the evolution of the next brightest star on the main sequence and its transformation into a red giant. Note that the behaviour of the colour and luminosity evolution curves of the discrete model, described above, is stronger in the case of small cluster masses M_ cl, when the number of red giants is rare and the clusters spend most of the time as clusters with stars of the main sequence. As the cluster mass M_ cl increases, the colour and luminosity evolution curves of the discrete model converge to those of the standard continuous model. The errors of the age and mass estimates for the case of discrete IMF models have been calculated as follows. Based on the model colours corresponding to the selected node (i,j) with the minimum value of parameter (O-C)^ min_i,j, the corresponding interval on the colour evolution curve of the discrete model between two 'red flashes' was selected and the coefficients of the interpolation polynomial were calculated. Then, as in the case of the continuous model, knowing the functional relationship between age and colour, as well as colour observation errors, we applied the Gauss law of error propagation to determine the accuracy of the age estimates. Similarly, using the functional correlation between the model luminosity and the cluster mass within the interval between two 'red flashes', as well as observational errors of the integrated luminosities, the accuracies of the mass estimates were determined. In summary, the errors of age and mass estimates, calculated here, take into account the influence of colour and luminosity observation errors only. The influence of the accuracy of the choice of a model grid node, on the estimates of ages and masses, we have not considered, just making sure that the minimum parameter (O-C)^ min_i,j must not exceed the observational error of the colours and luminosities, Δ_U-B, Δ_B-V, Δ_M_B. We have therefore not detected the effect of increasing inaccuracy when considering discrete models of small masses. We compared the mass and age estimates, obtained using the continuously and randomly populated IMF, in Fig. <ref>. The top diagrams show that M_ rand and t_ rand are systematically larger than M_ cont and t_ cont, respectively. This difference decreases for high-massive (M>5·10^4 M_⊙) and ageing (t>50 Myr) stellar groupings. The differences between the mass estimates, obtained from continuous and random models for groups in the mass range of 5·10^3 - 10^4 M_⊙, are larger than for groups of higher mass. 
Above, we have already noted the effect of IMF discreteness on the luminosity of the cluster, manifested as a systematic bias between the luminosities of main-sequence clusters with a randomly populated IMF and the predictions of standard SSP models; this bias was noted in <cit.> and discussed thoroughly in <cit.> and <cit.>. The bias between the luminosities of the discrete and standard models is stronger at low masses and decreases with increasing cluster mass, which is apparent in our estimates of the masses of the stellar groupings explored here.

The maximum differences in the age estimates are observed for objects with t=5-12 Myr (Fig. <ref>). This difference results from the fact that, in this age interval, red giants are rare in models with a randomly populated IMF, and most of the cluster stars are main-sequence stars. Models with a continuously populated IMF and ages t=5-12 Myr always contain red giants, which shift their colours towards the red relative to those of models with a discrete IMF. With increasing age, the colour bias between continuous and discrete IMF models decreases. The bias also decreases at low ages t<5 Myr <cit.>. The use of various methods for determining ages in the range of 5-10 Myr gives results that differ by an order of magnitude or more <cit.>.

The maximum difference in log M_ cont-log M_ rand is observed for objects with t_ cont=5-12 Myr and t_ rand=10-30 Myr (Fig. <ref>). For the youngest (t<4 Myr) and oldest (t>50 Myr) stellar groupings, it usually does not exceed 0.2 dex. The age difference log t_ cont-log t_ rand is systematically negative for low-mass objects with M<3·10^3 M_⊙. For most stellar groupings with M>5·10^3 M_⊙, this difference does not exceed 0.2 dex (Fig. <ref>).

In general, the data presented in Fig. <ref> correspond to the conclusions of <cit.> and <cit.> about the need to use models with a randomly populated IMF for star clusters with M<1·10^4 M_⊙, where the IMF discreteness effect is particularly strong. At low masses, models with a discrete IMF remain for a relatively long time similar to a pure main-sequence cluster model, free of red giants, because red giants are few. Continuous models cannot resemble main-sequence clusters because they always contain a fraction of red giants. Therefore, at small masses, the luminosity of the main-sequence branch of a cluster can be 2-3 mag below the luminosity of a continuous track, and the colours are correspondingly bluer, Δ(B-V)≈0.1-0.5 mag <cit.>.

As noted by <cit.>, the mean value of model simulations with a randomly sampled, discretely populated IMF converges to the results of a 'standard' model with a continuously populated IMF. Note, however, that the 'random sampling' mode gives the entire distribution of possible age and mass values for a set of photometric parameters fixed from the observations, in contrast with the 'standard' mode. At larger masses, on the other hand, the associated distributions approach Gaussians and the relative variance decreases <cit.>, so only the mean (and variance) is required for inference. Indeed, as the star cluster mass increases, the IMF population density increases, converging to a continuously populated IMF density, and the results converge to the inferences of the 'standard' model. Thus, at M_ cl≥ 10^4 M_⊙ the bias between the discrete and continuous models is comparable to or less than the errors of our observations, and the application of continuously populated IMF models is more rational, as it reduces the computational time.
In closing Section <ref>, we note again that the bias between the discrete and continuous models is consistent with the systematic excess of M_ rand over M_ cont noted above, especially at low masses, where 'red flashes' in the case of a discrete IMF are rare and the luminosity of the cluster is determined by the emission of main-sequence stars. Since the continuous IMF models have, for a given mass, a luminosity excess compared to the discrete models, the continuous option requires a lower mass to reproduce the same luminosity. The older age estimates obtained with the discrete model can be similarly explained by the blue colour bias of the discrete models with respect to the continuous ones.

§ RESULTS

§.§ Catalogue of young stellar groupings

The catalogue is available in electronic form at http://lnfm1.sai.msu.ru/~gusev/sfr_cat.html and is also presented as additional online material on the paper page of the MNRAS website. The following data are presented in the catalogue columns: (1) ID of the region; (2) galaxy name (NGC, IC, or UGC); (3) ID of the object within a galaxy; (4, 5) apparent coordinates in the plane of the sky, with respect to the galaxy centre, in units of arcseconds; positive values correspond to the northern (4) and western (5) positions; (6, 7) deprojected galactocentric distances in units of kpc (6) and in units of isophotal radius R_25 (7), where R_25 is the radius at the isophotal level 25 mag arcsec^-2 in the B band corrected for the Galactic extinction and inclination effects; (8) apparent total B magnitude; (9) absolute magnitude M(B), M(B) = B - 5log D - 25, where D is an adopted distance in units of Mpc (see Table <ref>); (10) B magnitude uncertainty; (11–18) apparent colour indices U-B (11), B-V (13), V-R (15), and V-I (17) with their uncertainties (12, 14, 16, 18); (19, 20) logarithm of spectrophotometric Hα+[N ii] flux (19), where the flux is in units of erg s^-1cm^-2, for all galaxies except NGC 266, NGC 3184, and NGC 6217, and logarithm of Hα flux for NGC 266, NGC 3184, and NGC 6217, and their uncertainties (20); (21) absolute magnitude M(B)_0^i, corrected for the Galactic extinction and inclination effects; (22–25) the colour indices (U-B)_0^i (22), (B-V)_0^i (23), (V-R)_0^i (24), and (V-I)_0^i (25), corrected for the Galactic extinction and inclination effects; (26) the same as column (19), but corrected for the Galactic extinction and inclination effects, F(Hα+[N ii])_0^i (F(Hα)_0^i); (27) R-Hα index, R- Hα = R+2.5log F( Hα+[N ii]) for all galaxies except NGC 266, NGC 3184, and NGC 6217, and R- Hα = R+2.5log 1.35F( Hα) for NGC 266, NGC 3184, and NGC 6217, where R is in magnitudes, and F(Hα+[N ii]), F(Hα) are the fluxes in units of erg s^-1cm^-2; (28) gas-to-stars morphology (2 – optical radiation from stars coincides with ionized gas emission (class 2), 1 – photometric (stellar) radiation centre is displaced from the centre of gas emission (class 1), 0 – no gas emission within the area of optical radiation from stars (class 0), –1 – no Hα data); (29, 30) extinction A(B) and its uncertainty Δ A(B) in units of magnitude calculated from the Balmer decrement; (31, 32) equivalent width EW(Hα) in units of Å and its uncertainty; (33, 34) logarithm of equivalent width EW(Hα) and its uncertainty; (35, 36) metallicity Z and its uncertainty; (37) relative contribution of nebular continuum and emission lines to the total observed flux in the B band, I_B( gas)/I_B( total); (38, 39) 'true' absolute magnitude M(B)_ true, corrected for extinction and nebular emission
contribution, and its uncertainty; (40–47) 'true' colour indices (U-B)_ true (40), (B-V)_ true (42), (V-R)_ true (44), and (V-I)_ true (46), corrected for extinction and nebular emission contribution, and their uncertainties (41, 43, 45, 47); (48, 49) the same as columns (19, 20), but corrected for extinction A, I(Hα+[N ii]) (I(Hα)), and their uncertainties; (50, 51) age t in units of Myr, and its uncertainty; (52, 53) mass M in units of 10^4 M_⊙, and its uncertainty; (54) estimated diameter in units of pc; (55) structure of the region (1 – separate object with a star-like profile, 2 – double object, 3 – triple object, 4 – separate object with a diffuse profile, 5 – ring structure, 6 – complex structure (more than three separate objects), 10...60 – the same as 1...6, but the object is a brighter part (core) of a more extended star forming region).

We give the parameters of the Hα+[N ii] (Hα) lines: the F, F_0^i, I fluxes and the equivalent widths EW(Hα) for all H ii regions associated with stellar groupings, including cases where (i) the photometric radiation centre is displaced from the gas emission centre (class 1) and (ii) the gas emission is absent within the area of radiation from stars (class 0). At the same time, the R-Hα index was not calculated for the class 0 objects. Gas metallicity Z is assumed to be equal to the metallicity of the stellar population for objects of any gas-to-stars morphology. For the objects without Hα emission within the area of radiation from stars (class 0) we assume A(B)=A(B)_ Gal+A(B)_ in. The gas contribution I_B( gas)/I_B( total) is assumed to be 0 for the class 0 objects.

In the catalogue, we present 'true' colours and absolute magnitudes only for objects of classes 2 and 0. In doing so, the 'true' colours and absolute magnitudes for objects without gas emission within the area of radiation from stars are equal to the colours and magnitudes corrected for the Galactic extinction and inclination effects. We did not calculate the colour index (V-R)_ true for stellar groupings with an extremely high contribution of gas emission in the R band, I_R( gas)/I_R( total)>0.4. Masses and ages of stellar groupings were estimated for objects of both classes 2 and 0. We did not include masses and (or) ages for some groupings, for which the estimates of m and (or) t were obtained with low accuracy (Δ m/m≥1, Δ t/t≥1 if t+Δ t≤10 Myr).

§.§ Physical parameters of stellar population in star formation regions

The completeness limits for the object samples differ from galaxy to galaxy (Fig. <ref>), since the observations of the galaxies were carried out with different telescopes and with different total exposures. In the most deeply exposed NGC 628 and NGC 6946, the sample is complete up to apparent B magnitudes of ≈22.0 and ≈21.7 mag, respectively (Fig. <ref>). The remaining galaxies in the sample were observed with lower exposures. The total distribution of the 1510 stellar groupings has a maximum at ≈21.3 mag (Fig. <ref>). For the galaxies with the worst signal-to-noise ratio, the object samples are complete up to m(B)≈20 mag.

We constructed the luminosity function for stellar groupings in the galaxies with the largest numbers of identified star formation regions, NGC 628 and NGC 6946. We used a standard power-law luminosity function of the form dN(L_m(B))/dL_m(B) = β L_m(B)^α, which was converted to the form log N = a× m(B)+b for the fitting, where the variables α, β in equation (<ref>) and a, b in equation (<ref>) are related as α = -2.5a-1 and β = 2.5(ln 10)^-1 10^(b+4.8a), respectively.
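The conversion between the fitted coefficients a, b of the log N - m(B) relation and the luminosity-function parameters α, β can be illustrated with a few lines of Python. The magnitude binning and the simple least-squares fit below are a simplified stand-in for the actual fitting procedure, and the grouping 10**(b + 4.8*a) reflects our reading of the typeset formula above; in practice only bins brighter than the completeness limit should be fitted.

import numpy as np

def fit_luminosity_function(m_B, bin_width=0.5):
    # Fit log N = a*m(B) + b to binned apparent magnitudes and convert the
    # coefficients to the power-law parameters of dN/dL = beta * L^alpha,
    # using alpha = -2.5*a - 1 and beta = 2.5/ln(10) * 10**(b + 4.8*a)
    # (the relations quoted in the text).
    bins = np.arange(m_B.min(), m_B.max() + bin_width, bin_width)
    counts, edges = np.histogram(m_B, bins=bins)
    centres = 0.5 * (edges[:-1] + edges[1:])
    good = counts > 0                      # keep only populated bins
    a, b = np.polyfit(centres[good], np.log10(counts[good]), 1)
    alpha = -2.5 * a - 1.0
    beta = 2.5 / np.log(10.0) * 10.0 ** (b + 4.8 * a)
    return alpha, beta

Here m_B is an array of apparent B magnitudes of the stellar groupings in one galaxy; the returned alpha corresponds to the slopes quoted in the next paragraph.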
The constructed star formation region luminosity functions are shown in Fig. <ref>. The luminosity functions have slopes α=-1.80±0.05 for NGC 628 and α=-1.69±0.05 for NGC 6946. These slopes are close to the typical values of ∼-2 for H ii regions and young open clusters in spiral galaxies <cit.>. In particular, <cit.> obtained α = -1.7±0.1 for H ii regions in NGC 628 from integral-field spectroscopy (as part of the PHANGS-MUSE survey), which is in agreement with our measurements based on the archival long-slit data.

Most of the young stellar groupings are located, as expected, in regions of the developed spiral structure at galactocentric distances 0.1-0.7 r/R_25 (Fig. <ref>). At the same time, we identified 22 young objects at distances of 1.10-1.74 r/R_25. Six of them are located in the irregular galaxy NGC 5585. The remaining sixteen are along the minor axis of the highly inclined disc of NGC 7721, and the accuracy of their galactocentric distances is low. We believe that the inclination of the disc of NGC 7721 (81°) is overestimated in the LEDA catalogue. The histogram of the distribution of stellar groupings by absolute distances to the centre (left panel of Fig. <ref>) is similar in shape to the distribution in the right panel of the figure. This is a consequence of the fact that the four galaxies with the largest numbers of identified star forming regions (NGC 628, NGC 3184, NGC 3726, and NGC 6946) have similar sizes, with R_25=10.9-13.3 kpc (see Table <ref>).

The size distribution of stellar groupings is strongly influenced by selection effects associated with the fact that the galaxies of our sample span a wide range of distances from the Milky Way. Among nearby galaxies (d<30 Mpc), the size distribution of stellar groupings has a power-law form with a maximum at ≈70 pc (Fig. <ref>). Although resolution effects also play a role here, this diameter, 70 pc, is typical for stellar associations <cit.>, and the power law of the size function for stellar associations and H ii regions is well known <cit.>. In distant galaxies, where we cannot resolve individual associations, the size distribution has a maximum at d=500-600 pc, which is a typical size of star complexes <cit.>.

Gas in most of the studied H ii regions has a sub-solar metallicity, Z∼0.01 ≃ 0.55 Z_⊙ (Fig. <ref>). Because we had to use different empirical calibration methods (see Section <ref>), there are systematic discrepancies between the measured metallicity values. Whereas the R, S, O3N2, and NS calibrations are in good agreement with each other, the H ii-ChiMistry method gives, on average, 0.1-0.2 dex higher values of O/H (see the bottom panel of Fig. <ref>), close to the solar ones. We note, however, that errors in determining the chemical abundance have a small effect on the estimates of the age and mass of stellar groupings, since the difference in luminosities and colours between evolutionary sequences of different metallicities does not exceed the typical errors in the measured 'true' luminosities and colour indices of stellar groupings (see the sequences in the colour-magnitude and colour-colour diagrams below). The radial distribution of the metallicities of the H ii regions in Fig. <ref> shows a gradient typical for discs of spiral galaxies <cit.>.
Among the objects of our sample with available data in the Hα line (1347 out of 1510), the majority (917 out of 1347, or 68%) are star formation regions in which the photometric (stellar) radiation centre is displaced from the gas emission centre (class 1), 291 (22%) objects are H ii regions in which the optical radiation from stars coincides with the ionized gas emission (class 2), and 139 (10%) objects have no gas emission within the area of optical radiation from stars (class 0; Fig. <ref>). The evolutionary classification scheme of <cit.> predicts objects of class 1 to be between 4-5 and 8-10 Myr old, and objects of class 2 to be younger than 4-5 Myr. Taking into account that the youngest star formation regions with an age of 1-2 Myr are not visible in the optical due to the high extinction in the surrounding gas-dust cloud <cit.>, as well as selection effects, owing to which younger and dusty objects have a larger m(B) than objects of class 1 of the same luminosity, a ratio of 3:1 for stellar groupings of class 1 and class 2 seems reasonable.

Spectral data are not available for every object in our sample; therefore, we were able to obtain the Balmer decrement and estimate the extinction A(B) in star formation regions only for 604 objects in the catalogue. We present in Fig. <ref> the distribution of star formation regions by the intrinsic extinction computed from the Balmer decrement, A(B), corrected for the Galactic extinction and the dust extinction due to the inclination of a galaxy, A(B)_ Gal+A(B)_ in. The typical extinction in H ii regions is A(B)-A(B)_ Gal-A(B)_ in≈0.5 mag. For some regions, it can reach 4 mag, but usually it does not exceed 2 mag (Fig. <ref>). Note that among the regions for which the Balmer-decrement extinction A(B)<A(B)_ Gal+A(B)_ in, the regions with a displaced gas emission centre (class 1) dominate. Among the regions of class 2, we found only 47 objects (16%) in which the negative A(B)-A(B)_ Gal-A(B)_ in exceeds the errors Δ A(B). Note that the mean Δ A(B)=0.26 mag for class 2 objects and 0.32 mag for class 1 objects.

Negative values of A(B)-A(B)_ Gal-A(B)_ in are, as a rule, not the result of errors in spectroscopic measurements. A(B)_ in is the average value for the galaxy as a whole. However, extinction in a galaxy is not constant; it has radial and vertical gradients, as well as local variations. Therefore, A(B)_ in for a particular cluster depends on its galactocentric distance and vertical distance from the disc plane. It may be less than the average for the galaxy as a whole. There are also specific cases in which the Galactic extinction A(B)_ Gal can differ from the average value for an extragalactic cluster. An example is the galaxy NGC 6946 (see Table <ref>), located at a low Galactic latitude (b=11.7°). Small local changes in extinction in the Milky Way can give significant deviations of A(B)_ Gal over the field of the galaxy <cit.>.

The distribution of objects by the gas emission contribution to the total radiation in the photometric broadbands is similar to what we obtained in fig. 11 of <cit.>. The characteristic nebular contribution in the B band is about 10% (Fig. <ref>). Among regions of class 1, a relative excess of objects with small (<5%) and large (>40%) nebular contributions is observed. Apparently, in the first case, we have spectral observations in which the slit passed through the photometric (stellar) centre of the H ii region, and in the second case, it passed through the centre of gas emission.
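For reference, a minimal sketch of how an extinction A(B) can be derived from an observed Balmer decrement, as used for the estimates discussed above, is given below. The Case B intrinsic ratio of 2.86 and the reddening-curve coefficients are standard assumptions introduced here for illustration; they are not necessarily the exact values adopted in this paper.

import numpy as np

def extinction_from_balmer(f_halpha, f_hbeta,
                           k_halpha=2.53, k_hbeta=3.61, k_B=4.1,
                           intrinsic_ratio=2.86):
    # Extinction A(B) in magnitudes from the observed Balmer decrement.
    # E(B-V) follows from comparing the observed Halpha/Hbeta ratio with
    # the Case B value; the k coefficients correspond to a Cardelli-type
    # reddening curve and are assumptions for this sketch.
    ratio_obs = f_halpha / f_hbeta
    ebv = 2.5 / (k_hbeta - k_halpha) * np.log10(ratio_obs / intrinsic_ratio)
    return k_B * ebv

# Example: an observed decrement of 3.3 gives A(B) of roughly 0.6 mag
print(extinction_from_balmer(3.3, 1.0))

Subtracting A(B)_Gal + A(B)_in from such an estimate gives the intrinsic extinction of the region, as plotted in the histogram referenced above.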
In Fig. <ref> we present the colour-magnitude diagram B-V versus M(B) for the 'true' colours and luminosities of the stellar population of the studied star formation regions of classes 0 and 2, as well as of open star clusters in the Milky Way from the catalogue of <cit.>. As can be seen from the figure, the vast majority of objects are well described by the synthetic evolutionary SSP sequences for a continuously and a randomly populated IMF. Note that among the young stellar groupings without Hα emission (class 0) there are no high-mass objects with M>2·10^5 M_⊙. The reason for this will be discussed in Section <ref>.

Figure <ref> shows that the brightest open star clusters in our Galaxy and the faintest stellar groupings from our sample are superimposed on each other in the colour-magnitude diagram. This confirms the conclusion of <cit.> that extragalactic young stellar groupings and open star clusters in the Milky Way form a continuous sequence of masses and ages and represent a single evolutionary sequence of objects at different stages of their evolution.

We present colour-colour diagrams of the colour indices, corrected for the Galactic extinction and the dust extinction due to the inclination of a galaxy, for all objects of our sample in Fig. <ref>. As can be seen from the figure, most of the star formation regions are well superimposed on the evolutionary sequences of young stellar systems with t≤10 Myr. The exception is the (B-V)_0^i-(V-R)_0^i diagram, where a 'tail' of objects with an anomalously large (V-R)_0^i is observed. The excess radiation in the R band is due to the large contribution of gas emission lines to this photometric band. The star formation regions with no gas emission (class 0) and the regions for which there are no Hα data lie more compactly along the evolutionary sequences in the colour-colour diagrams than the regions with Hα emission (classes 1 and 2). This is due to selection effects: objects without gas emission were selected based only on their colour indices (U-B)_0^i and (B-V)_0^i (see Section <ref>). The largest scatter in the colour-colour diagrams is observed for regions with a displaced gas emission centre (class 1), because the gas emission contribution and the 'true' (Balmer) absorption in the H ii region are not taken into account here.

In the colour-colour diagrams showing the 'true' colours of stellar groupings (Fig. <ref>), the objects are located more compactly along the evolutionary sequences than the star formation regions in the (U-B)_0^i-(B-V)_0^i, (B-V)_0^i-(V-R)_0^i, and (B-V)_0^i-(V-I)_0^i diagrams (Fig. <ref>). This may indicate the correctness of our estimates of the nebular emission contribution and of the extinction calculated from the Balmer decrement. Some of the stellar groupings of class 2, with (B-V)_ stars∼ 0.4 and (U-B)_ stars>0, are well described only by the evolutionary sequences with a randomly populated IMF in the (U-B)-(B-V) diagram. In the (U-B)_0^i-(B-V)_0^i diagram, there are no star formation regions with such (U-B)_0^i and (B-V)_0^i (Fig. <ref>). Stellar groupings with gas emission (class 2) have systematically smaller colour indices U-B and B-V than regions without gas emission (class 0). This is especially clearly seen in the (U-B)-(B-V) diagram (Fig. <ref>). This is an expected result, reflecting the fact that star formation regions with gas emission should, on average, be younger and bluer.

Amongst the 430 stellar groupings of classes 0 and 2, we were able to estimate the mass for 409 and the age for 391 objects using the evolutionary models.
Most star clusters have masses in the range of 3·10^3-3·10^5 M_⊙ (Fig. <ref>). Two thirds of the stellar groupings can be attributed to massive star clusters with M>1·10^4 M_⊙. The minimum masses were obtained for stellar grouping no. 465 in NGC 628 (430 M_⊙) and for nos. 286, 301, 367 in NGC 628 and no. 634 in NGC 3184 (790 M_⊙). According to our estimates, stellar complexes no. 1439 in NGC 7678 and no. 890 in NGC 5351 have the maximum masses (3·10^7 and 1.3·10^7 M_⊙, respectively). As noted above, among the objects without gas emission (class 0) there are no high-mass star complexes with M>2.2·10^5 M_⊙. We also did not find stellar groupings of class 0 with masses M<1.1·10^3 M_⊙ (Fig. <ref>).

The age range of the stellar groupings turned out to be unexpectedly wide: from 1 to 560 Myr (Fig. <ref>). A total of 154 regions (39%) are younger than 10 Myr, and another 137 objects (35%) have ages of 10-25 Myr. Objects without gas emission (class 0), as expected, turned out to be on average older than regions with Hα emission. The boundary at which regions of class 0 begin to predominate is an age of 15-16 Myr (Fig. <ref>). Objects without Hα emission are practically absent among very young (t<4 Myr) and relatively old (t>130 Myr) stellar groupings (Fig. <ref>). A probable reason for the presence of gas emission in stellar groupings older than 10 Myr is prolonged or multi-burst star formation, which is poorly described in terms of SSP evolutionary models (see Section <ref> for details).

In order to verify that the measured properties of the ionized gas and stars presented in our catalogue are consistent with what is typically observed in nearby galaxies by other authors, we compare their distributions to those derived from the PHANGS-MUSE <cit.> and PHANGS-HST <cit.> data. Within the PHANGS survey, 19 nearby galaxies were mapped with MUSE and 38 galaxies with HST, while only one galaxy (NGC 628) is also in our catalogue. From these data, the properties of about 30000 H ii regions <cit.> and about 100000 young compact star clusters <cit.> and OB associations <cit.> were derived, and the corresponding catalogues are currently among the most comprehensive sources of the resolved observational properties of H ii regions and star clusters in nearby galaxies. In Fig. <ref> we show the distributions of the gas-phase oxygen abundances, EW(Hα), and the total stellar masses and ages of the star groupings in our study (blue histograms), and those taken from the PHANGS catalogues (orange histograms) derived from the MUSE (for oxygen abundance and equivalent width) and HST (for stellar mass and age) observations. As follows from these plots, the regions from our catalogue cover roughly the same range of metallicities and stellar masses, although the PHANGS-HST data are more sensitive and complete at the low-mass end of the star cluster distribution. The fraction of relatively old star clusters is significantly higher in our sample, probably because we study larger stellar groupings (thus their average age can be older than the age of the youngest individual compact clusters there), while PHANGS-HST resolves more compact star clusters and young stellar associations. Finally, the values of EW(Hα) measured for our sample are slightly lower than for the PHANGS-MUSE H ii regions. We note here that the latter values were corrected for contamination by the background old stellar population, which is quite strong in IFU data like PHANGS-MUSE.
<cit.> showed that the background-corrected values of EW(Hα) from the PHANGS-MUSE catalogue are about an order of magnitude larger than the observed ones, and in Fig. <ref> we show the corrected values from that paper. Our measurements rely mostly on long-slit data and suffer less from this effect, although the slight displacement between the distributions can be due to the fact that we did not perform such a correction for our measurements (but also due to the generally older ages of the stellar population).

§ DISCUSSION

The duration of star formation is approximately proportional to the mass of a molecular cloud and of the star grouping formed from it. Star formation in massive star complexes lasts ∼20 Myr <cit.>. Probably, in the most massive star complexes, we observe the hydrogen emission from the last, recent starburst. At the same time, the first starburst could have occurred relatively long ago. In the absence of a recent burst of star formation, the colour indices of such complexes are not well described by SSP models, as several generations of stars can be observed in the same place. Therefore, we have not identified any star complexes without gas emission (class 0) with a mass M>2.2·10^5 M_⊙ (see Figs. <ref>, <ref>). The relatively long duration of star formation in large star complexes can also explain the fact that there are no low-mass stellar systems (M<10^4 M_⊙) among the 'oldest' stellar groupings (t>130 Myr) of either class (2 or 0).

As we noted in the Introduction, EW(Hα) appears to be one of the commonly used age indicators. A definite advantage of this indicator is its insensitivity to interstellar extinction. In Fig. <ref> we compare EW(Hα) to the R- Hα index (also independent of reddening) that we introduced in Section <ref>. From the definition of the index, log EW(Hα)∼0.4(R- Hα), i.e. the meaning of the index is the same as that of EW(Hα), assuming the underlying stellar continuum has a flat shape. The formally derived linear regression coefficient between these two indices for the considered star formation groupings is equal to 0.407, very close to 0.4. The resulting relation between the indices can be described as log EW( Hα) = 0.4(R- Hα)+(8.5±0.3). However, the scatter in Fig. <ref> and the derived uncertainty of the offset in equation <ref> (±0.3 dex) do not allow its accurate application to real data. In particular, there is a large group of H ii regions, mostly of class 1, with anomalously low EW(Hα) for a given R- Hα index (Fig. <ref>). Note that the mean error Δlog EW(Hα) is equal to 0.06 dex for the objects in Fig. <ref>.

Analysing the EW(Hα) obtained by different authors for the same H ii regions, we found that the differences in the measured EW(Hα) can reach a factor of 2-3. This is also why, after averaging (see Section <ref>), the EW(Hα) errors in Fig. <ref> and in the catalogue turned out to be larger than the EW(Hα) errors of individual measurements. In our opinion, there are two main reasons for the large scatter in the EW(Hα) measurements. The first one is related to the fact that the EW(Hα) values are sensitive to the choice of the background area (the continuum from the underlying stellar disc). Small changes in the continuum under the Hα line lead to significant changes in the EW(Hα) value. The second reason, which is specific to slit spectroscopy, is related to the fact that different parts of an H ii region with different EW(Hα) can fall into the slit.
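For reference, the relation between the R-Hα index and EW(Hα) introduced above can be written as a small helper. The offset of 8.5 is the regression value from the equation quoted in the text and carries a ±0.3 dex uncertainty, so the result should be treated as an order-of-magnitude estimate; for NGC 266, NGC 3184, and NGC 6217 the index is defined with 1.35 F(Hα) instead of F(Hα+[N ii]).

import numpy as np

def r_halpha_index(R_mag, f_halpha_nii):
    # R-Halpha index = R + 2.5*log10(F(Halpha+[N II])), with the flux in
    # erg s^-1 cm^-2 (the definition used for most galaxies in the catalogue).
    return R_mag + 2.5 * np.log10(f_halpha_nii)

def ew_halpha_from_index(r_halpha, offset=8.5):
    # EW(Halpha) in Angstroms estimated from the index via
    # log EW = 0.4*(R-Halpha) + offset; the offset carries a +/-0.3 dex
    # uncertainty, so this is only a rough estimate.
    return 10.0 ** (0.4 * r_halpha + offset)

# Example: R = 18.0 mag and F(Halpha+[N II]) = 1e-14 erg/s/cm^2
idx = r_halpha_index(18.0, 1e-14)      # = -17.0
print(idx, ew_halpha_from_index(idx))  # EW ~ 10**(0.4*(-17)+8.5) ~ 50 A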
Additional scatter in the EW(Hα) measurements can also arise from variations of the filling factors in H ii regions and of the fraction of ionizing photons which escape from the nebulae. Therefore, the use of EW(Hα) as an age indicator in H ii regions should be treated with caution.

We plotted the age versus EW(Hα) and the age versus R- Hα index diagrams in Fig. <ref> for the objects with the most precisely measured EW(Hα) and t. Starburst99 evolutionary models <cit.> show that EW(Hα) in H ii regions decreases from >1000Å in the youngest star clusters to ≈30-40Å in regions with an age of 10 Myr <cit.>. The lowest EW(Hα) values for a given age in Fig. <ref> correspond to those predicted by the Starburst99 models; however, the largest values (∼1000Å) are found for young stellar regions of all ages, up to t=17 Myr. A similar picture is observed in the age versus R- Hα index diagram: excluding the youngest regions with t≈1 Myr, the minimum values of the index fall from -15 to -18 in the age range of 3-15 Myr, while the maximum indices remain constant, R- Hα=-14 (Fig. <ref>). Apparently, the lack of the strong correlation between age and EW(Hα) that is predicted by evolutionary models is due to the complex star formation history in the H ii regions, which is poorly described by SSP models. This is also indicated by the fact that there are no objects with a low R- Hα index (<-16.5) among young low-mass (M<10^4 M_⊙) stellar groupings. At the same time, we believe that the introduced R- Hα index can be used when EW(Hα) is not available because of the absence of spectral observations of star formation regions. Given the weakness of the correlation with the age of the regions derived from SED fitting, one cannot rely on either of these indices for precise age-dating without properly addressing the two most probable sources of the scatter mentioned above (contamination by the background stellar population, uncertainties in the escape fraction of the ionizing quanta, and the differences in the area covered by the aperture). Integral field spectroscopy is necessary to overcome some of the limitations related to the incomplete coverage of an H ii region. However, even then the measured EW(Hα) values often disagree with those predicted from models <cit.> and only weakly correlate with the stellar association ages measured from SED fitting <cit.>.

Our sample contains various types of young stellar objects: OB associations, open clusters, stellar aggregates, and star complexes. Their dynamical evolution is different. Large stellar aggregates with sizes >150 pc and star complexes with d≈500-600 pc have a complex structure and contain conglomerates of star clusters and associations. H ii regions and associations may expand with age inside the complex, but the size of the complex depends fundamentally on the physical parameters of the surrounding interstellar matter and the magnetic field <cit.>. Modern high-resolution studies of resolved H ii regions using Hubble Space Telescope data show that the size of an H ii region is a function of the age of the stellar population <cit.>. However, this dependence is observed only up to an age of 5-6 Myr and a diameter of 40 pc. The sizes of H ii regions (future star clusters) larger than 40 pc depend only weakly on the age of the stellar population <cit.>. However, the 'age–size' relation is observed for star associations over a wider range of ages and sizes <cit.>.
Unfortunately, as we noted in the Introduction, the youngest (t≤10 Myr) clusters and associations are poorly differentiated by their parameters <cit.>, and the minimum linear resolution of our observations is 30-40 pc in the nearest galaxies. Figure <ref> illustrates the dependence between the age and the size of the studied stellar groupings. To make a homogeneous sample, multiple (double, triple, and complex) objects have been excluded from the graph (see the comments for column (55) of the catalogue). Visually, we do not find any correlation between the age and the size of the young stellar groupings in the figure. However, we can separate large star complexes, a few stellar aggregates, and numerous star clusters. They vary in size but do not show an 'age–size' dependence. However, the distribution of young (t<10 Myr), small (d<120 pc) stellar groupings on the graph seems to indicate the presence of an 'age–size' relation: larger star associations are older. A diffusion-driven expansion, which produces a relation t∼ d^2 between age and size, seems to play the main role here (see Fig. <ref>).

A correlation between the sizes and masses of giant molecular clouds (GMCs), M∼ d^2, was found for the first time by <cit.> and has been repeatedly confirmed since <cit.>. The mass–size relation for young star complexes was found to be close to that of GMCs <cit.>. It reflects the fact that young star complexes are the direct descendants of GMCs. Using the GMC sample of <cit.> and their own sample of young massive clusters, <cit.> gave a relation M∼ d^(2.0±0.3) for young star complexes and M∼ d^(1.9±0.1) for GMCs (see Fig. <ref>). However, more recent studies of star clusters and GMCs have shown more complex relations between their masses and sizes <cit.>.

The dependence between the sizes and masses of the stellar groupings from our sample is presented in Fig. <ref>. As in the 'age–size' diagram (Fig. <ref>), we excluded multiple objects from the graph. We also excluded objects with mass estimation errors >20%. The stellar groupings from our sample are in fairly good agreement with the dependence M∼ d^2 obtained by <cit.> for the size range from 50 to 1000 pc (Fig. <ref>). Note that small stellar groupings (d=50-100 pc) also fit the 'size–mass' dependence well (<cit.> studied star clusters larger than 100 pc). At the same time, our data also agree well with the results of <cit.> for the stellar conglomerations with the highest surface brightness from their sample (upper side of the cyan triangle in the figure). The 'size–mass' relation observed for star clusters, associations, and complexes is a relic of the same dependence for their ancestors, the GMCs. The vertical shift between the GMCs and the stellar groupings in the diagram is due to the star formation efficiency: only a fraction of the gas in the GMCs will form stars <cit.>. The width of the band occupied by the stellar groupings along the dependence line M∼ d^2 testifies to the differing efficiency of star formation in different regions, which is usually 1-7%.

§ CONCLUSIONS

In this paper, we present the results of the analysis of the catalogue comprising parameters of 1510 young stellar groupings. This catalogue is based on the combination of spectroscopic, photometric, and Hα spectrophotometric data for star formation regions in 19 galaxies. We have studied the morphology of the stellar groupings and their relation to the associated Hα emission regions.
Extinctions for 743, metallicities for 402, ages for 391, and masses for 409 stellar groupings were estimated. We used a continuously populated IMF for high-mass clusters (M>1·10^4 M_⊙) and a randomly populated IMF for star clusters with M<1·10^4 M_⊙ in the evolutionary synthesis models to estimate the ages and masses of the stellar groupings. It is shown that the method we use for estimating the age and mass of the stellar component in star formation regions is applicable only for objects in which the optical radiation from stars coincides with the ionized gas emission and for objects without gas emission within the area of optical radiation from stars. Note that the number of regions with a displacement between the centres of gas emission and photometric (stellar) radiation is 3 times greater than the number of regions where the optical radiation from stars coincides with the gas emission.

The derived masses of stellar groupings range from 430 M_⊙ in the nearby galaxy NGC 628 to 3·10^7 M_⊙ in the distant NGC 7678. Most stellar groupings have masses in the range of 3·10^3 M_⊙ - 3·10^5 M_⊙. Two thirds of the stellar groupings can be attributed to massive star clusters with M>1·10^4 M_⊙. The range of ages of the stellar groupings is from 1 to 560 Myr. One third of the regions are younger than 10 Myr, and another one third of the objects have ages of 10-25 Myr. The age boundary at which regions without gas emission begin to predominate over objects with Hα emission is 15-16 Myr.

The lower mass estimates for the regions in NGC 628, NGC 3184, and NGC 6946 overlap with the mass interval of the young Milky Way open clusters. This is an argument for the existence of a uniform evolutionary sequence of extragalactic star formation regions and Galactic open clusters at different stages of their evolution. The introduced R- Hα index = R+2.5log F( Hα+[N ii]) can be used when EW(Hα) is not available because of the absence of spectral observations of star formation regions.

§ ACKNOWLEDGMENTS

We are extremely grateful to the anonymous referee for his/her helpful and constructive comments. The authors would like to thank A. E. Piskunov (Institute of Astronomy of the Russian Academy of Sciences) for helpful consultations. The authors acknowledge the use of the HyperLeda database (<http://leda.univ-lyon1.fr>), the NASA/IPAC Extragalactic Database (<http://ned.ipac.caltech.edu>), the Strasbourg Astronomical Data Center (CDS, <https://cds.u-strasbg.fr>), the Sloan Digital Sky Survey (SDSS, <http://www.sdss.org>), the Padova group online server CMD (<http://stev.oapd.inaf.it>), the European Southern Observatory Munich Image Data Analysis System (eso-midas, <http://www.eso.org/sci/software/esomidas>), and the SExtractor program (<http://sextractor.sourceforge.net>). This study was supported by the Russian Foundation for Basic Research (project no. 20-02-00080). This research has been supported by the Interdisciplinary Scientific and Educational School of Moscow University 'Fundamental and Applied Space Research'.

§ DATA AVAILABILITY

The catalogue is presented as additional online material to this paper on the MNRAS website. It is also available in electronic form at <http://lnfm1.sai.msu.ru/~gusev/sfr_cat.html>. Some of the images are available in the NASA/IPAC Extragalactic Database at <http://ned.ipac.caltech.edu>. Spectral data are available in the Sloan Digital Sky Survey at <http://www.sdss.org>, the Strasbourg Astronomical Data Center at <https://cds.u-strasbg.fr>, or in the corresponding papers.
Our own UBVRI and Hα observational data can be shared on reasonable request to the corresponding author. [Adamo et al.(2013)]adamo2013 Adamo A., Östlin G., Bastian N., Zackrisson E., Livermore R. C., Guaita L., 2013, http://dx.doi.org/10.1088/0004-637X/766/2/105ApJ, 766, 105 [Adamo et al.2017]adamo2017 Adamo A., Ryon J. E., Messa M., Kim H., Grasha K., Cook D. O., Calzetti D., et al., 2017, http://dx.doi.org/10.3847/1538-4357/aa7132ApJ, 841, 131 [Albareti et al.(2017)]albareti2017 Albareti F. D. et al., 2017, http://dx.doi.org/10.3847/1538-4365/aa8992ApJS, 233, id. A25 [Artamonov, Bruevich & Gusev Artamonov et al.1997]artamonov1997 Artamonov B. P., Bruevich V. V., Gusev A. S., 1997, Astron. Rep., 41, 577 [Artamonov et al.(1999)]artamonov1999 Artamonov B. P., Badan Y. Y., Bruyevich V. V., Gusev A. S., 1999, Astron. Rep., 43, 377 [Artamonov, Badan & Gusev Artamonov et al.2000]artamonov2000 Artamonov B. P., Badan Y. Y., Gusev A. S., 2000, http://dx.doi.org/10.1134/1.1307552Astron. Rep., 44, 569 [Bastian et al.(2005)]bastian2005 Bastian N., Gieles M., Efremov Y. N., Lamers H. J. G. L. M., 2005, http://dx.doi.org/10.1051/0004-6361:20053165A&A, 443, 79 [Bastian et al.(2006)]bastian2006 Bastian N., Emsellem E., Kissler-Patig M., Maraston C., 2006, http://dx.doi.org/10.1051/0004-6361:20053793A&A, 445, 471 [Bastian et al.(2009)]bastian2009 Bastian N., Trancho G., Konstantopoulos I. S., Miller B. W., 2009, http://dx.doi.org/10.1088/0004-637X/701/1/607ApJ, 701, 607 [Baumgardt et al.(2013)]baumgardt2013 Baumgardt H., Parmentier G., Anders P., Grebel E. K., 2013, http://dx.doi.org/10.1093/mnras/sts667MNRAS, 430, 676 [Belley & Roy(1992)]belley1992 Belley J., Roy J.-R., 1992, http://dx.doi.org/10.1086/191621ApJS, 78, 61 [Berg et al.(2013)]berg2013 Berg D. A., Skillman E. D., Garnett D. R., Croxall K. V., Marble A. R., Smith J. D., Gordon K., Kennicutt R. C., Jr., 2013, http://dx.doi.org/10.1088/0004-637X/775/2/128ApJ, 775, id. A128 [Berg et al.(2015)]berg2015 Berg D. A., Skillman E. D., Croxall K. V., Pogge R. W., Moustakas J., Johnson-Groh M., 2015, http://dx.doi.org/10.1088/0004-637X/806/1/16ApJ, 806, id. A16 [Bertelli et al.(1994)]bertelli1994 Bertelli G., Bressan A., Chiosi C., Fagotto F., Nasi E., 1994, A&AS, 106, 275 [Bolatto et al.(2008)]bolatto2008 Bolatto A. D., Leroy A. K., Rosolowsky E., Walter F., Blitz L., 2008, http://dx.doi.org/10.1086/591513ApJ, 686, 948 [Bresolin & Kennicutt(1996)]bresolin1996 Bresolin F., Kennicutt R. C., Jr., 1996, http://dx.doi.org/10.1086/118073ApJ, 112, 1009 [Bresolin, Kennicutt & Garnett Bresolin et al.1999]bresolin1999 Bresolin F., Kennicutt R. C., Garnett D. R., 1999, http://dx.doi.org/10.1086/306576ApJ, 510, 104 [Bressan et al.(2012)]bressan2012 Bressan A., Marigo P., Girardi L., Salasnich B., Dal Cero C., Rubele S., Nanni A., 2012, http://dx.doi.org/10.1111/j.1365-2966.2012.21948.xMNRAS, 427, 127 [Brown & Mathews(1970)]brown1970 Brown R. L., Mathews W. G., 1970, http://dx.doi.org/10.1086/150483ApJ, 160, 939 [Bruevich et al.(2007)]bruevich2007 Bruevich V. V., Gusev A. S., Ezhkova O. V., Sakhibov F. K., Smirnov M. A., 2007, http://dx.doi.org/10.1134/S1063772907030043Astron. Rep., 51, 222 [Bruevich, Gusev & Guslyakova Bruevich et al.2010]bruevich2010 Bruevich V. V., Gusev A. S., Guslyakova S. A., 2010, http://dx.doi.org/10.1134/S106377291005001XAstron. Rep., 54, 375 [Bruevich, Gusev & Guslyakova Bruevich et al.2011]bruevich2011 Bruevich V. V., Gusev A. S., Guslyakova S. A., 2011, http://dx.doi.org/10.1134/S1063772911040019Astron. 
Rep., 55, 310 [Bruzual(2002)]bruzual2002 Bruzual G., 2002, IAU Symp., 207, 616 [Calzetti(2001)]calzetti2001 Calzetti D., 2001, http://dx.doi.org/10.1086/324269PASP, 113, 1449 [Caplan & Deharveng(1986)]caplan1986 Caplan J., Deharveng L., 1986, A&A, 155, 297 [Chandar et al.(2010)]chandar2010 Chandar R. et al., 2010, http://dx.doi.org/10.1088/0004-637X/719/1/966ApJ, 719, 966 [Cedrés et al.(2012)]cedres2012 Cedrés B., Cepa J., Bongiovanni Á., Castañeda H., Sánchez-Portal M., Tomita A., 2012, http://dx.doi.org/10.1051/0004-6361/201219571A&A, 545, id. A43 [Cerviño(2013)]cervino2013 Cerviño M., 2013, http://dx.doi.org/10.1016/j.newar.2013.09.001New Astronomy Reviews, 57, 123 [Chen et al.(2014)]chen2014 Chen Y., Girardi L., Bressan A., Marigo P., Barbieri M., Kong X., 2014, http://dx.doi.org/10.1093/mnras/stu1605MNRAS, 444, 2525 [Chen et al.(2015)]chen2015 Chen Y., Bressan A., Girardi L., Marigo P., Kong X., Lanza A., 2015, http://dx.doi.org/10.1093/mnras/stv1281MNRAS, 452, 1068 [Copetti, Pastoriza & Dottori Copetti et al.1986]copetti1986 Copetti M. V. F., Pastoriza M. G., Dottori H. A., 1986, A&A, 156, 111 [Dale et al.(2009)]dale2009 Dale D. A. et al., 2009, http://dx.doi.org/10.1088/0004-637X/703/1/517ApJ, 703, 517 [de Grijs & Anders(2006)]grijs2006 de Grijs R., Anders P., 2006, http://dx.doi.org/10.1111/j.1365-2966.2005.09856.xMNRAS, 366, 295 [de la Fuente Marcos & de la Fuente Marcos(2009)]marcos2009 de la Fuente Marcos R., de la Fuente Marcos C., 2009, http://dx.doi.org/10.1088/0004-637X/700/1/436ApJ, 700, 436 [Efremov(1989)]efremov1989 Efremov Y. N., 1989, Sites of Star Formation in Galaxies: Star Complexes and Spiral Arms. Fizmatlit, Moscow, p. 246 (in Russian) [Efremov(1995)]efremov1995 Efremov Y. N., 1995, http://dx.doi.org/10.1086/117728AJ, 110, 2757 [Efremov & Elmegreen(1998)]efremov1998 Efremov Y. N., Elmegreen B., 1998, http://dx.doi.org/10.1046/j.1365-8711.1998.01819.xMNRAS, 299, 588 [Efremov, Ivanov & Nikolov Efremov et al.1987]efremov1987 Efremov Y. N., Ivanov G. R., Nikolov N. S., 1987, http://dx.doi.org/10.1007/BF00644467Ap&SS, 135, 119 [Efremov, Afanasiev & Egorov Efremov et al.2011]efremov2011 Efremov Yu. N., Afanasiev V. L., Egorov O. V., 2011, http://dx.doi.org/10.1134/S1990341311030035Astrophys. Bull., 66, 304 [Elmegreen(1994)]elmegreen1994 Elmegreen B. G., 1994, http://dx.doi.org/10.1086/174623ApJ, 433, 39 [Elmegreen(2002)]elmegreen2002 Elmegreen B. G., 2002, http://dx.doi.org/10.1086/324384ApJ, 564, 773 [Elmegreen(2009)]elmegreen2009 Elmegreen B. G., 2009, in Andersen J., Bland-Hawthorn J., Nordström B., eds, Proc. IAU Symp. 254, The Galaxy Disk in Cosmological Context. Kluwer, Dordrecht, p. 289 [Elmegreen(2011)]elmegreen2011 Elmegreen B. G., 2011, in Charbonnel C., Montmerle T., eds, Ecole Evry Schatzman 2010: Star Formation in the Local Universe. EAS Publications Series, 51. Cambridge Univ. Press, Cambridge, p. 31 [Elmegreen & Efremov(1996)]elmegreen1996 Elmegreen B. G., Efremov Y. N., 1996, http://dx.doi.org/10.1086/177554ApJ, 466, 802 [Elmegreen et al.(1996)]elmegreen1996b Elmegreen B. G., Elmegreen D. M., Salzer J. J., Mann H., 1996, http://dx.doi.org/10.1086/177634ApJ, 467, 579 [Elmegreen et al.(2000)]elmegreen2000 Elmegreen B. G., Efremov Y., Pudritz R. E., Zinnecker H., 2000, in Mannings V., Boss A. P., Russell S. S., eds, Protostars and Planets IV. Univ. of Arizona Press, Tucson, p. 179 [Elmegreen, Elmegreen & Leitner Elmegreen et al.2003a]elmegreen2003a Elmegreen B. G., Elmegreen D. M., Leitner S. 
N., 2003a, http://dx.doi.org/10.1086/374860ApJ, 590, 271 [Elmegreen et al.(2003b)]elmegreen2003b Elmegreen B. G., Leitner S. N., Elmegreen D. M., Cuillandre J.-C., 2003b, http://dx.doi.org/10.1086/376411ApJ, 593, 333 [Elmegreen et al.(2006)]elmegreen2006 Elmegreen B. G., Elmegreen D. M., Chandar R., Whitmore B., Regan M., 2006, http://dx.doi.org/10.1086/503797ApJ, 644, 879 [Elson & Fall(1985)]elson1985 Elson R. A. W., Fall S. M., 1985, http://dx.doi.org/10.1086/163693ApJ, 299, 211 [Emsellem et al.2022]Emsellem2022 Emsellem E., Schinnerer E., Santoro F., Belfiore F., Pessa I., McElroy R., Blanc G. A., et al., 2022, http://dx.doi.org/10.1051/0004-6361/202141727A&A, 659, id. A191 [Epinat, Amram & Marcelin Epinat et al.2008]epinat2008 Epinat B., Amram P., Marcelin M., 2008, http://dx.doi.org/10.1111/j.1365-2966.2008.13796.xMNRAS, 390, 466 [Ferguson, Gallagher & Wyse Ferguson et al.1998]ferguson1998 Ferguson A. M. N., Gallagher J. S., Wyse R. F. G., 1998, http://dx.doi.org/10.1086/300456AJ, 116, 673 [Fouesneau et al.(2014)]fouesneau2014 Fouesneau M. et al., 2014, http://dx.doi.org/10.1088/0004-637X/786/2/117ApJ, 786, id. A117 [García-Benito et al.(2010)]garsia2010 García-Benito R. et al., 2010, http://dx.doi.org/10.1111/j.1365-2966.2010.17269.xMNRAS, 408, 2234 [Gatto et al.(2021)]gatto2021 Gatto M. et al., 2021, http://dx.doi.org/10.1093/mnras/stab2297MNRAS, 507, 3312 [Gieles & Portegies Zwart(2011)]gieles2011 Gieles M., Portegies Zwart S. F., 2011, http://dx.doi.org/10.1111/j.1745-3933.2010.00967.xMNRAS, 410, L6 [Girardi et al.(2000)]girardi2000 Girardi L., Bressan A., Bertelli G., Chiosi C., 2000, http://dx.doi.org/10.1051/aas:2000126A&AS, 141, 371 [Gouliermis et al.(2017)]gouliermis2017 Gouliermis D. A. et al., 2017, http://dx.doi.org/10.1093/mnras/stx445MNRAS, 468, 509 [Groves et al.2023]Groves2023 Groves B., Kreckel K., Santoro F., Belfiore F., Zavodnik E., Congiu E., Egorov O. V., et al., 2023, http://dx.doi.org/10.1093/mnras/stad114MNRAS, 520, 4902 [Grudić et al.(2021)]grudic2021 Grudić M. Y., Diederik Kruijssen J. M., Faucher-Giguére C.-A., Hopkins P. F., Ma X., Quataert E., Boylan-Kolchin M., 2021, http://dx.doi.org/10.1093/mnras/stab1894MNRAS, 506, 3239 [Gusev(2006a)]gusev2006a Gusev A. S., 2006a, http://dx.doi.org/10.1134/S1063772906030012Astron. Rep., 50, 167 [Gusev(2006b)]gusev2006b Gusev A. S., 2006b, http://dx.doi.org/10.1134/S1063772906030024Astron. Rep., 50, 182 [Gusev & Efremov(2013)]gusev2013b Gusev A. S., Efremov Y. N., 2013, http://dx.doi.org/10.1093/mnras/stt1019MNRAS, 434, 313 [Gusev & Kaisin(2002)]gusev2002b Gusev A. S., Kaisin S. S., 2002, http://dx.doi.org/10.1134/1.1508063Astron. Rep., 46, 712 [Gusev & Kaisin(2004)]gusev2004 Gusev A. S., Kaisin S. S., 2004, http://dx.doi.org/10.1134/1.1787063Astron. Rep., 48, 611 [Gusev & Park(2003)]gusev2003 Gusev A. S., Park M.-G., 2003, http://dx.doi.org/10.1051/0004-6361:20031215A&A, 410, 117 [Gusev & Shimanovskaya(2019)]gusev2019 Gusev A. S., Shimanovskaya E. V., 2019, http://dx.doi.org/10.1093/mnras/stz1881MNRAS, 488, 3045 [Gusev et al.(2002)]gusev2002 Gusev A. S., Zasov A. V., Kaisin S. S., Bizyaev D. V., 2002, http://dx.doi.org/10.1134/1.1508062Astron. Rep., 46, 704 [Gusev, Zasov & Kaisin Gusev et al.2003]gusev2003b Gusev A. S., Zasov A. V., Kaisin S. S., 2003, http://dx.doi.org/10.1134/1.1579782Astron. Lett., 29, 363 [Gusev et al.(2007)]gusev2007 Gusev A. S., Myakutin V. I., Sakhibov F. K., Smirnov M. A., 2007, http://dx.doi.org/10.1134/S1063772907030055Astron. Rep., 51, 234 [Gusev et al.(2012)]gusev2012 Gusev A. 
S., Pilyugin L. S., Sakhibov F., Dodonov S. N., Ezhkova O. V., Khramtsova M. S., 2012, http://dx.doi.org/10.1111/j.1365-2966.2012.21322.xMNRAS, 424, 1930 [Gusev, Sakhibov & Dodonov Gusev et al.2013]gusev2013 Gusev A. S., Sakhibov F. H., Dodonov S. N., 2013, http://dx.doi.org/10.1134/S1990341313010045Astrophys. Bull., 68, 40 [Gusev, Egorov & Sakhibov Gusev et al.2014]gusev2014 Gusev A. S., Egorov O. V., Sakhibov F., 2014, http://dx.doi.org/10.1093/mnras/stt1970MNRAS, 437, 1337 [Gusev et al.(2015)]gusev2015 Gusev A. S., Guslyakova S. A., Novikova A. P., Khramtsova M. S., Bruevich V. V., Ezhkova O. V., 2015, http://dx.doi.org/10.1134/S1063772915100029Astron. Rep., 59, 899 [Gusev et al.(2016)]gusev2016 Gusev A. S. et al., 2016, http://dx.doi.org/10.1093/mnras/stw212MNRAS, 457, 3334 [Gusev et al.(2018)]gusev2018 Gusev A. S., Shimanovskaya E. V., Shatsky N. I., Sakhibov F., Piskunov A. E., Kharchenko N. V. 2018, http://dx.doi.org/10.1515/astro-2018-0004Open Astronomy, 27, 98 [Gusev, Sakhibov & Ezhkova Gusev et al.2020]gusev2020 Gusev A. S., Sakhibov F. Kh., Ezhkova O. V., 2020, http://dx.doi.org/10.1134/S1063772920060025Astron. Rep., 64, 375 [Haas et al.(2008)]haas2008 Haas M. R., Gieles M., Scheepmaker R. A., Larsen S. S., Lamers H. J. G. L. M., 2008, http://dx.doi.org/10.1051/0004-6361:20078831A&A, 487, 937 [Hollyhead et al.(2015)]hollyhead2015 Hollyhead K., Bastian N., Adamo A., Silva-Villa E., Dale J., Ryon J. E., Gazak Z., 2015, http://dx.doi.org/10.1093/mnras/stv331MNRAS, 449, 1106 [Hollyhead et al.(2016)]hollyhead2016 Hollyhead K., Adamo A., Bastian N., Gieles M., Ryon J. E., 2016, http://dx.doi.org/10.1093/mnras/stw1142MNRAS, 460, 2087 [Hopkins(2012)]hopkins2012 Hopkins P. F., 2012, http://dx.doi.org/10.1111/j.1365-2966.2012.20730.xMNRAS, 423, 2016 [Ivanov(1991)]ivanov1991 Ivanov G. R., 1991, http://dx.doi.org/10.1007/BF00643841Ap&SS, 178, 227 [James et al.(2004)]james2004 James P. A. et al., 2004, http://dx.doi.org/10.1051/0004-6361:20031568A&A, 414, 23 [Kaplan & Pikelner(1979)]kaplan1979 Kaplan S. A., Pikelner S. B., 1979, The physics of the interstellar medium. Nauka, Moscow, p. 592 (In Russian) [Kennicutt et al.(2008)]kennicutt2008 Kennicutt R. C. Jr, Lee J. C., Funes J. G. J. S., Sakai S., Akiyama S., 2008, http://dx.doi.org/10.1086/590058ApJS, 178, 247 [Kharchenko et al.(2005a)]kharchenko2005a Kharchenko N. V., Piskunov A. E., Röser S., Schilbach E., Scholz R.-D., 2005a, http://dx.doi.org/10.1051/0004-6361:20042523A&A, 438, 1163 [Kharchenko et al.(2005b)]kharchenko2005b Kharchenko N. V., Piskunov A. E., Röser S., Schilbach, E., Scholz R.-D., 2005b, http://dx.doi.org/10.1051/0004-6361:20052740A&A, 440, 403 [Kharchenko et al.(2009)]kharchenko2009 Kharchenko N. V., Piskunov A. E., Röser S., Schilbach E., Scholz R.-D., Zinnecker, H., 2009, http://dx.doi.org/10.1051/0004-6361/200911979A&A, 504, 681 [Kim et al.(2012)]kim2012 Kim H. et al., 2012, http://dx.doi.org/10.1088/0004-637X/753/1/26ApJ, 753, id. A26 [Kim et al.(2021)]Kim2021 Kim J. et al., 2021, http://dx.doi.org/10.1093/mnras/stab878MNRAS, 504, 487 [Kim et al.2023]Kim2023 Kim J. et al., 2023, http://dx.doi.org/10.3847/2041-8213/aca90aApJL, 944, L20. [Knapen et al.(2004)]knapen2004 Knapen J. H., Stedman S., Bramich D. M., Folkes S. L., Bradley T. R., 2004, http://dx.doi.org/10.1051/0004-6361:20041584A&A, 426, 1135 [Konstantopoulos et al.(2009)]konstantopoulos2009 Konstantopoulos I. S., Bastian N., Smith L. J., Westmoquette M. S., Trancho G., Gallagher J. 
http://arxiv.org/abs/2307.04165v1
20230709130711
On IMU preintegration: A nonlinear observer viewpoint and its application
[ "Bowen Yi", "Ian R. Manchester" ]
eess.SY
[ "eess.SY", "cs.SY" ]
1 .001 Yi and Manchester mode = title]On IMU preintegration: A nonlinear observer viewpoint and its applications 1]Bowen Yi[ auid=000,bioid=1, ] [1] [email protected] [1]organization=Robotics Institutes, University of Technology Sydney, postcode=NSW 2006, country=Australia [2]organization=Australian Centre for Robotics, School of Aerospace, Mechanical and Mechatronic Engineering, The University of Sydney, postcode=NSW 2006, country=Australia 2]Ian R. Manchester [email protected] [cor1]Corresponding author (The work has partially been done when the first author was with The University of Sydney.) The inertial measurement unit (IMU) preintegration approach nowadays is widely used in various robotic applications. In this article, we revisit the preintegration theory and propose a novel interpretation to understand it from a nonlinear observer perspective, specifically the parameter estimation-based observer (PEBO). We demonstrate that the preintegration approach can be viewed as recursive implementation of PEBO in moving horizons, and that the two approaches are equivalent in the case of perfect measurements. We then discuss how these findings can be used to tackle practical challenges in estimation problems. As byproducts, our results lead to a novel hybrid sampled-data observer design and an approach to address statistical optimality for PEBO in presence of noise. Nonlinear observer IMU preintegration Robotics Sampled-data estimation [ [ August 12, 2023 =================== § INTRODUCTION State estimation and perception are fundamentally important for autonomous systems <cit.>. Initially, filtering approaches dominated the field of online state estimation due to the limitation of computational capacity <cit.>. In recent years full smoothing approaches which are based on nonlinear batch optimisation have gained popularity in numerous localisation problems, since they provide estimates with high accuracy <cit.>. However, the optimisation-based estimation framework is computationally demanding. This issue is currently becoming more urgent than ever as we have witnessed the trend of utilisation of monocular cameras with IMUs – known as the monocular visual-inertial system (VINS) – in real-world robotic systems. The VINS is an asynchronous sampled system, with IMUs providing measurements at a high rate. As a result, there is the need to calculate the “standard” inertial integration from initial conditions between two camera frames, which thus makes it a daunting task to solve in real time. In <cit.>, Lupton and Sukkarieh propose the IMU preintegration approach to address the above-mentioned computational challenges. It allows pre-processing of the high-rate data from IMU to obtain low-rate pseudo measurements, in which initial conditions and the preintegrated quantities are separated, thus reducing on-line computational burden significantly. Later on, the preintegration approach was extended to kinematic models living on nonlinear manifolds <cit.>, and now is gradually becoming a popular result in the robotics community. More recently, it has been improved and elaborated from several different perspectives, e.g., analytical solutions for graph optimisation <cit.>, approximation via Gaussian process <cit.>, and generalisation on groups <cit.>, just to name a few. Since its introduction, the preintegration approach has been widely applied in various robotic systems, see e.g. <cit.>. 
In this paper we prove that the preintegration approach can be derived following the observer theory for nonlinear systems, in particular the parameter estimation-based observer (PEBO). It is a novel kind of constructive observer technique recently proposed by Ortega et al. in <cit.> and later elaborated in <cit.>, in which state observation is translated into an on-line parameter identification problem; see <cit.> for a geometric interpretation. Recently, we have extended the PEBO methodology from Euclidean space to marix Lie groups, which has been proven instrumental in solving several open problems in observer design for robotic systems <cit.>. Although the approaches of preintegration and PEBO have been pursued in parallel in different communities, it is interesting and generally important to elucidate the connections between these two frameworks. By bridging these distinct bodies of research, this paper aims to unveil their relationship and present the following main contributions. 1) We revisit the preintegration theory and provide a nonlinear observer interpretation to it. Namely, the preintegrated signals are exactly the dynamic extended variables (i.e., fundamental matrices) in PEBO but implemented in a moving horizon. Under some mild assumptions, we establish the equivalence between the preintegration and PEBO approaches. 2) We show the practical utility of the resulting equivalence in addressing several practical challenges encountered in state estimation problems. In particular, it provides a novel solution to design sampled-data observers for continuous-time dynamical systems and enables the attainment of statistical optimality in PEBO in the presence of noisy measurements. The remainder of the paper is organised as follows. In Section <ref> we consider the dynamical models in Euclidean space as an illustrative example to recall the preintegration and PEBO approaches. It is followed by some preliminary results about the connections between two approaches in Euclidean space. In Section <ref>, we present our main results on the manifold SO(3) ×^n, which is the state space considered in numerous robotic and navigation-related problems, and also the original motivation of IMU preintegration. Then, we discuss some applications of the main claim in Section <ref>. The paper is wrapped up by some concluding remarks in Section <ref>. Notation: For a given variable or signal x, sometimes we may simply write x(t) as x_t, and the dependency of signals on t is omitted for brevity when clear. We use x(t_1^-) to denote the value of x just before t_1, i.e. x(t_1^-):= lim_s>0,s→ 0 x(t_1 - s). We use |x| to represent the standard Euclidean norm of a vector. SO(3) represents the special orthogonal group, which is defined as SO(3)={R∈^3×3|R^⊤ R = I_3,  (R) =1}. The operator (·)_× is defined such that a_× b := a× b for two vectors a, b ∈^n. For a variable y, we use y̅ to represent its noisy measurement from sensors. λ_ max{A} denotes the largest eigenvalue of a symmetric matrix A∈^n× n. § PRELIMINARY RESULTS IN EUCLIDEAN SPACE We start with the deterministic systems with states living in Euclidean space to introduce our basic idea. Its extension to the systems on manifolds, which is tailored for pose estimation of rigid bodies, will be presented in the next section. 
§.§ Problem Set In many engineering problems, there is a need to estimate the unknown internal state x ∈𝒳⊂^n for the linear time-varying (LTV) dynamical system ẋ  =  A_t x + B_t u y  =  C_t x + D_t u with input u ∈^m and the output y ∈^p, and we usually consider the state space as ^n. Since sensor noise is unavoidable in practice, the measured signals of u and y satisfy u̅ = u + ϵ_u, y̅ = y + ϵ_y, in which ϵ_u ∈^m and ϵ_y∈^p represent measurement noise, usually modelled as zero-mean white-noise processes. This estimation problem has been well addressed by the Kalman filter and the full-information estimation approach (a.k.a. batch optimisation). In some applications, despite admitting continuous-time models, we are concerned with estimation of the state x at some discrete instants {t_k}_k ∈ℕ. This is because multiple sensors provide information with different rates – sometimes even having obvious time-scale separation. For example, in the problem of visual inertial navigation (VIN) for robotics, the IMU provides data at a very high rate, and it is reasonable to roughly view inertial measurements as some continuous-time signals. In contrast, it is well-known that image processing is relatively computationally heavy, and thus the camera provides data at a low rate. As a result, if the estimation algorithm is being processed at the same rate as the IMU, then it is usually not tractable on-line. In this paper, we make the following assumption. This scenario exists in many practical problems, particularly in robotic systems. The input u̅ is available as a continuous-time signal, and the output y̅ is measured at some discrete instances {t_k}_k∈ℕ. The main results can be extended straightforwardly to discrete-time systems with multi-rate sampled data (i.e. high-frequency input u̅ and low-rate output y̅), and we do not discuss it in this paper. §.§ Preintegration in Euclidean Space To address the state estimation of x under Assumption <ref>, Lupton and Sukkarieh proposed in <cit.> the preintegration approach to generate pseudo-measurements to improve on-line efficiency. Let us recall its basic idea with the LTV model (<ref>) as follows. <cit.> Consider the LTV system (<ref>). Given two instants t_k< t_k+1, there exist a matrix F_k and a vector v_k such that the state satisfies x(t_k+1) = F_k x(t_k) + v_k. for all x(t_k) ∈^n. Its proof is available in <cit.>. We underscore that the matrices F_k and v_k are independent of the state x, which are accessible signals and uniquely determined by the measurable signals A_t,B_t and u_t. Hence, we call F_k and v_k as preintegration, and they can be calculated as F_k = F(t_k+1^-), v_k = v(t_k+1^-), which is generated by the dynamics . Ḟ = A_t F, F(t_k^+) = I_n v̇ = A_t v + B_tu, v(t_k^+) = 0_n  } Note that when implementing preintegration we only have the measurable signal y̅ rather than the perfect output y, and thus the second preintegration is implemented as v̇̅̇ = A_t v̅ + B_tu̅ , v̅(t_k^+) =0_n, where v̅ may be viewed as the noisy signal of v. They can be written as the Picard integral for t ∈ ( t_k, t_k+1 ) F_t = ∫_t_k^t A_s F_s ds, v̅_t = ∫_t_k^t ( A_s v_s + B_s u̅_s ) ds, and implemented numerically via discretization. Now, using the preintegration we obtain the equation (<ref>) that is a new LTV discrete-time dynamical model with known F_k and v_k, and the (nominal) output function y(t_k) = C(t_k) x(t_k) + D(t_k) u(t_k). 
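To make the construction concrete, a minimal numerical sketch of the preintegration pair (F_k, v_k) is given below; it assumes the matrices A_t, B_t and the measured input u̅ are available as Python callables and uses a plain forward-Euler discretisation of the dynamics above. The function and variable names are ours and purely illustrative, not an excerpt from any released implementation.

```python
import numpy as np

def preintegrate(A, B, u_bar, t_k, t_k1, dt=1e-3):
    """Forward-Euler integration of F' = A(t) F and v' = A(t) v + B(t) u_bar(t)
    on (t_k, t_k1], with F(t_k) = I and v(t_k) = 0 (illustrative sketch only)."""
    n = A(t_k).shape[0]
    F = np.eye(n)
    v = np.zeros(n)
    t = t_k
    while t < t_k1 - 1e-12:
        h = min(dt, t_k1 - t)
        At = A(t)
        F = F + h * (At @ F)
        v = v + h * (At @ v + B(t) @ u_bar(t))
        t += h
    return F, v  # pseudo-measurements: x(t_k1) ≈ F @ x(t_k) + v
```

For a time-invariant A and u̅ ≡ 0, the returned F approaches the matrix exponential of A (t_k+1 - t_k) as the step size dt is refined, which gives a quick sanity check of the discretisation.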
Supposed the current moment is t_N, in the full-information estimation (FIE) approach we need to estimate {x(t_k)}_k∈ℓ with ℓ:= {0,…, N}. The simplest case is to consider solving the optimisation (, ) = , min  J_ x() + J_ w() s.t. x̂(t_k+1) - F_k x̂(t_k) - v̅_k = w_k with the cost functions[We assume that y is measured from t_0 without loss of generality.] J_ x() = ∑_k = 0^N-1γ_k |y̅(t_k) - C(t_k) x(t_k) - D(t_k) u̅(t_k) |^2 J_ w() = ∑_k = 0^N-1γ_k'|w_k|^2 and the definitions := ( x_0, …, x_N), := ( x̂_0, …, x̂_N) := (w_0,…, w_N). The coefficients γ_k, γ_k'>0 may be involved to weight different instances, and two widely-used selections are: (i) using the norm inverse of some covariance for the consideration of noise; and (ii) selecting γ_k = λ^N-k with λ∈ (0,1) to represent forgetting factors in on-line deterministic estimators. The above summary of preintegration is presented as a high-level framework, which may be implemented in different ways. For instance, the optimisation problem can be solved for each instance (a.k.a. full-information estimation, FIE), in a moving-horizon, or incrementally as done in LTV Kalman-Bucy filters at the discrete instants {t_k} in a lower sampling rate; the optimisation may also be replaced by computing the optimal maximum a posteriori (MAP) estimate, and combined with factor graphs. To summarise, the basic idea is to use the preintegration (<ref>) to transform the continuous-time model (<ref>) into the discrete model (<ref>) with low-rate measurements, and then complete the estimation task.[To distinguish from the other estimates in the remainder of the paper, we write the estimate from the preintegration approach as _ PI.] Note that a salient feature of (<ref>) is the separation between the preintegrated signals F,v and the initial condition x(t_k), which is capable of reducing significant on-line computational burden in the nonlinear context. .9 State Estimation via Preintegration: - preintegration: (<ref>) - estimate: _ PI - optimisation: (<ref>)-(<ref>) The computational burden of estimation of the original continuous-time system (<ref>) is not prohibitive, due to linearity in the model. However, when considering the visual navigation problem on manifolds, high nonlinearity and non-convexity limit the performance and complicate the analysis of both full-information estimation and filtering approaches. §.§ Parameter Estimation-Based Observer Recently, a new constructive nonlinear observer technique, namely PEBO, has been developed for a class of state-affine systems <cit.>. Its basic idea is translating state estimation into the one of some constant variables and then identifying them online. This provides an efficient way to simplify observer design. Instead of introducing the approach comprehensively, we limit ourselves to the LTV system (<ref>) to show the basic idea of PEBO. Following <cit.>, the first step is to design the dynamic extension ξ̇= A_t ξ + B_t u, ξ(t_0)= ξ_0, with ξ∈^n, in which the initial condition ξ_0 is selected by users thus being known. We underline here that the PEBO approach is developed for the deterministic system with the perfect measurement u, and the robustness to various uncertainties can be addressed from standard Lyapunov analysis. In this subsection, we consider the case with access to the perfect u, and its extension to with the noisy measurement u̅ will be discussed in Section <ref>. If we define the error e:= x -ξ, it yields the error dynamics ė = A_t e. 
As shown in linear systems theory <cit.>, the solution of e is given by e(t) = Φ(t,0) e(0), in which Φ(t,s) is the state transition matrix of A_t from s to t. Though it is generally impossible to write down the function Φ(t,s) analytically, it can be calculated by implementing the dynamics of fundamental matrix Ω on-line Ω̇  =  A_t Ω,     Ω(t_0)= I_n Φ(t,s)  = Ω(t) Ω(s)^-1. Then, we have the new parameterisation to the state x as x_t = ξ_t - Ω_t ξ_0 + Ω_t θ with the unknown vector θ := x(0). It means that once the parameter θ have been determined as θ̂, one has the state estimation as x̂_t = ξ_t - Ω_t ξ_0 + Ω_t θ̂. By plugging the new parameterization of x into (<ref>), we have the linear regression model with respect to θ as follows Y_t = C_tΩ_t θ with the variable Y_t := y_t - C_t ξ_t + C_tΩ_t ξ_0 - D_t u̅_t. Its noisy “measurement” is defined accordingly as Y̅_t := y̅_t - C_t ξ_t + C_tΩ_t ξ_0 - D_t u̅_t. The remainder is to estimate θ from the regressor (<ref>) on-line. With measurements collected at {t_k}_k ∈ℕ, the simplest case at the moment t_N is to solve the optimisation θ̂:= θ∈^nmin ∑_k=0^N-1γ_k | Y̅(t_k) - C(t_k)Ω(t_k)θ|^2, with some coefficients γ_k >0. Hence, the PEBO approach can be summarised below. .9 Parameter Estimation-Based Observer: - dynamics: (<ref>), (<ref>) - estimate (observer output): _ PEBO from (<ref>) - optimisation: (<ref>) For batch optimisation or filtering approaches, it is necessary to impose some “informative” excitation or observability assumptions on the model (<ref>) along the trajectory. There are some observer design tools requiring observability/detectability uniformly along all feasible solutions, e.g., <cit.>; however, this is not the case in various robotic localisation and navigation problems. It means that the optimisation (<ref>) may have multiple or even infinite solutions under an insufficiently excited trajectory. §.§ The PEBO Viewpoint to Preintegration In this section, we provide our new interpretation to the preintegration approach from a nonlinear observer perspective. For states living in Euclidean space with perfect or noise-free measurement of u, we summarise our findings as follows. Consider the LTV system (<ref>) with ϵ_u =0. State estimation from the preintegration approach using (<ref>)-(<ref>) exactly coincides with that from the PEBO (<ref>)-(<ref>) using the zero initial condition ξ_0 = 0_n, in the following senses. a) The preintegration signal F and the fundamental matrix Ω satisfy Ω(t_k)  = ∏_i=0^k-1 F_i := F_k-1… F_0 , ∀ k∈ℕ Ω(t)  =  F(t)Ω(t_k) , t ∈ (t_k, t_k+1). b) The preintegration signal v and the dynamic extension variable ξ verify v_t  = ξ_t - Ω_t Ω(t_k)^-1ξ(t_k) , t ∈ (t_k, t_k+1). c) If the cost function J_ x + J_ w in (<ref>) admits a unique global minimum, the PEBO estimate equals to the one from preintegration, i.e., _ PEBO = _ PI. First, we note that the fundamental matrix Ω shares the same dynamics as the one of the matrix F in preintegration. The only difference is that the latter resets its initial values in instances {t_k^+}_k ∈ℕ. From the semigroup property of the state transition matrix Φ(t,s), as well as the resetting lim_s→ t_k^-F(s) = I_n, we have Φ(t,t_k) = F(t), t∈ (t_k, t_k+1). On the other hand, for t∈ (t_k, t_k+1) we have Ω(t)  = Φ(t,t_0) Ω_0  = Φ(t,t_k)Φ(t_k,t_k-1) ⋯Φ(t_1,t_0) I_n  =  F(t) ∏_i=0^k-1 F_i, which verifies the first claim. For the case ϵ_u =0, we have v(t) = v̅(t) for all t≥ 0. 
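Property a) is also easy to confirm numerically. The following script is purely illustrative (a toy LTV matrix and arbitrary sampling instants, chosen by us): it integrates the global fundamental matrix Ω and the per-interval preintegration matrices F_k with the same Euler scheme and compares Ω(t_k) with the ordered product F_k-1 ⋯ F_0.

```python
import numpy as np

def flow(A, t0, t1, M0, dt=1e-4):
    """Forward-Euler flow of M' = A(t) M from t0 to t1 (illustrative)."""
    M, t = M0.copy(), t0
    while t < t1 - 1e-12:
        h = min(dt, t1 - t)
        M = M + h * (A(t) @ M)
        t += h
    return M

# toy LTV matrix and sampling instants (arbitrary choices)
A = lambda t: np.array([[0.0, 1.0], [-1.0, -0.1 * np.sin(t)]])
ts = [0.0, 0.3, 0.7, 1.2, 1.5]

# global fundamental matrix Omega(t_k), Omega(t_0) = I
Omega = [np.eye(2)]
for tk, tk1 in zip(ts[:-1], ts[1:]):
    Omega.append(flow(A, tk, tk1, Omega[-1]))

# per-interval preintegration F_k, reset to the identity at each t_k
F = [flow(A, tk, tk1, np.eye(2)) for tk, tk1 in zip(ts[:-1], ts[1:])]

# property a): Omega(t_k) equals the ordered product F_{k-1} ... F_0
prod = np.eye(2)
for k, Fk in enumerate(F, start=1):
    prod = Fk @ prod
    print(k, np.allclose(Omega[k], prod, atol=1e-6))
```

Because the Euler flow is linear in its initial condition, the two quantities agree up to rounding, mirroring the semigroup argument above; the same script can be extended to check item b).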
By comparing the dynamics of ξ and v, we have [̇L̇1̇Ṙ]̇v̇-̇ξ̇ =  A_t (v-ξ), thus v_t - ξ_t = Φ(t,s) (v_s -ξ_s), ∀ t_k+1>t≥ s > t_k. Selecting s= t_k, and resetting as done in preintegration lim_s→ t_k^- v(s) = 0_n, then for t∈ (t_k,t_k+1) we have v_t  = ξ_t - Φ(t,t_k)ξ(t_k), which verifies the item b). At the end, let us show the equivalence between the optimisation problems (<ref>) and (<ref>). For the case of ϵ_u = 0 (with perfect measurement of u), we have x(t_k+1) - F_k x(t_k) - v_k =0, and thus J_ x() = J_ x() + J_ w(0) ≤ J_ x() + J_ w(). Since we have assumed the unique minimum of the cost function, the optimisation in the preintegration approach becomes = min  J_ x() with the hard constraint x̂(t_k+1) - F_k x̂(t_k) -v_k =0, k =0,…, N-1. Invoking the properties a)-b), the above optimisation can be written as = ∈^(N+1)nmin ∑_k =0^N-1γ_k |y̅(t_k) - C(t_k) x(t_k) - D(t_k) u̅(t_k) |^2  = ∈^(N+1)nmin ∑_k =0^N-1γ_k |y̅_t_k - C_t_k(ξ_t_k +Ω_t_k( x_0 - ξ_0)) . . - D_t_k u_t_k|^2  = ∈^(N+1)nmin ∑_k =0^N-1γ_k | Y̅(t_k) - C(t_k)Ω(t_k) x_0 |^2, where in the last equation we have used the hard constraints (<ref>). Let us recursively solve (<ref>) – combining the properties a) and b) – we have the new constraint x̂(t_k) = ξ(t_k) + Ω(t_k) x̂_0, which has been plugged into the second equation in (<ref>). It is clear that the cost function in (<ref>) only contains the decision variable x̂_0, which is the first n-elements in , and the solution to the optimisation (<ref>) is thus given by x̂_0  = x_0 ∈^nmin ∑_k =0^N-1γ_k | Y̅(t_k) - C(t_k)Ω(t_k) x_0 |^2 together with (<ref>), and note that Ω and ξ are available signals (i.e. the dynamic extension variables in the PEBO). Obviously, this exactly coincides with the solution _ PEBO for the case with zero initial condition ξ_0 for the dynamic extension. We complete the proof for the term c). The above result establishes the connection between preintegration and PEBO for the LTV dynamical model (<ref>) with the ideal measurements of u. § IMU PREINTEGRATION AND PEBO ON MANIFOLDS In this section, we extend the results in Section <ref> to the extended pose estimation problem on the manifold SO(3) ×^n, which was the original motivation to study the preintegration approach. §.§ IMU Preintegration Let us recall the approach of IMU preintegration, which was proposed in <cit.> and elaborated on the manifold in <cit.>. The motion of rigid body can be charaterised by the kinematic model Ṙ  =  R ω_× v̇  =  a +g ṗ  =  v with the attitude R∈ SO(3), the sensor velocity v ∈^3, the “apparent” acceleration a ∈^3 in the inertial frame {I}, and the rigid-body position p ∈^3, which is briefly written as p. The gravity vector is given by g = [0,0,9.8]^⊤ m/s^2. See <cit.> for a concise representation using the matrix group SE_2(3). The IMU provides discrete-time samples of the biased acceleration and rotational velocity in the body-fixed frame {B}, i.e., a̅  =  a + b_a + ϵ_a ω̅  = ω + b_ω + ϵ_ω, in which b_a and b_ω represent the sensor biases[They are slowly time-varying, but can be modelled as constants.], and ϵ_a and ϵ_ω are measurement noise. §.§.§ Standard inertial integration If the “initial” condition at t_1 is given, then the states (R, v, q) can be uniquely obtained (for the noise-free case) as the Picard integral R(t_2) = R(t_1) + ∫_t_1^t_2 R(s) [ω̅(s)- b_ω]_× ds v(t_2) = v(t_1) + ∫_t_1^t_2 R(s) ( a̅(s) - b_a )ds + Δ_t g p(t_2) = p(t_1) + Δ_t v (t_1) + 1 2Δ_t^2 g + ∬_t_1^t_2 R(s) ( a̅(s) - b_a ) d s^2 with Δ_t:= t_2 - t_1. 
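For reference, a direct discrete-time implementation of the standard inertial integration above is sketched next. It assumes IMU samples (ω̄, ā) with known biases, uses the gravity convention stated above, and updates the attitude with the small-interval exponential (Rodrigues' formula) discussed below; the code is an illustrative sketch with our own naming, not an excerpt from any released implementation.

```python
import numpy as np

def skew(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def so3_exp(phi):
    """Rodrigues' formula for the exponential map R^3 -> SO(3)."""
    angle = np.linalg.norm(phi)
    if angle < 1e-9:
        return np.eye(3) + skew(phi)
    K = skew(phi / angle)
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def imu_step(R, v, p, omega_meas, acc_meas, b_w, b_a, dt,
             g=np.array([0.0, 0.0, 9.8])):
    """One small step of the standard inertial integration (illustrative)."""
    a_inertial = R @ (acc_meas - b_a)             # specific force rotated to {I}
    R_next = R @ so3_exp((omega_meas - b_w) * dt)
    v_next = v + (a_inertial + g) * dt
    p_next = p + v * dt + 0.5 * (a_inertial + g) * dt ** 2
    return R_next, v_next, p_next
```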
If Δ_t is sufficiently small, then the first integral equation in (<ref>) can be approximated by <cit.> R(t_1) ≈ R(t_1) ( ∫_t_1^t_2 (ω̅(s) - b_ω) ds ). Note that for a relatively large Δ t, this does not hold. As is shown in <cit.>, the above standard inertial integration equations have strong nonlinearity and non-convexity with respect to the unknown initial conditions, mainly stemmed from the attitude state R. Between any two key frames, it requires to repeat the above “standard” integration, which yields heavy computational burden for real-time implementation. §.§.§ Inertial preintegration It is well known that IMUs are sampled with a much higher rates than other sensors for navigation or localisation. In <cit.>, it is suggested to integrate the inertial observation between required poses in the body-fixed frame of the previous pose, and then we may view the inertial observations as a single observation in the filter. To be precise, we may define rotation matrix Δ R_t_1^t related to the attitude at t_1, i.e. R(t) = R(t_1) Δ R_t_1^t, with the state at t_1 being Δ R_t_1^t_1 = I_3. In general, the function Δ R_t_1^t does not have an analytic form, but the relative rotation matrix Δ R_t_1^t can be approximated by Δ R_t_1^t ≈( ∫_t_1^t( ω̅(s) - b_ω)ds ) for |t-t_1| sufficiently small. The inertial integration (<ref>) can be equivalently written as R(t_k+1)  =  R(t_k) Δ R_t_k^t_k+1 v(t_k+1)  =  v(t_k) + R(t_k) Δ v_t_k^t_k+1 + Δ_t g p(t_k+1)  =  p(t_k) + Δ_t v (t_k) + 1 2Δ_t^2 g + R(t_k)Δ p_t_k^t_k+1 with the functions for t≥ t_k Δ v_t_k^t = ∫_t_k^tΔ R_t_k^s ( a̅(s) - b_a ) ds Δ p_t_k^t = ∬_t_k^tΔ R_t_k^s( a̅(s) - b_a ) d s^2. Note that the terms Δ v_t_k^t_k+1 and Δ p_t_k^t_k+1 are defined in the body-fixed frame, which can be calculated perfectly – by preintegrating IMU measurements – without the access to the initial conditions (R_t_1, v_t_1, p_t_1). This is the original motivation to study IMU preintegration. §.§.§ Estimation via IMU preintegration The IMU preintegration has been widely used in many robotic applications, e.g., visual inertial SLAM and navigation. In these problems, there are numerous feature points, whose coordinates p_i ∈^3 (i=1,…, n_p) are constant and unknown, i.e., ṗ_i =0 , i=1,…, n_p. Each feature is captured by the camera, thus satisfying some algebraic equations y  = h(x) + ϵ_y with y = ^n_y and the noise ϵ_y, which is the output function (a.k.a. observation models) in the observer theory. We have defined the extended state variable as[We assume that sensors have been well calibrated to simplify the presentation. In more general cases, we may take all biases into the variable x and estimate them on-line simultaneously.] x = (R, v, p, p_1, …, p_n_p) ∈ with the manifold := SO(3) ×^3(2+n_p). At the instance t_N, we would like to estimate the state (t_N):= ( x(t_0),x(t_1), …, x(t_N)). Similar to the case in Euclidean space, we may formulate it as the batch optimisation to estimate the state = ∈^Nmin J_ I () with J_ I :=  ∑_k =0^N-1[ (k) + _ R(k) + _ v(k) + _ p(k) ] and (k)  = | y(t_k) - h(x(t_k)) |^2_Σ_y^-1(k) _ R(k)  = |R(t_k+1) - R(t_k)Δ R_t_k^t_k+1|_Σ_1^-1(k)^2 _ v(k)  = | v (t_k) + R (t_k) Δ v_t_k^t_k+1 + Δ_t g - v (t_k+1) |_Σ_2^-1(k)^2 _ p(k)  = | p_t_k + Δ_t v _t_k + 1 2Δ_t^2 g + R_t_kΔ p_t_k^t_k+1 - p_t_k+1|_Σ_3^-1^2 and Σ_i ≻ 0 (i=1,2,3) are some covariances to characterise the uncertainty in the model (<ref>). If the stochastic properties of ϵ_a and ϵ_w are known in advance, we may use some on-line propagation to approximate Σ_i(k). 
See <cit.> for example, and we omit its details. .9 Estimation via IMU Preintegration on Manifolds: - preintegration: (<ref>), (<ref>) - estimate: _ PI - optimisation: (<ref>)-(<ref>) §.§ Parameter Estimation-Based Observer on Manifolds In this section, we briefly summarise the main results in our previous papers <cit.> about the PEBO design on manifolds. Consider the kinematics (<ref>) with the measurable output in (<ref>). In <cit.>, the observer design is conducted in the body-fixed frame with the dynamics given by Ṙ  =  Rω_× v̇  =  -ω_× v + a̅ - b_a + R^⊤ g ṗ  =  -ω_× p - v, where p is defined as the origin coordinate of {I} in the body-fixed frame, i.e. p := R^⊤ p. In the PEBO approach, we design the dynamic extension Q̇  =  Q ω_× ξ̇  =  A(ω, Q)ξ + B(a̅ , b_a) Ω̇  =  A(ω, Q) Ω Ω(t_0)  =  I, with A(ω,Q ) := -ω_× 0 Q^⊤ -I -ω_× 0 0 0 0, B(a̅, b_a) := a̅ - b_a 0 0. The key observation in <cit.> is that the system state can be linearly parameterised as R_t  =  Q_c Q^⊤_t v p  g_c  = ξ_t - Ω_tξ_0 + Ω_t θ with the unknown constant matrix Q_c ∈ SO(3), and the vector θ:= ( v(0), p(0), g_c). Similar to the case in Euclidean space, we only need to determine (Q_c,θ) and :=(p_1, …, p_n_p), whose estimates are written as (Q̂_c, θ̂,). Then, the estimates of x∈𝒳 is given by x̂_t = (R̂, R̂v̂ ,R̂p̂ , ). with R̂  =  Q̂_c Q_t v̂, p̂, ĝ_c^⊤  = ξ_t - Ω_t ξ_0 + Ω_t θ̂. For the measurements collected at instances {t_k}, the unknown (Q̂_c, θ̂,) can be obtained from the following optimisation: (Q̂_c, θ̂, )  = Q_c ∈ SO(3) θ∈^9, ∈^3n_pmin ∑_k=0^N-1(k) ĝ_c = Q̂_c^⊤ g with defined in (<ref>). The main result of PEBO on manifolds is summarised as follows. .92 PEBO on manifolds: - dynamics: (<ref>) - estimate (observer output): _ PEBO from (<ref>)-(<ref>) - optimisation: (<ref>) §.§ The PEBO Viewpoint to IMU Preintegration We are in the position to present the main result of the paper. Similarly to the case in Euclidean space, we establish the connection between IMU preintegration and PEBO on manifolds as follows. Consider the kinematics (<ref>) with constant p_i (i=1,…, n_p). The estimation of the state of the IMU preintegration (<ref>)-(<ref>) converges to the estimate of the PEBO (<ref>)-(<ref>) as min_j=1,2,3(λ_ max{Σ_j}) → 0, in the following sense. a) The preintegration of Δ R_s^t and the extended state Q satisfy Q(t_0)^⊤ Q(t_k)  = ∏_i=0^k-1'Δ R_t_k^t_k+1 := Δ R_t_0^t_1…Δ R_t_k-1^t_k for all k∈ℕ. b) If the cost function has a global minimum, then the estimates from the PEBO and the IMU preintegration satisfy _ PI→_ PEBO λ_ max{Σ_j}→ 0  (j=1,2,3). The property a) is straightforward to verify because Δ R_t_0^t_1…Δ R_t_k-1^t_k = Δ R_t_0^t_k and d dt(RQ^⊤) = 0. When the largest eigenvalue of Σ_j converges to zero, the last three terms in (<ref>) make (<ref>) as the hard constraints. For the fact b), we need to show that the constraint (<ref>) together with ĝ_c = Q̂_c^⊤ g in PEBO yields the constraint (<ref>) in IMU preintegration. To see this, for a fixed (constant) estimate θ̂ and defining η := (v̂, p̂, ĝ_c) we have η̇  = ξ̇- Ω̇ξ_0 + Ω̇θ̂  =  A(ω, Q) ξ + B(a̅,b_a) - A(ω, Q) Ω(ξ_0 + θ̂)  =  A(ω, Q) η + B(a̅, b_a). Now, consider the coordinate transformation η↦ z= [ z_1; z_2; z_3 ] := [ R̂v̂; R̂p̂; Q̂_c ĝ_c ] . In the transformed coordinate, the dynamics verifies ż_1 = R( a- b_a) + g ż_2 = z_1 ż_3 = 0. Considering the constraint in (<ref>), we may equivalently select the decision variable as (R̂, z_1,z_2, ), and the change of decision variable does not affect the minimum of the cost function . 
In the new coordinate, z_1 and z_2 satisfy z_1(t_k+1)  =  z_1(t_k) + R(t_k) Δ v_t_k^t_k+1 + Δ_t g z_2(t_k+1)  =  z_2(t_k) + Δ_t v (t_k) + 1 2Δ_t^2 g + R(t_k)Δ p_t_k^t_k+1. It exactly coincides with (<ref>). Hence, following the same arguments in the proof of Proposition <ref>, we can show that the estimates from these two approaches are exactly the same. § DISCUSSION AND APPLICATIONS §.§ Discussions In this section, we present some further remarks and applications following from the connections between pre-integration and PEBO. First, let us make some comparisons between two frameworks of PEBO and preintegration. The preintegration approach may be roughly viewed as the implementation of PEBOs in a moving horizon, i.e., the “initial moment” is recursively defined as {t_k}_k∈ℕ and then the task is to estimate the state x(t_k). In PEBO, we only need to estimate the initial condition at t_0. For the ideal case with perfect models and measurements, these two frameworks exactly coincide with each other, as illustrated in Proposition <ref>. In the pose estimation-related problems, the IMU preintegration utilises the body-fixed frame for accelerations and velocities; in contrast, the PEBO in our previous works <cit.> adopts the inertial frame. In IMU preintegration, it is possible to write the state transition matrix analytically for the ( v, p)-subsystem; see (<ref>). In PEBO, we need to calculate the state transition matrix for the ( v, p)-subsystem numerically, but it brings two benefits: B1: The sensor bias b_a appears in the dynamics (<ref>) in a linear way. As shown in <cit.>, we are able to construct a linear regression model on the unknown bias b_a using the PEBO methodology. B2: In some applications, we do not need the estimation of attitude R. By applying PEBO in the body-fixed frame, we are able to estimate ( v, p, ) directly without the information of attitude. In the generalised PEBO approach <cit.>, there is a need to calculate the fundamental matrix Ω(t) over time in (<ref>). Though its dynamics is forward complete, the variable Ω is unbounded when the matrix A_t is unstable. Since Ω is part of the internal state in the observer, at some finite time the observer would become dramatically ill-conditioned and impossible to represent accurately in memory. As a result, it may bring some numerical issues and make the observer very sensitive to sorts of perturbations. For this consideration, it is reasonable to implement a PEBO in “moving horizons” like preintegration in order to improve robustness. When considering the uncertainty from the input-output measurements, the estimates from the PEBO and preintegration approaches would be different. In PEBO, we only need to solve the optimisation problem with the decision variable θ (equivalently x_0) at a single instance; in contrast, the hard constraint (<ref>) does not hold in the preintegration approach, and there are additional decision variables {x_k, w_k}_k∈ℕ. For this case, their relation resembles the single and multiple shootings in the direct methods for optimal control. State estimation via recursive algorithms under Assumption <ref> is known as the problem of sampled-data (or digital) observers <cit.>. Even for linear time-invariant (LTI) systems, there are still several open problems to design a sampled-data observer <cit.>. An useful application of the proposed equivalence between preintegration and PEBO is providing a novel method to design sampled-data observers. We will present constructive details in the next subsection. 
§.§ Application I: Sampled-data Observer via Preintegration In this section, we show that the proposed equivalence provides a new method to design a hybrid sampled-data observer for the LTV system (<ref>). We summarise the results as follows. To simplify the presentation, as well as to obtain asymptotic stability claims, we consider the ideal measurements (u,y) in the following proposition. Consider an observable LTV system (<ref>). Assume the sampled instances {t_k}_k ∈ℕ are selected such that P1: The pair (Φ(t_k+1, t_k), C(t_k)) is (discrete-time) uniformly completely observable, where Φ(·,·) is the continuous-time state transition matrix of A_t defined in (<ref>). P2: There exists a constant k_2 ∈ℕ_+ such that W_q := ∑_i= k^k+k_2Ψ(i,k) Q Ψ^⊤(i,k) ≻δ_q I_n for some Q ≻ 0, δ_q>0 and ∀ k∈ℕ with Ψ(i,k) the discrete-time state transition matrix of z_k+1 = Φ(t_k+1,t_k) z_k. Then, the hybrid sampled-data observer . Ḟ = A_t F, F(t_k^+) = I_n v̇ = A_t v + B_tu, v(t_k^+) = 0_n. F_k = F(t_k+1^-), v_k = v(t_k+1^-).  } ℋ_1 . x̂_k+1 = F_k x̂_k + v_k + K_k+1 e_k+1 e_k+1 = y_t_k+1 - C_t_k+1 (F_k x̂_k + v_k) - D_t_k+1u_t_k+1 K_k = P̂_k C_k^⊤ [C_k P̂_k C_k^⊤ + R] P̂_k+1 = F_k P_k F_k^⊤ + Q P_k = P̂_k - K_k C_k P̂_k. } ℋ_2 with some positive definite matrices Q and R, provides a globally asymptotically convergent estimate x̂, i.e. lim_k→∞ |x̂_k - x(t_k)| =0. According to Propositions <ref>-<ref>, the systems state x at the instances {x(t_k)}_t∈ℕ exactly satisfies the discrete dynamical model x(t_k+1)  =  F_k x(t_k) + v_k y(t_k)  =  C(t_k) x(t_k) + D(t_k) u(t_k), with the preintegration signals F_k and v_k generated from the system _1. Invoking the first equation in (<ref>), we have F_k = Ω(t_k+1) Ω^-1(t_k) = Φ(t_k+1, t_k). As a consequence, the discrete-time uniform complete observablility (UCO) of the pair (Φ(t_k+1,t_k), C(t_k)) implies the UCO of the LTV system (<ref>). Note that the system _2 is the standard Kalman-Bucy filter for the LTV system (<ref>). Together with the condition (<ref>), we conclude the global asymptotic convergence (<ref>) by invoking <cit.>. In the condition P1, it is equivalent to impose the UCO of the discrete-time LTV system (<ref>). It is relatively straightforward to verify the UCO of the continuous-time system (<ref>) is a necessary condition to P1, but it is not sufficient. Consider the constant observable pair (A_0, C_0), and let A = A_0, C(t) = C_0 for t∈ [2k, 2k+1) and C(t)= 0 for t∈ [2k+1, 2k+2) with k ∈ℕ. The resulting pair (A_t,C_t) guarantees the UCO of (<ref>) but not for the system (<ref>) if the sampled data are collected in [2k+1, 2k+2). On the other hand, the condition P1 is unnecessary to design a sampled-data observer. If the observability Gramian is positive definite only in some interval but not uniform over time, it is still possible to design globally convergent state observer by using MHE or some state-of-the-art recursive designs <cit.>. In <cit.>, nonlinear sampled-data observers are classified into two categories: i) design via approximate discrete-time models of the plant; and ii) emulation: discretisation of continuous-time observers. Clearly, the proposed observer belongs to the first class, but we utilise an exact discrete-time model rather than its approximation because of its linearity. Indeed, the proposed design is also applicable to nonlinear systems which can be transformed into the affine form. The proof of Proposition <ref> does not rely on the assumption of periodic sampling. 
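For completeness, the hybrid observer ℋ_1–ℋ_2 is straightforward to code up. The sketch below integrates the preintegration pair on (t_k, t_k+1] with forward Euler and then applies the Kalman-type update at t_k+1; the bracketed term in the gain K_k is read as a matrix inverse, as in the standard Kalman gain, and all names are illustrative choices of ours.

```python
import numpy as np

def hybrid_observer_step(A, B, C, D, u_bar, y_next, t_k, t_k1,
                         x_hat, P, Q, R_cov, dt=1e-3):
    """One cycle of the hybrid observer: H1 (preintegration) between samples,
    then H2 (Kalman-type update) at t_{k+1}. Illustrative sketch only."""
    n = x_hat.size
    # --- H1: integrate F' = A F, v' = A v + B u_bar on (t_k, t_k1] ---
    F, v, t = np.eye(n), np.zeros(n), t_k
    while t < t_k1 - 1e-12:
        h = min(dt, t_k1 - t)
        At = A(t)
        F = F + h * (At @ F)
        v = v + h * (At @ v + B(t) @ u_bar(t))
        t += h
    # --- H2: measurement update at t_{k+1} ---
    x_pred = F @ x_hat + v
    P_pred = F @ P @ F.T + Q
    Ck, Dk = C(t_k1), D(t_k1)
    S = Ck @ P_pred @ Ck.T + R_cov
    K = P_pred @ Ck.T @ np.linalg.inv(S)
    e = y_next - Ck @ x_pred - Dk @ u_bar(t_k1)
    x_new = x_pred + K @ e
    P_new = P_pred - K @ Ck @ P_pred
    return x_new, P_new
```

Nothing in this sketch requires the sampling instants t_k to be equally spaced.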
That is, the proposed sampled-data observer is also immediately applicable to the case with asynchronous measurements, which was studied for the linear time-invariant (LTI) systems <cit.>. We provide a much simpler solution to this specific problem for LTV systems. §.§ Application II: Statistical Optimality in PEBO In this subsection, we will show that the proposed equivalence in Sections <ref>-<ref> leads to an intuitive way to improve the performance of PEBO in the presence of noisy input u. We assume that the initial condition x_0 is a deterministic variable but unknown, and model the noisy terms ϵ_u and ϵ_y as zero-mean white noise processes (<ref>), in which ϵ_u ∈^m and ϵ_y∈^p are addictive zero-mean white-noise processes, namely[Here, the white processes are not rigorously defined due to the δ-covariances, with δ the delta function. A rigorous definition is based on the stochastic differential equations <cit.>.] 𝔼[ϵ_u,tϵ_u,s^⊤] = Σ_u δ(t-s) 𝔼[ϵ_y,tϵ_y,s^⊤] = Σ_y δ(t-s). The variables {ϵ_u}, {ϵ_y} and x_0 are uncorrelated. Then, the error e= x-ξ in PEBO for the LTV system (<ref>) satisfies ė = A_t e + B_t ϵ_u. According to the state covariance propagation for LTV systems <cit.>, we have x_t - ξ_t = Ω_t(θ - ξ_0) + ϵ_e with the white-noise process ϵ_e, i.e. 𝔼[ϵ_e(t) ϵ_e(s)^⊤] = Π_t δ(t-s) and Π_t satisfies Π̇_t = A_t Π_t + Π_t A_t^⊤ + B_t Σ_u B_t^⊤, Π(0)= 0_n× n, where the initial condition of Π is due to the deterministic assumption of x_0. Noting that the uncertainties from u̅ and y̅ in (<ref>), we have Y̅ = C_tΩ_t θ + ϵ_ Y with ϵ_ Y(t) := ϵ_y - D_tϵ_u + C_tϵ_e. Unfortunately, the variables ϵ_e and ϵ_u are not independent, since ϵ_e(t) is indeed filtered from ϵ_u. However, for the LTV system (<ref>) without the feedfoward term, i.e. D_t=0, the variable ϵ_ Y is a white noise process 𝔼[ϵ_ Y(t)ϵ_ Y(s)^⊤] = (Σ_y + C_tΠ_t C_t^⊤) δ(t-s). Hence, we may reformulate the optimisation (<ref>) as θ̂:= θ∈^nmin ∑_k=0^N-1| Y̅(t_k) - C(t_k)Ω(t_k)θ|^2_(Σ_y + C_tΠ_tC_t^⊤)^-1 to obtain some statistic optimality, where Π_t is generated from (<ref>). In <cit.>, the PEBO approach is applicable to nonlinear systems in the form of ẋ = f(x,u), y= h(x,u), for which a coordinate transformation x↦ z:=ϕ(x) exists such that the lifted dynamics is given by ż = A(u,y)z + B(u,y) , y = C(u,y)z + D(u,y). It is generally difficult to calculate covariance propagation for nonlinear systems, but there are many works discussing how to empirically approximate it in the literature on preintegration <cit.>. The proposed connection between two approaches, together with the state-of-the-art development of preintegration, provides a promising way to develop nonlinear stochastic PEBO method. § CONCLUDING REMARKS In this paper, we have presented a novel observer interpretation to the IMU preintegration approach. Our findings reveal an exact correspondence between the preintegrated signals and the dynamic extended variables in PEBO that is implemented in a moving horizon. Furthermore, we have identified the precise conditions under which these two approaches yield identical estimates. These results were developed in both the Euclidean space and matrix Lie groups. Finally, we have utilised the proposed equivalence to design a novel sampled-data observer for LTV systems, and to improve the performance of PEBO in the presence of measurement noise. 

These connections suggest some interesting avenues for future research, including:
- In the preintegration and PEBO approaches, we require that the system dynamics is in (or can be transformed into) a state-affine form (<ref>). It would be interesting to integrate them with contraction analysis <cit.>, for which the so-called differential dynamics is exactly an LTV system.
- In Section <ref>, we show that different coordinates are used in the IMU preintegration and PEBO. For the latter, we adopt the body-fixed coordinates (v, p), and it is interesting to observe the benefit of the linear parameterisation of the bias b_a. This is notable by its absence in the inertial coordinates used for preintegration <cit.>. Hence, it would be of practical interest to implement IMU preintegration in body-fixed coordinates towards real-time bias estimation.
- It is theoretically interesting to elaborate the results in Section <ref> using Itô integrals toward a more rigorous formulation.
§ APPENDIX
§ SOME DEFINITIONS
A pair (A_k, C_k) of discrete-time systems is uniformly completely observable if the observability Gramian satisfies W_O[k,k_1] ≽ δ_o I for some δ_o > 0, k_1 ∈ ℕ_+ and all k ≥ 0, with W_O[k,k_1] := ∑_i=k^k+k_1 Ψ^⊤(i,k) C_k^⊤ C_k Ψ(i,k), in which Ψ(i,k) is the state transition matrix from k to i of the system z_k+1 = A_k z_k.
§ CREDIT AUTHORSHIP CONTRIBUTION STATEMENT
Bowen Yi: Conceptualization, Methodology (propositions), Writing - original draft. Ian R. Manchester: Methodology, Writing - review and editing, Project administration.
§ DECLARATION OF COMPETING INTEREST
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
§ ACKNOWLEDGEMENT
This paper is supported by the Australian Research Council. The first author would like to thank Dr. Chi Jin for bringing IMU preintegration to his attention.
http://arxiv.org/abs/2307.07428v1
20230714154820
BiGSeT: Binary Mask-Guided Separation Training for DNN-based Hyperspectral Anomaly Detection
[ "Haijun Liu", "Xi Su", "Xiangfei Shen", "Lihui Chen", "Xichuan Zhou" ]
eess.IV
[ "eess.IV" ]
IEEE TRANSACTIONS ON IMAGE PROCESSING Liu et al.: BiGSeT for Hyperspectral Anomaly Detection BiGSeT: Binary Mask-Guided Separation Training for DNN-based Hyperspectral Anomaly Detection Haijun Liu, Member, IEEE, Xi Su, Xiangfei Shen, Lihui Chen and Xichuan Zhou, Senior Member, IEEE This paper was supported in part by the National Natural Science Foundation of China under Grant 62001063, Grant 61971072 and Grant U2133211; in part by the Graduate Research and Innovation Foundation of Chongqing, China, under Grant CYB22068; and in part by the China Postdoctoral Science Foundation under Grant 2020M673135. Corresponding author: Xichuan Zhou. H. Liu, X. Su, X. Shen, L. Chen and X. Zhou are with the School of Microelectronics and Communication Engineering, Chongqing University, Chongqing 400044, China. August 12, 2023 ======================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================= Hyperspectral anomaly detection (HAD) aims to recognize a minority of anomalies that are spectrally different from their surrounding background without prior knowledge. Deep neural networks (DNNs), including autoencoders (AEs), convolutional neural networks (CNNs) and vision transformers (ViTs), have shown remarkable performance in this field due to their powerful ability to model the complicated background. However, for reconstruction tasks, DNNs tend to incorporate both background and anomalies into the estimated background, which is referred to as the identical mapping problem (IMP) and leads to significantly decreased performance. To address this limitation, we propose a model-independent binary mask-guided separation training strategy for DNNs, named . Our method introduces a separation training loss based on a latent binary mask to separately constrain the background and anomalies in the estimated image. The background is preserved, while the potential anomalies are suppressed by using an efficient second-order Laplacian of Gaussian (LoG) operator, generating a pure background estimate. In order to maintain separability during training, we periodically update the mask using a robust proportion threshold estimated before the training. In our experiments, We adopt a vanilla AE as the network to validate our training strategy on several real-world datasets. Our results show superior performance compared to some state-of-the-art methods. Specifically, we achieved a 90.67% AUC score on the HyMap Cooke City dataset. Additionally, we applied our training strategy to other deep network structures, achieving improved detection performance compared to their original versions, demonstrating its effective transferability. The code of our method will be available at https://github.com/enter-i-username/BiGSeT. Hyperspectral anomaly detection (HAD), deep neural network (DNN), separation training, background reconstruction, anomaly suppression. 
§ INTRODUCTION In recent years, the research on hyperspectral imagery (HSI) has been gaining considerable attention in remote sensing due to its abundant spectral information for distinguishing materials <cit.>. The advantage of the high spectral resolution makes it possible to apply in many fields like land cover classification <cit.>, spectral unmixing <cit.> and anomaly detection <cit.>. Among them, hyperspectral anomaly detection (HAD) is becoming one of the hotspots in remote sensing because of its essentiality for many applications <cit.> and also because of the challenging nature of the problem. The HAD task aims to identify an extremely small number of pixels that differ significantly in their spectral signature from the surrounding or overall background pixels, without relying on any prior information <cit.>. The following three characteristics mainly cause the detection task to be difficult: 1) the absence of spectral information in advance for the desired target pixels; 2) the imbalance of samples between the anomalies and background in HSI; and 3) the complex diversity of the background, which leads to some pixels being erroneously recognized as abnormal. Due to the challenges posed by 1) and 2), existing methods primarily concentrate on unsupervised background modeling, which exposes abnormality by measuring the deviation degree of pixels from the background patterns directly learned from the HSI. They can be divided into three branches, including two traditional categories of statistical-based and representation-based methods, as well as deep learning-based methods. One of the early attempts is the statistical-based algorithm Reed-Xiaoli (RX) <cit.>. This method models the background from a probability distribution perspective, assuming that the majority of background pixels follow a multivariate Gaussian distribution. It calculates the covariance and mean of the HSI as the background model, and measures the differences between each pixel and the model using the Mahalanobis distance. Following the RX, various methods have been proposed to improve the statistical background modeling. To utilize more local information, the local RX (L-RX) <cit.> builds the background model in a window sliding manner where the window is centered around each pixel. The weighted RX (W-RX) <cit.> provides a better background estimation by developing a weight assignment strategy for each pixel. Moreover, some methods <cit.> adopt the kernel tricks to fit more complex non-Gaussian distributions. However, in real-world hyperspectral scenes, ideal distributions of background are not always satisfied. Some other efforts have been dedicated to the representation-based methods. They hold a weak assumption that in a local or global homogeneous scene, background pixels can be easily represented by other pixels in the HSI, while the anomalies cannot <cit.>. Thus, a background model is first learned to represent each pixel, and the residual errors between the representation and the original HSI are used to measure the abnormality. Based on this assumption, a variety of representation-based approaches have been proposed. The collaborative representation-based detector (CRD) <cit.> regards each background pixel as a linear combination of their neighboring pixels. To explore the global property of the background, the low-rank representation is introduced in <cit.>. Furthermore, the sum-to-one and non-negativity constraints are utilized in the representation to improve the physical interpretability <cit.>. 
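For reference, the global RX baseline recalled above amounts to only a few lines: estimate the background mean and covariance from all pixels and score each pixel by its squared Mahalanobis distance. The sketch below is a generic illustration with our own naming, not code from any of the cited works.

```python
import numpy as np

def rx_detector(hsi):
    """Global RX anomaly scores for an HSI of shape (H, W, L)."""
    H, W, L = hsi.shape
    X = hsi.reshape(-1, L).astype(np.float64)        # pixels as rows
    mu = X.mean(axis=0)
    Xc = X - mu
    cov = (Xc.T @ Xc) / (X.shape[0] - 1)
    cov_inv = np.linalg.pinv(cov)                    # pinv guards near-singular covariance
    scores = np.einsum('ij,jk,ik->i', Xc, cov_inv, Xc)  # squared Mahalanobis distance
    return scores.reshape(H, W)
```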
In addition to the background component, the potential anomalies are also represented by other pixels to enhance the model <cit.>. Although this group of methods can achieve high detection performance in monotonous and simple scenes, the representation may fail when the background contains more complex diversity, which limits their practical applications. Recently, the successful application of deep learning-based methods in remote sensing <cit.> has also shown powerful advantages in HAD <cit.>. One of the popular methods among them is the autoencoder (AE), which can learn nonlinear and high-level features of an HSI in an unsupervised manner to handle complex background modeling. Generally, an AE is trained as a background reconstructor for a given HSI, and the anomalousness is measured using reconstruction errors. Early works appeared on sparse AEs <cit.>, where researchers imposed constraints on the sparsity of AEs. To exploit more local spatial information, additional regularizations were incorporated. Lu et al. <cit.> utilized the embedding manifold of AEs to reflect the intrinsic structure of the HSI. Fan et al. <cit.> proposed the graph regularization based on superpixel segmentation to preserve the local spatial consistency of the HSI. In addition to the simple AE architecture, other deep neural networks (DNNs) have also been extensively studied due to their ability to automatically extract deeper and more abstract features from the input data <cit.>. Although reconstruction DNNs can handle the complex background modeling, a more difficult issue arises, the identical mapping problem (IMP), also defined as the “identical shortcut” problem <cit.>. We expect a DNN to serve as a pure background reconstructor of the input HSI. However, with the progress of training, it inevitably involves the anomalies in the reconstructed image. This is because models favor learning all the information from input, including both background and anomalies simultaneously. As a consequence, the learned background reconstructor would obtain not only the background but also the anomalies together. Therefore, the background and anomaly pixels cannot be separated by the resulting reconstruction errors, leading to decreased performance. To clearly illustrate this phenomenon, we give a description of the IMP in Fig. <ref>. Note that this problem is widely prevalent in DNN structures such as AE, CNN and ViT, affecting the detection performance and hindering further research in DNN-based HAD. Some semi-supervised methods provide a perspective from dataset selection to address this problem <cit.>. They choose pure background pixels as the training dataset to reconstruct the original HSI, desiring more intensive separability of the reconstruction errors. However, if the training dataset is not properly cleansed and instead contains anomalies, models lack countermeasures to eliminate these bad samples during training. An alternative approach emphasizes model training <cit.>, instead of dataset purification. Specifically, during the training stage, anomalies are continuously prevented from being reconstructed, and this process is thus referred to as “anomaly suppression”. Until now, this category of methods has not been extensively studied yet. Also, they are currently only discussed on specific networks, which limits their potential applications in other DNNs. Therefore, the research on a general framework that can effectively suppress anomalous targets during training is of crucial importance and urgently needed. 
To address the IMP in DNN-based HAD, we propose a general binary mask-guided separation training framework in this article, named . In the process of model training, we explicitly separate the background and anomalies instead of treating them equally, which is the main cause of the IMP. To achieve this separation, we propose the utilization of a latent binary mask matrix that identifies potential anomalous and background pixels, thereby guiding the training process towards optimal performance. Based on this mask, we propose a separation training loss function that reconstructs the background while suppressing anomalies, thus facilitating the learning of pure backgrounds. Furthermore, for efficient suppression of anomalies, we employ the Laplacian of Gaussian (LoG) regularization, a second-order operator that mitigates the significant spatial variations of the anomalies during training. As we lack prior knowledge of anomalies, we generate the mask and regularly update it to ensure separability throughout the training process by binarizing reconstruction errors. To obtain a robust mask, we employ the statistics of the HSI to estimate the proportion threshold for the mask through a distribution-adjusted unimodal thresholding algorithm prior to training. Our  method is model-independent, making it applicable to various network structures. We conduct experiments to validate this advantage of our method, opening up new perspectives for its application in DNN-based HAD. The main contributions are listed as follows. * We propose a general separation training framework, , for DNN-based HAD to prevent the learning of identical mappings.  is based on a periodically updated binary mask matrix, and it separately considers the background and potential anomalies, preserving the former while suppressing the latter. * We propose an efficient regularization for anomaly suppression using the LoG operator, which estimates a pure background by alleviating large spatial variations of the anomalies. * We utilize the statistics of the HSI to estimate the proportion threshold through the distribution-adjusted unimodal thresholding algorithm to yield a more robust mask. * Our BiGSeT method outperforms other state-of-the-art HAD methods on the ABU benchmark dataset, demonstrating its adaptability across various scenes. Additionally, BiGSeT obtains an AUC score of 90.67% on the large HyMap Cooke City dataset, which features a more complex background. § RELATED WORK In this section, we will discuss DNN-based methods for HAD. Firstly, we will provide a brief introduction to some deep neural networks. Then, we will focus on some solutions to the IMP. §.§ Deep Neural Networks for HAD The most widely used networks are the convolutional neural networks (CNNs). Hosseiny et al. <cit.> used 1-D and 2-D stacked CNNs for extraction of deep and nonlinear relations. Cao et al. <cit.> built up their network by cascading a low-rank module and a multiscale module. Wang et al. <cit.> achieved autonomous detection using a fully convolutional AE with skip connections. Wang et al. <cit.> proposed a two-stream network that integrates local spatial-spectral information. The upstream network focuses on extraction of spatial features while the downstream network learns the distribution of the background, and the final detection map is integrated from the two streams. Very recently, Xiao et al. <cit.> for the first time introduced the vision transformer (ViT) structure into the field of HAD and found its effectiveness in detection tasks. 
§.§ Background Purification before Training The basic principle of the background purification methods is to remove potential anomalies and select pure background pixels as training samples in an unsupervised manner before training the network. An adversarial learning framework proposed in <cit.> utilized the density-based spatial clustering of applications with noise (DBSCAN) <cit.> after reducing the dimensionality of the original HSI, to reject low-density anomalies and noise. Based on the DBSCAN algorithm, Li et al. <cit.> searched their background data. They fed the data into a sparse coding-inspired generative adversarial network (GAN), and trained the network in an end-to-end manner. A simpler method was proposed in <cit.>, which assumes that the anomalies are larger on the Mahalanobis distance space, and the largest part is removed from the training samples. In addition to pixel-level background extraction, a superpixel-level method was also proposed in <cit.> to exploit more spatial information. The pixels were first segmented and clustered, and then the rare clusters were recognized and removed using connected domain searching. This category of methods performs background purification and network training independently, which may still pose a risk of incomplete removal of anomalies, potentially allowing DNNs to learn from remnants of anomalies in the training stage. §.§ Anomaly Suppression during Training The other branch usually takes all pixels in an image as training samples and employs anomaly suppression strategies to reduce the impact of anomalies during the training stage. In <cit.>, the l_2, 1-norm is utilized, from the perspective of gradients, to mitigate the sensitivity of the model to abnormal targets and noise. Recently, in <cit.> and <cit.>, two adaptive-weighted (AW) strategies along with their deep neural networks were proposed to suppress the anomalies during training, respectively. These AW methods generate a converse weight map calculated based on the reconstruction errors, which is incorporated into the loss function during the training process to suppress potential anomalies. Another method <cit.> introduced a guided module embedded in the AE network, which directs the training process towards pure background reconstruction. Although these methods allow for continuous adjustment of the anomaly suppression during training, they still have some limitations. For instance, the l_2, 1-norm method does not incorporate direct spatial information for DNNs. The AW methods may only weakly suppress anomalies using a soft weight map, which could potentially result in inadequate separation of anomalies from the background during the training stage. Moreover, they are only tested on specific networks, and the transferability to other networks remains to be explored. § PROPOSED METHOD §.§ Observation and Motivation As shown in Fig. <ref> (b), DNNs get decreased performance due to the IMP as training progresses. This occurs because the signatures of the background and anomalies are not explicitly distinguished during training, but each element of the reconstructed image is treated equally <cit.>. As a result, the identical mapping is inevitably learned, even if the patterns of anomalies are relatively hard to dig up <cit.>. Consequently, the reconstruction errors become inseparable as they converge to zero at a very similar rate. In comparison, the ideal training process depicted in Fig. 
<ref> (b) exhibits a steadily increasing detection performance that reaches its maximum and remains stable at a plateau throughout the training stage. Since the performance does not drop, the termination of the unsupervised training can be relaxed, and thus higher and more robust results can be obtained. To this end, two conditions need to be satisfied: 1) we separately consider the anomalies and background in the estimated background image, suppressing the anomaly part while reconstructing the background part; 2) the separation of the two parts reaches a dynamic equilibrium such that the detection performance can be maintained at a high level. Notably, the separation itself is independent of the model's form, which allows us to apply this separation strategy to any reconstructor. Therefore, we design a model-independent training framework to help DNNs overcome the IMP, based on these two conditions. §.§ Overview of the Proposed Method Given a raw HSI X∈ℝ^H × W × L, where H, W and L respectively denote the height, width and spectral bands, our goal is to generate a background image X̂∈ℝ^H × W × L, and the reconstruction error matrix R∈ℝ_+^H × W is obtained by R_i,j = ‖X̂_i,j,: - X_i,j,:‖_2^2, where R_i,j represents the non-negative scalar error at position (i, j), and X̂_i,j,: and X_i,j,: are the corresponding spectral vectors. We introduce a binary mask M, where ones indicate the potential anomalies and zeros indicate the background, into our training to separately constrain the anomalies and background, and periodically update the mask to maintain the separability. The flowchart of the proposed method is shown in Fig. <ref>. The whole training consists of multiple iterations. In each iteration, 1) we first train the network and obtain R (the network training phase), 2) subsequently update M by binarizing R (the mask updating phase), and then 3) pass it down to the next iteration. In the network training phase, the estimated image X̂ is divided into two mutually exclusive parts by M, i.e., the anomaly part and the background part. The total training loss is given below to separately train the two parts: ℒ_SeT = ℒ_BR + λ·ℒ_AS, where the loss function ℒ_BR reconstructs the background part and ℒ_AS suppresses the anomaly part; λ is a tradeoff hyperparameter. In the mask updating phase, a new mask is obtained to replace the one from the last iteration by binarizing R via a proportion threshold, which is estimated by the unimodal thresholding algorithm <cit.> as an initial parameter before training. §.§ Separation Training To address the IMP, we introduce a binary mask M which explicitly distinguishes the potential anomalies from the background. The background part needs to be reconstructed during training as closely as possible to that of the input image, whereas the anomalies need to be suppressed. We herein give the form of the separation training loss function: ℒ_BR = (1/S(M̄)) ‖ (X̂ - X) ⊙ M̄ ‖_F^2, ℒ_AS = (1/(S(M) + ϵ)) ℛ(M, X̂), where ⊙ represents the element-wise multiplication over spatial dimensions and M̄ is the Boolean NOT of M; S(·) denotes the number of ones (we have S(M̄) = H × W and S(M) = 0 if M = 0, and a very small constant ϵ is added to avoid division by zero); ℛ(M, X̂) indicates the regularization for suppressing the anomalies selected by M in the estimated image X̂. Equation (<ref>) describes the approximation of the estimated image to the original image in the background part.
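For concreteness, a minimal PyTorch-style sketch of this separation loss could look as follows; the tensor names, shapes and the generic regularizer callable are assumptions for illustration only, and the LoG-based form of ℛ is specified in the next subsection.

import torch

def separation_loss(X_hat, X, M, lam, regularizer, eps=1e-8):
    # X_hat, X: (H, W, L) estimated and input HSIs.
    # M: (H, W) binary mask, 1 = potential anomaly, 0 = background.
    # regularizer: callable implementing R(M, X_hat) for anomaly suppression.
    M_bg = 1.0 - M                                   # Boolean NOT of M
    # L_BR: reconstruction error restricted to the background pixels
    residual = (X_hat - X) * M_bg.unsqueeze(-1)      # broadcast mask over bands
    L_BR = (residual ** 2).sum() / M_bg.sum()
    # L_AS: anomaly suppression on the masked pixels; eps avoids division by zero
    L_AS = regularizer(M, X_hat) / (M.sum() + eps)
    return L_BR + lam * L_AS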
However, we do not expect to reconstruct the anomaly part, but rather utilize the distribution of the background to predict it. Therefore, the regularization term ℛ(M, X̂) needs to learn the patterns of X̂ ⊙ M̄, and simultaneously generate background-like spectra at the positions marked by M. Anomaly Suppression with LoG Operator. To suppress the anomalies, or equivalently learn a pure background, we remove the anomalies selected by M and fill in the blanks using the information from their neighboring background pixels. In this article, we adopt the well-known edge detection operator, the Laplacian of Gaussian (LoG), to estimate the blanks. By constraining the LoG of a whole blank region to be zero, there will not be large variations like edges in the region, achieving a smooth and natural transition from the boundary to the interior, as shown in Fig. <ref>. Thus, the region is completely estimated from the pure background distribution and no anomaly remains. For more efficient calculations, we take a 5×5 LoG template L to convolve with the image X̂ for each band: L = [ -2 -4 -4 -4 -2; -4 0 8 0 -4; -4 8 24 8 -4; -4 0 8 0 -4; -2 -4 -4 -4 -2 ], LoG(X̂) = L∗X̂, where ∗ is the 2D convolution operator. Before the LoG operation, we use a reflection padding of 2 to keep the spatial sizes consistent. The regularization for anomaly suppression can then be expressed as follows: ℛ(M, X̂) = ‖ LoG(X̂) ⊙M‖_F^2 . §.§ Mask Updating If the separation mask accurately reveals where the anomalous pixels are located, the model will never fall into the IMP, because the anomalies can be completely removed. However, a question arises: how do we obtain the mask if no prior knowledge is available? Since the mask is an unobservable variable, we can only estimate it from the HSI itself. Suppose the detection is divided into two stages. The model is pre-trained and coarsely searches for the potential anomalies in the first stage, and is fine-tuned to get a more precise result in the next stage. Specifically, let M be 0, which means there is no prior location knowledge for the anomalies, and we train the model with only the background reconstruction loss ℒ_BR for a relatively small number of epochs (e.g., 150 epochs in this paper). We calculate the reconstruction error map R as a coarse detection result, where larger values are more likely to be anomalies. Then the values in R are sorted in ascending order and listed in a vector r̃ = [r̃_1, r̃_2, …, r̃_HW]^T ∈ℝ_+^HW. We consider the first τ· HW values in r̃ as the background part, where τ∈ (0, 1] is a proportion threshold. M is therefore updated for the next detection stage by M_i,j = 1 if R_i,j > r̃_ceil(τ· HW) and M_i,j = 0 otherwise, where the index ceil(τ· HW) denotes the minimum integer no less than τ· HW. However, it must be noted that M from the first coarse search may not accurately locate the anomalies to be suppressed. If we perform long-term training based only on this mask, the two parts will not converge to a well-separated state. Therefore, M needs to be periodically updated throughout the training process to avoid the model falling into local minima. We extend the two-stage detection to a total of K iterations. In the mth iteration, the model is first trained with M^(m-1) from the last iteration, and then the mask is updated to M^(m) and passed down to the next iteration (we define M^(0) = 0).
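The LoG-based regularizer and the proportion-threshold mask update described above can be sketched as follows; this is a hedged PyTorch illustration in which the 5×5 template, the reflection padding of 2, and the ceil(τ·HW) rule follow the equations above, while the function and variable names are assumptions.

import torch
import torch.nn.functional as F

LOG_TEMPLATE = torch.tensor([[-2., -4., -4., -4., -2.],
                             [-4.,  0.,  8.,  0., -4.],
                             [-4.,  8., 24.,  8., -4.],
                             [-4.,  0.,  8.,  0., -4.],
                             [-2., -4., -4., -4., -2.]])

def log_regularizer(M, X_hat):
    # X_hat: (H, W, L). Convolve every band with the 5x5 LoG template using
    # reflection padding of 2, then penalize large second-order variations
    # only at the anomaly positions marked by M.
    x = X_hat.permute(2, 0, 1).unsqueeze(1)            # (L, 1, H, W), bands as batch
    x = F.pad(x, (2, 2, 2, 2), mode='reflect')
    kernel = LOG_TEMPLATE.to(X_hat).view(1, 1, 5, 5)
    log_x = F.conv2d(x, kernel).squeeze(1).permute(1, 2, 0)   # back to (H, W, L)
    return ((log_x * M.unsqueeze(-1)) ** 2).sum()

def update_mask(R, tau):
    # Binarize the reconstruction-error map R (H, W): the smallest tau*HW errors
    # are treated as background (zeros), the rest as potential anomalies (ones).
    r_sorted, _ = torch.sort(R.flatten())              # ascending order
    idx = int(torch.ceil(torch.tensor(tau * R.numel())).item()) - 1
    threshold = r_sorted[idx]
    return (R > threshold).to(R.dtype)

In the notation of the previous subsection, log_regularizer would be passed to separation_loss as the regularizer argument.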
To further purify the background reconstruction in the mth stage, we feed the network with the selected background part X ⊙ M̄^(m-1) instead of the whole image X, so that the impact of potential anomalies can be reduced. Fig. <ref> visually demonstrates the training process with the periodic mask updating strategy. According to the definition of M, all the points above the threshold line are marked by M as potential anomalies. Due to the constraint of the mask updating strategy, the true anomalies eventually reach an equilibrium around the threshold τ and never sink to the bottom; the separability is therefore maintained. Threshold Estimation. Another essential step is to obtain the proportion threshold τ as an initial parameter to update the mask. This paper utilizes the global statistical information of the HSI to estimate an appropriate value of τ before training. To give a clearer explanation, we illustrate the thresholding process in Fig. <ref>. The Mahalanobis distance is first calculated to project the high-dimensional HSI onto a 1D relative distance space d ∈ [0, 1]. The majority of the points concentrated around the center are the background pixels, while those far from the center are considered outliers. The point density decreases with increasing distance, hence the distribution of d displays an irregular unimodality, which is shown as the dashed curve on the right side of Fig. <ref>. To handle unimodal thresholding, we employ a simple algorithm <cit.> that identifies a corner point where the dominant population (background samples) changes to the minority (anomalous samples). Before we use the unimodal thresholding algorithm to separate the two parts, the distribution is adjusted by the gamma transformation d' = d^γ, γ≥ 1, into the solid curve to make the peak denser in the region close to 0, such that the density separation is more robust. It is worth noting that we do not directly use the threshold estimated by <cit.> but the corresponding proportion threshold τ, which is simply obtained by calculating the area on the left side of the separation point (green part in Fig. <ref>). This is because the proportion does not depend on the distribution derived from any particular transformation, but rather reflects the inherent separation characteristics of the HSI itself. The separation training strategy prevents the anomalies from being reconstructed, so a high detection performance can be maintained without degradation. The complete training algorithm is summarized in Algorithm <ref>. § EXPERIMENTS In this section, we conduct a series of experiments to demonstrate the superiority of our proposed method. All experiments have been carried out on a PC with an Intel Core i7-11800H CPU at 2.30 GHz with 16 GB RAM, and an NVIDIA 3070 GPU with 8 GB of memory. The algorithms were implemented and run in Python 3.8 with PyTorch 1.11 or in MATLAB 2021b. §.§ Experimental settings Datasets. We evaluate the detection performance of our methods using the Airport-Beach-Urban (ABU) benchmark dataset series[Available at http://xudongkang.weebly.com/], which comprises a total of 13 images, including 4 airport scenes, 4 beach scenes, and 5 urban scenes. Most of the datasets were captured using the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor, with the exception of the fourth image in the beach series (named Beach IV), which was acquired using the Reflective Optics System Imaging Spectrometer (ROSIS-03) sensor in Pavia.
The scenes are mostly 100 × 100 in spatial dimensions, except for Beach I and Beach IV, which are 150 × 150. The location labels for the target objects were created using the Environment for Visualizing Images (ENVI) software tool. Another dataset used for evaluation is the HyMap Cooke City[Available at http://dirsapps.cis.rit.edu/blindtest/], which was collected by the HyMap sensor over the area surrounding Cooke City, Montana, USA. This image has larger dimensions of 280 × 800 and a more complex background than the ABU datasets. The objects of interest in this scene are small fabric panels and civilian vehicles. The pseudocolor images and corresponding ground truth maps of the above datasets are displayed in Fig. <ref>. We summarize the characteristics of these datasets, including the capture location, spatial resolution, spectral bands, and target pixels, in Table <ref> for easy reference. Metrics. In our experiments, the receiver operating characteristic (ROC) curve is used to evaluate the detection performance. Meanwhile, the area under the curve (AUC) scores of (P_d, P_f) are calculated as a quantitative evaluation. A method with higher detection performance has an ROC curve located near the top-left corner, which results in an AUC value closer to 1. §.§ Performance Comparison §.§.§ Competitive Methods Some competitive methods are selected as benchmarks for performance comparison. They are the statistics-based methods RX <cit.> and W-RX <cit.>, the representation-based methods LRASR <cit.> and PAB-DC <cit.>, and four deep learning-based methods RGAE <cit.>, Auto-AD <cit.>, MSBRNet <cit.>, and S2DWMTrans <cit.>. Among these deep learning methods, RGAE trains a simple AE network with only a single hidden layer; Auto-AD and MSBRNet build up their networks with multiple convolutional layers; S2DWMTrans introduces the ViT structure into the field of HAD for the first time. In this experiment, we only adopt a vanilla AE network that contains one linear layer with 100 hidden units, a Rectified Linear Unit (ReLU) nonlinear function, and an output linear layer. We use the ADAM optimizer <cit.> to perform backpropagation. §.§.§ Detection Results To quantitatively compare the performance, we list the AUC scores obtained by all considered methods on the ABU and HyMap Cooke City datasets in Table <ref>. The best and second-best results for each dataset are highlighted in bold and underlined, respectively. We also include the computational time required by each method in the table to provide an evaluation of its complexity. We run all algorithms on the CPU, and for deep learning-based methods, we also perform an additional test on the GPU. As shown in Table <ref>, our proposed method outperforms all 8 competitive methods on 10 datasets, including all 4 Airport datasets and the Urban I, II, IV, and V datasets, as well as Beach III and HyMap Cooke City. The RX, W-RX, and Auto-AD methods achieve the highest scores on Beach I, II, and IV and Urban III, while our method ranks second. Overall, our method achieves the best average performance across all airport, beach, and urban datasets. Moreover, it surpasses all competitive methods on HyMap Cooke City, which has the largest size and the most complex scene. These results demonstrate the effectiveness of our method for anomaly detection and its adaptability to various scenes. Among all methods run on the CPU, the RX algorithm has the lowest computational time on all datasets, while the S2DWMTrans method is the most time-consuming due to its complex network.
For deep learning-based methods run on the GPU, the Auto-AD algorithm is the fastest, and our method ranks second on average. For visual comparison, the detection maps generated by the 9 methods on 4 analyzed datasets, i.e., Airport IV, Beach IV, Urban II and Urban IV, are shown in Fig. <ref>. Our method yields the clearest detection results among all 9 methods, with the background well removed and most anomalies highlighted. Taking the Airport IV dataset as an example, some methods, including RX, W-RX and RGAE, cannot clearly mark all three airplanes in this scene, while others, including PAB-DC, MSBRNet, and S2DWMTrans, mix obvious background components into their detection maps. In contrast, our method completely detects the objects of interest and keeps the remaining areas cleaner. The ROC curves obtained by all methods on 4 selected datasets, i.e., Airport II, Airport IV, Beach IV and Urban V, are respectively shown in Fig. <ref> (a-d). It can be observed in Fig. <ref> (a) and (b) that the ROC curves of our method (in black) are closer to the top-left corner compared to other methods, indicating higher detection performance. In Fig. <ref> (c), the ROC curve of Auto-AD displays a slightly higher probability of detection than our method when the false alarm rate is between 0 and close to 0.2. In Fig. <ref> (b), our method and RX show similar trends and almost cover the ROC curves of the other methods. §.§ Transferability Discussion The separation training strategy is proposed to address the IMP of deep networks, as discussed in Section III. Because BiGSeT is a model-independent training strategy (the AE structure can be replaced with an arbitrary model whose input and output have the same size, and the training still works), we validate its effectiveness by comparing a network trained with this strategy against its original version. This validation also demonstrates the transferability of the strategy from a simple AE to deeper networks. §.§.§ Experimental Setup The candidate networks are two CNNs (Auto-AD and MSBRNet), the ViT-based S2DWMTrans, and the vanilla AE. In addition to our proposed separation training strategy (denoted as BiGSeT) for anomaly suppression, we also compare two other strategies, i.e., the l_2,1-norm <cit.> (denoted as l_2,1) and the adaptive-weighted training <cit.> (denoted as AW). The training settings for Auto-AD, MSBRNet and S2DWMTrans follow their original papers, and we train the original AE with the Frobenius norm loss for 750 epochs. In the following experiments, we combine the 4 networks with the 4 training methods, resulting in a total of 16 combinations (actually 15, because the original Auto-AD adopts the AW strategy), to compare the performance of the original version with the other 3 training strategies. For a fair comparison, we terminate the training processes at the 750th epoch for all networks except the original versions, and obtain the detection results accordingly. §.§.§ Quantitative Results The detection results of all combinations on the ABU datasets and the HyMap Cooke City dataset are listed in Table <ref>. For a clearer comparison, we also provide the improvement scores of the other three training strategies over the original ones. We can observe that our BiGSeT strategy leads to positive improvements in all cases and achieves the largest improvement on most datasets.
However, the l_2,1 or AW strategies display worse results than the original on some datasets, such as Airport II for the Auto-AD and MSBRNet networks. With the improvement brought by BiGSeT, detection results can even be raised to a very high level. For instance, the combination of S2DWMTrans with BiGSeT raises the AUC score on Airport IV from 0.9523 to 0.9991. To conclude, our proposed BiGSeT strategy can improve the detection performance of the 4 tested networks and outperforms the other two anomaly suppression methods, l_2,1 and AW. §.§.§ Training Process Curves To better explain how these training strategies impact detection performance, we display the training process curves of the 4 networks on three datasets, as shown in Figs. <ref>-<ref>. It can be observed from the 3 figures that the AUC scores of most original networks (in blue) tend to decline or fluctuate as training progresses. This is because the networks start to fit the image to a high degree, and the anomalies and background fail to be distinguished. This phenomenon is more significant for Auto-AD, MSBRNet and S2DWMTrans because their more complex structures overfit more easily. To prevent this, the training strategies try to lift and flatten the curves (green for l_2,1, red for AW, and black for ours; for Auto-AD the original blue curve already adopts AW), by preventing the anomalies from being fitted. Although the l_2,1 and AW methods show improvement in certain cases, such as using S2DWMTrans on the Airport II dataset, they fail to maintain high performance as training progresses. The l_2,1 method lacks the ability to provide positional information of the anomalous targets, resulting in ineffective suppression of these pixels. On the other hand, the AW method employs a soft weight map that may lead to blurry separation and does not ensure a balanced equilibrium for maintaining separability. In contrast, our method separately constrains the background and anomaly parts using a binary mask, explicitly removing anomalies to the greatest extent possible. Additionally, the mask is robustly updated using a proportion threshold, which guarantees that the identified anomalies will not be learned during training. As a result, the performance does not decrease in most cases, and our proposed BiGSeT strategy provides the fastest-converging and most stable results among all these methods. §.§.§ Complexity of Networks We tabulate the parameters and FLOPs of the 4 networks on 3 datasets in Table <ref> in order to compare their depth and complexity. According to Table <ref>, Auto-AD has the largest parameter size and S2DWMTrans is the network with the highest computational cost, while the AE is the simplest structure. §.§ Hyperparameter Analysis In this subsection, we explore how the hyperparameters γ and λ affect the model. As an example, we conduct the parameter analysis on the Airport IV dataset. All the experiments are conducted with the vanilla AE structure introduced before. The parameter γ controls the degree of convexity of the gamma transformation curves, as shown in Fig. <ref> (a). The larger γ is, the more strongly the d values are mapped to smaller and narrower ranges. We test different γ values chosen from {1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0} to show their impact on the proportion threshold estimation, as shown in Fig. <ref> (b).
The histogram of relative distance exhibits a unimodal characteristic, while its corresponding cumulative histogram initially shows a steep rise and then levels off, as the proportion of anomalous targets is relatively low. The ground truth proportion threshold is located near the corner. By adjusting the gamma transformation, the proportion threshold estimation is closer to the ground truth when the γ values are set to 1.5 and 2.0. The parameter λ balances the effects of background reconstruction and anomaly suppression. To get the best result and better understand the two parameters, we investigate the joint impact of γ and λ on the AUC score. We choose their values respectively from the candidate pools {1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0} and {1e-5, 1e-4, 1e-3, 1e-2, 1e-1, 1}. An AUC surface with respect to (γ, -lg λ) is plotted in Fig. <ref>. It can be observed that if we fix λ (e.g., -lg λ = 2), the performance tends to climb and then remain stable as γ increases. The larger the value of -lg λ, the flatter the trend of the variation. Similarly, if we set γ to a fixed value, the AUC score increases and reaches a plateau as -lg λ gets larger. The best result is marked with a black triangle in Fig. <ref>, and the corresponding parameter pair (γ, λ) reads (2.0, 1e-4). § CONCLUSION We focus on the deep learning-based HAD task, and address the issue that reconstruction DNNs may learn an identical mapping between the input and output, which can lead to decreased detection performance. The main reason for this issue is that traditional training treats the anomalies and background equally in a given HSI, making them hard to distinguish. In this paper, we propose a model-independent training strategy for DNNs that explicitly separates anomalies and background. A separation loss function based on a latent binary mask is proposed to reconstruct the background while suppressing anomalies. The efficient second-order LoG operator is used for anomaly suppression by alleviating large spatial variations. To maintain a high detection performance during training, the mask is periodically updated via an estimated proportion threshold. Experiments on benchmark datasets demonstrate the superiority of our method in HAD performance. Furthermore, the transferability of our training strategy is validated by applying it to several deep networks and achieving improved detection performance compared to the original networks. ref_dis_material_1 K. Li, Q. Ling, Y. Qin, Y. Wang, Y. Cai, Z. Lin, and W. An, “Spectral-spatial deep support vector data description for hyperspectral anomaly detection,” IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1–16, 2022. ref_dis_material_2 J. Nascimento and J. Dias, “Vertex component analysis: a fast algorithm to unmix hyperspectral data,” IEEE Transactions on Geoscience and Remote Sensing, vol. 43, no. 4, pp. 898–910, 2005. ref_classification_1 Q. Liu, L. Xiao, J. Yang, and Z. Wei, “Cnn-enhanced graph convolutional network with pixel- and superpixel-level feature fusion for hyperspectral image classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 59, no. 10, pp. 8657–8671, 2021. ref_classification_2 Y. Dong, Q. Liu, B. Du, and L. Zhang, “Weighted feature fusion of convolutional neural network and graph attention network for hyperspectral image classification,” IEEE Transactions on Image Processing, vol. 31, pp. 1559–1572, 2022. ref_spectral_unmixing_1 N. Keshava and J.
Mustard, “Spectral unmixing,” IEEE Signal Processing Magazine, vol. 19, no. 1, pp. 44–57, 2002. ref_spectral_unmixing_2 J. M. Bioucas-Dias, A. Plaza, N. Dobigeon, M. Parente, Q. Du, P. Gader, and J. Chanussot, “Hyperspectral unmixing overview: Geometrical, statistical, and sparse regression-based approaches,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 5, no. 2, pp. 354–379, 2012. ref_spectral_unmixing_3 X. Shen, H. Liu, J. Qin, F. Ge, and X. Zhou, “Toward weak signal analysis in hyperspectral data: An efficient unmixing perspective,” IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1–14, 2022. ref_anomaly_detection_1 Y. Li, Y. Shi, K. Wang, B. Xi, J. Li, and P. Gamba, “Target detection with unconstrained linear mixture model and hierarchical denoising autoencoder in hyperspectral imagery,” IEEE Transactions on Image Processing, vol. 31, pp. 1418–1432, 2022. ref_anomaly_detection_2 B. Du and L. Zhang, “Random-selection-based anomaly detector for hyperspectral imagery,” IEEE Transactions on Geoscience and Remote Sensing, vol. 49, no. 5, pp. 1578–1589, 2011. ref_anomaly_detection_3 M. Vafadar and H. Ghassemian, “Hyperspectral anomaly detection using combined similarity criteria,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 11, no. 11, pp. 4076–4085, 2018. ref_anomaly_detection_4 Y. Qu, W. Wang, R. Guo, B. Ayhan, C. Kwan, S. Vance, and H. Qi, “Hyperspectral anomaly detection through spectral unmixing and dictionary-based low-rank decomposition,” IEEE Transactions on Geoscience and Remote Sensing, vol. 56, no. 8, pp. 4391–4405, 2018. ref_anomaly_detection_5 X. Shen, H. Liu, J. Nie, and X. Zhou, “Matrix factorization with framelet and saliency priors for hyperspectral anomaly detection,” IEEE Transactions on Geoscience and Remote Sensing, vol. 61, pp. 1–13, 2023. ref_had_application_1 T. Adão, J. Hruška, L. Pádua, J. Bessa, E. Peres, R. Morais, and J. Sousa, “Hyperspectral imaging: A review on uav-based sensors, data processing and applications for agriculture and forestry,” Remote Sensing, vol. 2017, p. 1110, 10 2017. ref_had_application_2 T. A. Carrino, A. P. Crósta, C. L. B. Toledo, and A. M. Silva, “Hyperspectral remote sensing applied to mineral exploration in southern peru: A multiple data integration approach in the chapi chiara gold prospect,” International Journal of Applied Earth Observation and Geoinformation, vol. 64, pp. 287–300, 2018. [Online]. Available: <https://www.sciencedirect.com/science/article/pii/S0303243417301071> ref_had_application_3 M. Xu, H. Liu, R. Beck, J. Lekki, B. Yang, S. Shu, Y. Liu, T. Benko, R. Anderson, R. Tokars, R. Johansen, E. Emery, and M. Reif, “Regionally and locally adaptive models for retrieving chlorophyll-a concentration in inland waters from remotely sensed multispectral and hyperspectral imagery,” IEEE Transactions on Geoscience and Remote Sensing, vol. 57, no. 7, pp. 4758–4774, 2019. ref_no_prior_knowledge_1 S. Matteoli, M. Diani, and G. Corsini, “A tutorial overview of anomaly detection in hyperspectral images,” IEEE Aerospace and Electronic Systems Magazine, vol. 25, no. 7, pp. 5–28, 2010. ref_no_prior_knowledge_2 H. Su, Z. Wu, H. Zhang, and Q. Du, “Hyperspectral anomaly detection: A survey,” IEEE Geoscience and Remote Sensing Magazine, vol. 10, no. 1, pp. 64–90, 2022. ref_rx I. Reed and X. 
Yu, “Adaptive multiple-band cfar detection of an optical pattern with unknown spectral distribution,” IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 38, no. 10, pp. 1760–1770, 1990. ref_local_rx J. M. Molero, E. M. Garzón, I. García, and A. Plaza, “Analysis and optimizations of global and local versions of the rx algorithm for anomaly detection in hyperspectral data,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 6, no. 2, pp. 801–814, 2013. ref_weighted_rx Q. Guo, B. Zhang, Q. Ran, L. Gao, J. Li, and A. Plaza, “Weighted-rxd and linear filter-based rxd: Improving background statistics estimation for anomaly detection in hyperspectral imagery,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 7, no. 6, pp. 2351–2366, 2014. ref_kernel_1 H. Kwon and N. Nasrabadi, “Kernel rx-algorithm: a nonlinear anomaly detector for hyperspectral imagery,” IEEE Transactions on Geoscience and Remote Sensing, vol. 43, no. 2, pp. 388–397, 2005. ref_kernel_2 A. Banerjee, P. Burlina, and C. Diehl, “A support vector method for anomaly detection in hyperspectral imagery,” IEEE Transactions on Geoscience and Remote Sensing, vol. 44, no. 8, pp. 2282–2291, 2006. ref_kernel_3 J. Zhou, C. Kwan, B. Ayhan, and M. T. Eismann, “A novel cluster kernel rx algorithm for anomaly and change detection using hyperspectral images,” IEEE Transactions on Geoscience and Remote Sensing, vol. 54, no. 11, pp. 6497–6504, 2016. ref_representation_1 J. Li, H. Zhang, L. Zhang, and L. Ma, “Hyperspectral anomaly detection by the use of background joint sparse representation,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 8, no. 6, pp. 2523–2533, 2015. ref_representation_2 W. Li, Q. Du, and B. Zhang, “Combined sparse and collaborative representation for hyperspectral target detection,” Pattern Recognition, vol. 48, no. 12, pp. 3904–3916, 2015. [Online]. Available: <https://www.sciencedirect.com/science/article/pii/S0031320315002034> ref_representation_3 T. Cheng and B. Wang, “Graph and total variation regularized low-rank representation for hyperspectral anomaly detection,” IEEE Transactions on Geoscience and Remote Sensing, vol. 58, no. 1, pp. 391–406, 2020. ref_crd W. Li and Q. Du, “Collaborative representation for hyperspectral anomaly detection,” IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 3, pp. 1463–1474, 2015. ref_lrasr Y. Xu, Z. Wu, J. Li, A. Plaza, and Z. Wei, “Anomaly detection in hyperspectral images based on low-rank and sparse representation,” IEEE Transactions on Geoscience and Remote Sensing, vol. 54, no. 4, pp. 1990–2000, 2016. ref_more_constraints Q. Ling, Y. Guo, Z. Lin, and W. An, “A constrained sparse representation model for hyperspectral anomaly detection,” IEEE Transactions on Geoscience and Remote Sensing, vol. 57, no. 4, pp. 2358–2371, 2019. ref_pab_dc N. Huyan, X. Zhang, H. Zhou, and L. Jiao, “Hyperspectral anomaly detection via background and potential anomaly dictionaries construction,” IEEE Transactions on Geoscience and Remote Sensing, vol. 57, no. 4, pp. 2263–2276, 2019. ref_dl_in_remote_sensing_1 S. Li, W. Song, L. Fang, Y. Chen, P. Ghamisi, and J. A. Benediktsson, “Deep learning for hyperspectral image classification: An overview,” IEEE Transactions on Geoscience and Remote Sensing, vol. 57, no. 9, pp. 6690–6709, 2019. ref_dl_in_remote_sensing_2 H. Zhang, H. Chen, G. Yang, and L. 
Zhang, “Lr-net: Low-rank spatial-spectral network for hyperspectral image denoising,” IEEE Transactions on Image Processing, vol. 30, pp. 8743–8758, 2021. ref_dl_in_remote_sensing_3 R. A. Borsoi, T. Imbiriba, and P. Closas, “Dynamical hyperspectral unmixing with variational recurrent neural networks,” IEEE Transactions on Image Processing, vol. 32, pp. 2279–2294, 2023. ref_dl_in_had_1 S. Song, H. Zhou, Y. Yang, and J. Song, “Hyperspectral anomaly detection via convolutional neural network and low rank with density-based clustering,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 12, no. 9, pp. 3637–3649, 2019. ref_dl_in_had_2 T. Jiang, Y. Li, W. Xie, and Q. Du, “Discriminative reconstruction constrained generative adversarial network for hyperspectral anomaly detection,” IEEE Transactions on Geoscience and Remote Sensing, vol. 58, no. 7, pp. 4666–4679, 2020. ref_dl_in_had_3 X. Hu, C. Xie, Z. Fan, Q. Duan, D. Zhang, L. Jiang, X. Wei, D. Hong, G. Li, X. Zeng, W. Chen, D. Wu, and J. Chanussot, “Hyperspectral anomaly detection using deep learning: A review,” Remote Sensing, vol. 14, p. 1973, 04 2022. ref_sparse_ae_1 E. Bati, A. Çalışkan, A. Koz, and A. Alatan, “Hyperspectral anomaly detection method based on auto-encoder,” 10 2015, p. 96430N. ref_sparse_ae_2 S. Chang, B. Du, and L. Zhang, “A sparse autoencoder based hyperspectral anomaly detection algorihtm using residual of reconstruction error,” in IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium, 2019, pp. 5488–5491. ref_embedding_manifold_ae X. Lu, W. Zhang, and J. Huang, “Exploiting embedding manifold of autoencoders for hyperspectral anomaly detection,” IEEE Transactions on Geoscience and Remote Sensing, vol. PP, pp. 1–11, 11 2019. ref_rgae G. Fan, Y. Ma, X. Mei, F. Fan, J. Huang, and J. Ma, “Hyperspectral anomaly detection with robust graph autoencoders,” IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1–14, 2022. ref_dnn_1 W. Li, G. Wu, and Q. Du, “Transferred deep learning for anomaly detection in hyperspectral imagery,” IEEE Geoscience and Remote Sensing Letters, vol. 14, no. 5, pp. 597–601, 2017. ref_dnn_2 A. R. Rezvanian, M. Imani, and H. Ghassemian, “Patch-based sparse and convolutional autoencoders for anomaly detection in hyperspectral images,” in 2020 28th Iranian Conference on Electrical Engineering (ICEE), 2020, pp. 1–5. ref_dnn_3 Z. Li, Y. Wang, C. Xiao, Q. Ling, Z. Lin, and W. An, “You only train once: Learning a general anomaly enhancement network with random masks for hyperspectral anomaly detection,” IEEE Transactions on Geoscience and Remote Sensing, vol. 61, pp. 1–18, 2023. ref_identical_shortcut Z. You, L. Cui, Y. Shen, K. Yang, X. Lu, Y. Zheng, and X. Le, “A unified model for multi-class anomaly detection,” 2022. [Online]. Available: <https://arxiv.org/abs/2206.03687> ref_adversarial_framework W. Xie, B. Liu, Y. Li, J. Lei, and Q. Du, “Autoencoder and adversarial-learning-based semisupervised background estimation for hyperspectral anomaly detection,” IEEE Transactions on Geoscience and Remote Sensing, vol. 58, no. 8, pp. 5416–5427, 2020. ref_ls3tnet X. Wang, L. Wang, and Q. Wang, “Local spatial–spectral information-integrated semisupervised two-stream network for hyperspectral anomaly detection,” IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1–15, 2022. ref_auto_ad S. Wang, X. Wang, L. Zhang, and Y. 
Zhong, “Auto-ad: Autonomous hyperspectral anomaly detection network based on fully convolutional autoencoder,” IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1–14, 2022. ref_convs B. Hosseiny and R. Shah-hosseini, “A hyperspectral anomaly detection framework based on segmentation and convolutional neural network algorithms,” International Journal of Remote Sensing, vol. 41, pp. 6946–6975, 09 2020. ref_msbrnet W. Cao, H. Zhang, W. He, H. Chen, and E. H. Tat, “Msbrnet: Multi-scale background reconstruction network with low-rank embedding for anomaly detection in hyperspectral images,” in IGARSS 2022 - 2022 IEEE International Geoscience and Remote Sensing Symposium, 2022, pp. 3720–3723. ref_s2dwmtrans S. Xiao, T. Zhang, Z. Xu, J. Qu, S. Hou, and W. Dong, “Anomaly detection of hyperspectral images based on transformer with spatial–spectral dual-window mask,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 16, pp. 1414–1426, 2023. ref_dbscan R. Anant, J. Sunita, A. S. Jalal, and K. Manoj, “A density based algorithm for discovering density varied clusters in large spatial databases,” International Journal of Computer Applications, vol. 3, no. 6, 2010. ref_sparse_had Y. Li, T. Jiang, W. Xie, J. Lei, and Q. Du, “Sparse coding-inspired gan for hyperspectral anomaly detection in weakly supervised learning,” IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1–11, 2021. ref_superpixel_background_extraction K. Li, Q. Ling, Y. Wang, Y. Cai, Y. Qin, Z. Lin, and W. An, “Spectral difference guided graph attention autoencoder for hyperspectral anomaly detection,” IEEE Transactions on Instrumentation and Measurement, vol. 72, pp. 1–17, 2023. ref_gaed P. Xiang, S. Ali, S. K. Jung, and H. Zhou, “Hyperspectral anomaly detection with guided autoencoder,” IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1–18, 2022. ref_unimodal_thresholding P. L. Rosin, “Unimodal thresholding,” Pattern Recognition, vol. 34, no. 11, pp. 2083–2096, 2001. [Online]. Available: <https://www.sciencedirect.com/science/article/pii/S0031320300001369> ref_adam D. Kingma and J. Ba, “Adam: A method for stochastic optimization,” Computer Science, 2014.
http://arxiv.org/abs/2307.04482v1
20230710110437
Nonlinear and nonreciprocal transport effects in untwinned thin films of ferromagnetic Weyl metal SrRuO$_3$
[ "Uddipta Kar", "Elisha Cho-Hao Lu", "Akhilesh Kr. Singh", "P. V. Sreenivasa Reddy", "Youngjoon Han", "Xinwei Li", "Cheng-Tung Cheng", "Song Yang", "Chun-Yen Lin", "I-Chun Cheng", "Chia-Hung Hsu", "D. Hsieh", "Wei-Cheng Lee", "Guang-Yu Guo", "Wei-Li Lee" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall", "cond-mat.mtrl-sci" ]
Institute of Physics, Academia Sinica, Nankang, Taipei 11529, Taiwan Department of Physics, National Taiwan University, Taipei 10617, Taiwan Department of Physics, Applied Physics and Astronomy, Binghamton University, Binghamton, New York 13902, USA Department of Physics, California Institute of Technology, Pasadena, California 91125, USA Scientific Research Division, National Synchrotron Radiation Research Center, Hsinchu 30076, Taiwan Nano Science and Technology, Taiwan International Graduate Program, Academia Sinica and National Taiwan University, Taipei, Taiwan Physics Division, National Center for Theoretical Sciences, Taipei 10617, Taiwan Graduate Institute of Photonics and Optoelectronics, National Taiwan University, Taipei, Taiwan These authors contributed equally to the work. Institute of Physics, Academia Sinica, Nankang, Taipei 11529, Taiwan Nano Science and Technology, Taiwan International Graduate Program, Academia Sinica and National Taiwan University, Taipei, Taiwan These authors contributed equally to the work. Institute of Physics, Academia Sinica, Nankang, Taipei 11529, Taiwan These authors contributed equally to the work. Institute of Physics, Academia Sinica, Nankang, Taipei 11529, Taiwan Department of Physics, National Taiwan University, Taipei 10617, Taiwan Department of Physics, California Institute of Technology, Pasadena, California 91125, USA Department of Physics, California Institute of Technology, Pasadena, California 91125, USA Institute of Physics, Academia Sinica, Nankang, Taipei 11529, Taiwan Scientific Research Division, National Synchrotron Radiation Research Center, Hsinchu 30076, Taiwan Scientific Research Division, National Synchrotron Radiation Research Center, Hsinchu 30076, Taiwan Graduate Institute of Photonics and Optoelectronics, National Taiwan University, Taipei, Taiwan Scientific Research Division, National Synchrotron Radiation Research Center, Hsinchu 30076, Taiwan Department of Physics, California Institute of Technology, Pasadena, California 91125, USA Department of Physics, Applied Physics and Astronomy, Binghamton University, Binghamton, New York 13902, USA Department of Physics, National Taiwan University, Taipei 10617, Taiwan Physics Division, National Center for Theoretical Sciences, Taipei 10617, Taiwan [email protected] Institute of Physics, Academia Sinica, Nankang, Taipei 11529, Taiwan The identification of distinct charge transport features, deriving from nontrivial bulk band and surface states, has been a challenging subject in the field of topological systems. In topological Dirac and Weyl semimetals, nontrivial conical bands with Fermi-arc surface states give rise to negative longitudinal magnetoresistance due to the chiral anomaly effect and unusual thickness-dependent quantum oscillations from the Weyl-orbit effect, which were demonstrated recently in experiments. In this work, we report the experimental observations of large nonlinear and nonreciprocal transport effects for both longitudinal and transverse channels in an untwinned Weyl metal of SrRuO_3 thin film grown on a SrTiO_3 substrate. From rigorous measurements with bias current applied along various directions with respect to the crystalline principal axes, the magnitude of nonlinear Hall signals from the transverse channel exhibits a simple sinα dependence at low temperatures, where α is the angle between the bias current direction and the orthorhombic [001]_o, reaching a maximum when the current is along the orthorhombic [11̄0]_o.
On the contrary, the magnitude of nonlinear and nonreciprocal signals in the longitudinal channel attains a maximum for bias current along [001]_o, and it vanishes for bias current along [11̄0]_o. The observed α-dependent nonlinear and nonreciprocal signals in the longitudinal and transverse channels reveal a magnetic Weyl phase with an effective Berry curvature dipole along [11̄0]_o from surface states, accompanied by 1D chiral edge modes along [001]_o. Nonlinear and nonreciprocal transport effects in untwinned thin films of ferromagnetic Weyl metal SrRuO_3 Wei-Li Lee August 12, 2023 ========================================================================================================= § INTRODUCTION Since the first experimental demonstration of a quantized conductance from counter-propagating edge spin channels in the HgTe quantum well system <cit.>, topological materials have become one of the main research focuses in condensed matter physics and materials science. The two dimensional (2D) quantum spin Hall phase originates from inverted bulk bands that cross near the system's boundary, revealing one-dimensional helical edge states and thus the observed conductance quantization; this is also known as the 2D topological insulator (TI) phase and has recently been reported in several other 2D systems <cit.>. Extending to 3D TIs, the existence of a nontrivial bulk band topology with an intrinsic topological invariant gives rise to unusual gapless Dirac surface states, which were confirmed in experiments using surface-sensitive angle-resolved photoemission spectroscopy and scanning tunneling microscopy <cit.>. More recently, a remarkable advancement was made by the observation of the quantized anomalous Hall conductance at zero magnetic field in a magnetic TI <cit.>; it is a unique transport signature due to the topological nature of the system, which was theoretically predicted long ago <cit.>. In topological Dirac and Weyl semimetals (WSM), nontrivial crossings appear in the bulk bands near the Fermi surface <cit.>, and charge transport is overwhelmed by the unusual chiral charge excitations near nodal points with Berry phase π, showing superior electron mobility due to the suppressed backscattering by the spin-momentum locking effect <cit.> and negative longitudinal magnetoresistance (MR) for aligned electric field and external magnetic field due to the chiral anomaly effect <cit.>. In addition, unique Fermi-arc surface states <cit.> appear on the surface of a WSM, connecting the projected Weyl-node pair, where a number of intriguing novel charge transport features have been predicted theoretically <cit.>. For a ferromagnetic WSM, there can be as few as one Weyl-node pair with opposite chiral charges near the Fermi surface, accompanied by 1D chiral zero edge modes perpendicular to the connecting momentum of the Weyl-node pair. In this work, we report the experimental observations of nonlinear Hall signals <cit.> for T ≤ 10 K in untwinned thin films of the ferromagnetic Weyl metal SrRuO_3 (SRO) grown on a miscut SrTiO_3 (STO) substrate. Rigorous bias-current-dependent measurements of the nonlinear Hall signals reveal an effective Berry curvature dipole (BCD) D⃗ from surface states along the orthorhombic [11̄0]_o, where the subscript o refers to the orthorhombic phase. Surprisingly, a nonlinear and nonreciprocal transport effect in the longitudinal channel (NRTE) was also observed.
It attains a maximum when the bias current is aligned perpendicular to D⃗, but it becomes vanishingly small when the bias current is parallel to D⃗, which can be attributed to the 1D chiral edge modes as demonstrated previously in the quantum anomalous Hall system <cit.>. These results support the intriguing magnetic WSM phase in the SRO/STO system with an effective surface D⃗ along [11̄0]_o accompanied by 1D chiral edge modes along [001]_o that circle around the surface of the SRO thin film. § EXPERIMENTAL SETUP SRO is known as a ferromagnetic and metallic oxide, showing an orthorhombic crystal structure with Pbnm space group symmetry at room temperature <cit.>. In the past, the observed non-monotonic magnetization-dependent anomalous Hall conductivity <cit.>, the unusual temperature-dependent magnon gap <cit.> and the softening of the magnon mode at low temperatures <cit.> all pointed to the existence of Weyl nodes near the Fermi surface, supporting the Weyl metal phase in the SRO system. Recently, the growth of exceptional-quality SRO thin films with an ultra-low ruthenium vacancy level was made possible using an oxide molecular beam system <cit.>. The low residual resistivity at T = 2 K of only about 10 μΩcm for an SRO film with a thickness of about 10 nm <cit.>, which may largely suppress the smearing of the Weyl nodes due to rare region effects <cit.>, makes it possible to explore various charge transport features associated with the Fermi-arc surface states and the Weyl metal phase of SRO in thin film form <cit.>. Figure <ref>(a) shows an optical image of a sunbeam device patterned on an untwinned SRO thin film with a thickness t of about 13.7 nm. By using an STO (001) substrate with a miscut angle of about 0.1 degrees along one of the principal cubic axes, the volume fraction of the dominant domain was determined by high resolution X-ray scattering via the (02±1)_o reflections to be about 95 % <cit.> (see Supplementary Note 1), where the orthorhombic crystalline directions are shown in Fig. <ref>(a). The right panel of Fig. <ref>(a) illustrates one of the Hall bars in the sunbeam device, and α defines the angle between the bias current direction and [001]_o. ρ_L and ρ_T correspond to the longitudinal and transverse resistivity, respectively. With a compressive strain of about -0.4 %, the Curie temperature T_c for the SRO thin film is about 150 K, and the magnetic easy axis is close to the film surface normal of [110]_o <cit.>. Fig. <ref>(b) shows the α-dependent ρ_L and ρ_T values at three different applied field values of 0, -1, and +1 T along [110]_o at T = 2 K. ρ_L attains a maximum value of about 10.4 μΩcm for α = 90° and drops to a minimum value of about 8.1 μΩcm for α = 0° and 180°, exhibiting a clear cos(2α) dependence. On the other hand, ρ_T shows a sin(2α) dependence instead, with a maximum magnitude of about 0.9 μΩcm at α = 45° and 135°. The simulated curves using a resistivity anisotropy model of ρ_L and ρ_T are shown as red curves in Fig. <ref>(b). We note that the amplitude of the anisotropy is significantly larger than the small changes in ρ_L and ρ_T upon reversing the magnetization by changing the field from +1 to -1 T, indicating that the observed resistivity anisotropy in our SRO thin films is not dictated by magnetization-related effects. The upper, middle, and lower panels of Fig. <ref>(c) show the temperature dependence of ρ_L, ρ_T, and the ρ_T/ρ_L ratio, respectively, for different α values ranging from 0° to 180°.
The residual resistivity ratio of ρ_L(300 K)/ρ_L(5 K) varies weakly and equals about 24.0 and 21.4 for α = 0° and 90°, respectively. These results support the nearly single structural domain and thus the untwinned nature of our SRO thin films, and the nearly identical dimensions of the Hall bars at different α values justify the feasibility of investigating anisotropy effects in our SRO thin films. As T decreases, we note that the magnitude of the ρ_T/ρ_L ratio for α = 45° slightly decreases near T_c and then increases again below 100 K, attaining a sizable ratio of ρ_T/ρ_L ≈ -0.085 at T = 2 K without saturation. Now, we turn to the discussion of the anomalous Hall effect (AHE) and the magnetization data in our SRO thin films. Figure <ref>(a) shows the field-dependent Hall resistivity ρ_xy at different temperatures ranging from 2 to 180 K, where weak-field hysteresis loops in the ρ_xy-μ_0H curves with a small coercive field of less than 0.1 T were observed below T_c, as expected. The magnitude of the converted Hall conductivity |σ_xy| at zero field is plotted in Fig. <ref>(b) as a function of the corresponding conductivity σ_xx in logarithmic scales for SRO thin films with different thicknesses t ranging from 3.9 to 37.1 nm. Remarkably, |σ_xy| appears to approach a constant and t-independent value of about 2.0 × 10^4 Ω^-1m^-1 at low temperatures, which falls in the same order as the intrinsic anomalous Hall conductivity due to the Berry curvatures of the bulk band, i.e., e^2/hc_o ≈ 5.0 × 10^4 Ω^-1m^-1 (c_o being the orthorhombic lattice constant of about 7.81 Å), shown as the red dashed line in Fig. <ref>(b). We note that |σ_xy| shows no significant changes with σ_xx down to T = 1.4 K, which suggests a negligible contribution to the AHE from the extrinsic skew scattering effect, for which a linear relation |σ_xy| ∝ σ_xx would be expected instead <cit.>. On the other hand, rigorous magnetization measurements were performed on a thicker SRO film with t ≈ 37.1 nm using a SQUID magnetometer. After subtracting the diamagnetic background at 200 K, the resulting magnetization M' - H curves at different temperatures are shown in Fig. <ref>(c), where, for μ_0H ≥ 2 T, the diamagnetic response appears to increase as the temperature drops. As shown in Fig. <ref>(d), the averaged slope of dM/dH in the field regime from μ_0H = 2 T to 7 T is negative with increasing magnitude as the temperature decreases to 2 K, which is in stark contrast to the nearly T-independent slope from the control measurements on a bare STO substrate (square symbols in Fig. <ref>(d)). The observed intrinsic |σ_xy| ∼ e^2/hc_o <cit.> and the enhanced diamagnetic response <cit.> at low temperatures strongly support the presence of Weyl nodes near the Fermi surface and thus the Weyl metal phase in SRO. We also remark that the zero-field Hall signals at low temperatures in SRO are dominated by the intrinsic AHE, which is important for the subsequent discussion of the observed nonlinear Hall signals in SRO. § RESULTS As illustrated in the right panel of Fig. <ref>(a), the second harmonic longitudinal (R_L^2ω) and transverse (R_T^2ω) resistance were measured with a bias current of 0.7 mA at a frequency of about 18.4 Hz. The resulting complex second harmonic signal can be expressed as R̃_L(T)^2ω = R_L(T)^2ωX + i R_L(T)^2ωY, which is probed by a lock-in amplifier. The upper and lower panels of Fig.
<ref>(a) show the field-dependent R_L^2ωY and R_T^2ωY, respectively, for the α = 90° Hall bar device at different Ts ranging from 1.4 K to 10 K. For clarity, the R_L^2ωY-μ_0H and R_T^2ωY-μ_0H curves at different Ts were systematically shifted upward by multiples of 100 μΩ and 50 μΩ, respectively. For T ≥ 10 K, both R_L^2ωY and R_T^2ωY show no hysteresis loops in the weak-field regime, which is in stark contrast to the sizable ρ_xy-μ_0H loops shown in Fig. <ref>(a) at similar temperatures. Below 6 K, a sizable hysteresis loop starts to appear in R_T^2ωY, as shown in the lower panel of Fig. <ref>(a), but R_L^2ωY remains nearly field-independent without showing a hysteresis loop. The definition of Δ R_T^2ωY is illustrated in the lower panel of Fig. <ref>(a), and it corresponds to the change of the R_T^2ωY signal at zero magnetic field when reversing the magnetization of the SRO thin film. For the α = 90° Hall bar device with bias current I along [11̄0]_o, Δ R_T^2ωY gradually increases in magnitude as T drops, giving Δ R_T^2ωY ≈ 44 μΩ at T = 1.4 K. Remarkably, for the α = 180° Hall bar device with a bias current I along [001]_o, as demonstrated in Fig. <ref>(b), the hysteresis loops appear in the longitudinal channel of R_L^2ωY at low temperatures instead, giving a value of Δ R_L^2ωY ≈ 100 μΩ at T = 1.4 K, and no hysteresis loops were observed in the transverse channel (R_T^2ωY). Figure <ref>(c) summarizes the results from Hall bars with 9 different α values in the sunbeam device shown in Fig. <ref>(a) (see Supplementary Note 2 for detailed descriptions of the measurement geometry and polarity). The upper panel of Fig. <ref>(c) shows the first harmonic signals (Δ R_L^ωX) and second harmonic signals (Δ R_L^2ωY) in the longitudinal channel as a function of α at different Ts. Δ R_L^2ωY exhibits a maximum value of about 100 μΩ at α = 0° and 180°, and it gradually decreases in magnitude to zero as α approaches 90°. In contrast, the first harmonic signals of Δ R_L^ωX are nearly zero for all α and T values, as expected. On the other hand, the lower panel of Fig. <ref>(c) puts together the α-dependent first harmonic signals (Δ R_T^ωX) and second harmonic signals (Δ R_T^2ωY) in the transverse channel at different Ts. Unlike the longitudinal channel, the Δ R_T^2ωY data show relatively good agreement with the sinα dependence (dashed red line in the lower panel of Fig. <ref>(c)), giving a value of Δ R_T^2ωY ≈ 44 μΩ at α = 90° and vanishing values for α = 0° and 180°. Such a unique sinα dependence of Δ R_T^2ωY is drastically distinct from the nearly α-independent first harmonic signals of Δ R_T^ωX. As a consistency check, current-dependent measurements of R_L^2ωY for α = 180° and R_T^2ωY for α = 90° at T = 2 K were carried out and are shown in the upper and lower panels, respectively, of Fig. <ref>(a) with different bias currents ranging from 0.3 to 0.9 mA, where the curves were systematically shifted upward for clarity. For α = 180°, Δ R_L^2ωY progressively increases from 35 to 84 μΩ as the bias current I increases from 0.3 to 0.9 mA. The detailed I dependence of the second harmonic signals (Δ R_L(T)^2ωX + i Δ R_L(T)^2ωY) is shown in the upper panel of Fig. <ref>(b), where only the Δ R_L^2ωY data show nearly I-linear behavior, and all other second harmonic signals are vanishingly small.
On the contrary, for α = 90°, Δ R_T^2ωY increases from about 10 to 30 μΩ as I increases from 0.3 to 0.9 mA, and the corresponding I-dependent signals are shown in the lower panel of Fig. <ref>(b). The nearly I-linear dependence of Δ R_T^2ωY for α = 90° appears only in the transverse channel but not in the longitudinal channel of Δ R_L^2ωY, confirming the presence of a nonlinear Hall effect in SRO thin films. The magnitudes of both Δ R_L^2ωY for α = 180° and Δ R_T^2ωY for α = 90° grow rapidly as T drops below 10 K, as shown in the upper and lower panels, respectively, of Fig. <ref>(c), which is dramatically different from the minor drop in ρ_L(T) and the nearly constant σ_xy≡ρ_T/(ρ_L^2+ρ_T^2) with decreasing T, as shown in Fig. <ref>(c) and Fig. <ref>(b), respectively. We also note that the extracted Δ R_T^2ω and Δ R_L^2ω do not vary significantly with the bias current frequency (see Supplementary Note 3), and they derive from the difference in the second harmonic signals between opposite magnetization directions in SRO at zero external magnetic field, as illustrated in Fig. <ref>(a) and (b). Therefore, extrinsic contact effects and also possible magnetic-field-related effects for the NRTE and nonlinear Hall effects can be excluded <cit.>. § DISCUSSIONS For SRO thin films, the onset of ferromagnetism for T ≤ 150 K with magnetization along [110]_o can, in principle, break the mirror planes with normal vectors perpendicular to the magnetization direction, and a similar mirror symmetry breaking by magnetism has been reported before <cit.>. We also conducted rotational anisotropy second harmonic generation measurements, which can be sensitive to the magnetic order parameter in perovskite transition metal oxides <cit.>. Figure <ref>(a) shows the temperature dependence of the scattering-plane-angle-averaged SHG intensity from an SRO/STO film with t ≈ 35 nm, which exhibits an intensity upturn below 150 K. Although we did not resolve whether the magnetic order induced SHG susceptibility is directly proportional to the magnetization or to its square (as would be the case for magnetostriction), the critical temperature is consistent with that reported for bulk single crystals. We also noted a progressive increase in the SHG intensity as the temperature decreases further, indicating an increased contribution from surface states with inversion symmetry breaking. However, we cannot completely exclude possible bulk inversion symmetry breaking in the SRO/STO system at low temperatures due to possible lattice strain gradient <cit.> and non-collinear magnetic configuration effects <cit.> (see also Supplementary Note 4), which requires further investigations with advanced characterization tools at low temperatures. The growing surface-state contribution at low temperatures is in accord with the dramatic changes of the magnetotransport behavior below 10 K, as demonstrated in Fig. <ref>. As T decreases from 10 K to 1.4 K, the weak-field MR shows a crossover from a negative MR to a positive MR, as shown in Fig. <ref>(a), and the Hall resistivity (Fig. <ref>(a)) also shows a nonlinear field dependence below 10 K, indicating multiple-channel conduction at lower temperatures. On the other hand, pronounced quantum oscillations with a frequency of about 28 T were observed for all α values in our sunbeam device, as shown in Fig. <ref>(b) for α = 90°, and the corresponding Fast Fourier transform (FFT) spectra for different Ts are shown in Fig. <ref>(c).
We note that the 28 T quantum oscillations in SRO thin films were recently reported to behave as a 2D-like Fermi pocket with signatures consistent with the Weyl-orbit quantum oscillation effect due to bulk tunneling between the top and bottom Fermi-arc surface states <cit.>. The open black squares and open red circles in Fig. <ref>(d) plot the rapid increase of the FFT amplitude of the quantum oscillations below 10 K for α = 180^o and 90^o, respectively, which turns out to show a strong correlation with the rapid increases of Δ R_L^2ω (solid black squares) and Δ R_T^2ω (solid red circles). This is in stark contrast to the minor decrease of the resistivity (ρ_L) from about 13.1 to 10.3 μΩcm as T goes from 10 to 2 K. Therefore, the rapid increases of the second harmonic signals Δ R_T^2ω and Δ R_L^2ω below 10 K (Fig. <ref>(c)) are unlikely to scale with the bulk Drude electron lifetime. Instead, they signify a crossover to surface-dominant charge transport with inversion symmetry breaking below 10 K. In a magnetic system with broken time-reversal symmetry, both intrinsic and extrinsic AHE can contribute to the measured Hall signals <cit.>, and nonlinear Hall signals at the second harmonic generally require additional inversion symmetry breaking <cit.>. As demonstrated in Fig. <ref>(b), the low-temperature AHE in SRO is dominated by the contribution from the intrinsic AHE due to Weyl nodes near the Fermi surface <cit.>, where σ_xy is nearly a constant of about e^2/hc_o down to about 1.4 K, and thus the extrinsic skew scattering effect <cit.> should not play a significant role in our observed nonlinear Hall signals. On the other hand, the distinct sinα dependence of Δ R_T^2ω does not seem to be compatible with the intrinsic mechanism due to the electron-lifetime-independent Berry curvature effect <cit.>, where the intrinsic AHE at zero field (Δ R_T^ω) is nearly α independent, as shown in the lower panel of Fig. <ref>(c). Therefore, the observed nonlinear Hall signal Δ R_T^2ωY more likely derives from the BCD <cit.> due to surface states with inversion symmetry breaking. From rigorously calculated band dispersions along k_// and k_z (see Supplementary Note 5), we found that most of the Weyl nodes appear to tilt along k_// and thus along [11̄0]_o. Taking the Weyl node W_||^1 with |ε-ε_F| = 18.36 meV as an example, the band dispersions along k_// and k_z are plotted in the left and right panels, respectively, of Fig. <ref>(e). The Weyl node shows a large tilting along k_//, but the band dispersion along k_z is nearly symmetric with respect to the Weyl node. A nonzero total BCD D⃗ arising from surface-projected Weyl nodes along [11̄0]_o is thus expected, as also supported by the α-dependent Δ R_T^2ω. The BCD contribution to the second harmonic current density can be derived as j_a^2ω = χ_abc E_bE_c, with χ_abc ≡ -ε_adc e^3τ/2ħ^2(1+iωτ) D_bd. The BCD can be expressed as D_bd ≡ ∫d^3k/(2π)^3 f_0 ∂Ω_d/∂ k_b, where f_0 and Ω are the equilibrium Fermi-Dirac distribution and the Berry curvature, respectively, and it can be nonzero for systems with tilted Weyl nodes and inversion asymmetry <cit.>. Therefore, with a bias current along the b axis, the resulting nonlinear Hall current is simply j_a^2ω = χ_abb E_b^2 with χ_abb = e^3τ/2ħ^2(1+iωτ)D_bc, and thus j_a^2ω is a direct measure of the Berry curvature gradient along the bias current direction. 
In our sunbeam device, with bias current directions α ranging from 0^o to 180^o, the largest nonlinear Hall signal was observed at α = 90^o, indicating the presence of an effective BCD D⃗ along [11̄0]_o. In order to compare the magnitude of our observed nonlinear Hall effect with other systems, we adopted the 3D formula with the resistivity anisotropy effect shown in Fig. <ref>(b). The α-dependent Δ R_T^2ω can be deduced to give Δ R_T^2ω = χ_abbρ_aρ_b^2/Wt^2 Isinα, where ρ_b (ρ_a) is the resistivity along [11̄0]_o ([001]_o), and W is the width of the Hall bar device (W = 150 μm) (see Supplementary Note 6). The sinα and I-linear dependences of Δ R_T^2ω are well confirmed by the experiments shown in the lower panel of Fig. <ref>(c) and in Fig. <ref>(b), respectively. Using a Drude electron lifetime of about τ_d ∼ 1.9 × 10^-13 s, the magnitude of the effective 3D BCD can be roughly estimated to be about |D⃗| ≈ 55, which falls within the same order of magnitude as several other reported 3D Weyl systems with large BCD <cit.>. On the other hand, the observation of a large NRTE Δ R_L^2ω in the longitudinal channel is intriguing, and its amplitude also grows with decreasing T below 10 K, suggesting an intimate relation with the appearance of the nonlinear Hall signal Δ R_T^2ω. However, as demonstrated in Fig. <ref>(c), the α dependence reveals a clear orthogonality between Δ R_L^2ω and Δ R_T^2ω. We thus propose a real-space scenario, as illustrated in Fig. <ref>(b), where a D⃗ along [11̄0]_o is accompanied by 1D chiral edge modes along the orthogonal direction [001]_o (orange line). Figure <ref>(c) illustrates a minimal Weyl model with one pair of Weyl nodes with chiral charges of +1 and -1. For the yellow-shaded slice between the Weyl-node pair of opposite chiral charges, the integration of the total Berry flux across each 2D slice gives a Chern number of 1, accompanied by a unique 1D chiral edge mode at the boundary of the system, as shown in the upper panel of Fig. <ref>(c) <cit.>. On the other hand, for the green-shaded slice with the Weyl-node pair on the same side, the total Chern number is zero, without the presence of chiral edge modes. The Fermi-arc surface states are thus the zero-energy chiral edge modes, connecting the non-overlapping Weyl-node pair in the surface Brillouin zone. By searching for Weyl nodes within an energy window of |ε-ε_F| ≤ 20 meV in the calculated SRO band structure, a number of Weyl nodes can be identified and projected onto the (110)_o plane, as demonstrated in Fig. <ref>(d). Sphere, square, and triangle symbols correspond to Weyl nodes from three different band pairs. The red and blue colors represent the corresponding chiral charges of +1 and -1, respectively. We note that the yellow-shaded region in Fig. <ref>(d) highlights the non-zero total Chern number and thus supports the presence of 1D chiral edge modes along k_z. When the magnetization in SRO is flipped, the signs of the chiral charges also reverse due to the swapping of spin subbands, and both the direction of the BCD D⃗ and that of the 1D chiral edge modes reverse accordingly. Such 1D chiral edge modes are equivalent to the 1D chiral edge modes in a magnetic TI in the quantum anomalous Hall phase <cit.>, where a large NRTE in the longitudinal channel was recently reported, arising from the asymmetric scattering between the 1D chiral edge modes and other surface states <cit.>. 
For the Weyl metal SRO, in principle, a similar NRTE in the longitudinal channel for bias current along [001]_o (Δ R_L^2ω for α = 0^o and 180^o) can thus appear due to the asymmetric scattering between the 1D chiral edge modes and the Fermi-arc surface states. This may also explain the vanishing of Δ R_L^2ω for α = 90^o and thus the intriguing orthogonal relation between Δ R_L^2ω and Δ R_T^2ω shown in Fig. <ref>(c). We note that our observed Δ R_T^2ω due to an effective BCD of surface states may be related to a recently proposed theory <cit.> that a hot line with divergent Berry curvature, separating the Fermi-arc surface states and the 3D bulk states, may lead to a large nonlinear Hall response. However, the issues regarding the contribution of Fermi-arc surface states to the NRTE and nonlinear Hall effect call for more theoretical and experimental efforts. § CONCLUSIONS In summary, large nonlinear and nonreciprocal charge transport effects along the longitudinal (Δ R_L^2ω) and transverse (Δ R_T^2ω) channels were discovered below 10 K in a sunbeam device fabricated from an untwinned SRO thin film grown on a miscut STO (001) substrate. Below 10 K, the crossover of the weak-field MR behavior and the rapid rise of the 2D-like quantum oscillation amplitude not only support surface-dominant charge transport but also agree well with the observed T-dependent Δ R_L(T)^2ω. The detailed bias-current-direction dependence reveals an intriguing orthogonality between the observed Δ R_L^2ω and Δ R_T^2ω: for bias current along [11̄0]_o (α = 90^o), Δ R_T^2ω is at its maximum while Δ R_L^2ω is vanishingly small. Considering the dominant roles of the intrinsic AHE and surface charge transport at low temperatures in thin films of the SRO/STO system, a scenario of an effective BCD D⃗ from surface states along [11̄0]_o accompanied by 1D chiral edge modes along [001]_o was proposed to give a qualitative explanation for the observed α-dependent Δ R_L^2ω and Δ R_T^2ω, which is supported by the calculated band dispersion with tilted Weyl nodes. Our findings demonstrate the feasibility of using nonlinear and nonreciprocal charge transport effects as a probe for intriguing topology-related electronic properties in a topological system, such as the BCD from the nonlinear Hall effect and 1D chiral edge modes from the NRTE. On the other hand, our observation of the nonlinear Hall effect in SRO/STO also highlights the intriguing possibility of investigating surface-dominant charge transport behavior in topological thin-film systems. § METHODS The sunbeam device was patterned on a SRO/STO thin film with SRO layer thickness t ≈ 13.7 nm, using standard photolithography followed by argon ion milling. It comprises 16 Hall bars with α ranging from 0^o to 360^o, and the angle difference between adjacent Hall bars is 22.5^o. One of the Hall bars was carefully aligned along the SRO orthorhombic [001]_o direction, which was defined as α = 0^o. Each Hall bar has exactly the same geometry, with a width of 150 μm and a length of 290 μm between longitudinal voltage leads. The Au (35 nm)/Ti (10 nm) electrodes were deposited and fabricated via a subsequent photolithography step. The magnetization measurements on SRO/STO thin films were carried out using a SQUID-MPMS system from Quantum Design. The longitudinal (transverse) Δ R_L(T)^ω and Δ R_L(T)^2ω signals were measured simultaneously by a lock-in amplifier at the first and second harmonic references, respectively. 
Rotational anisotropy (RA) SHG measurements were performed using a high-speed rotating scattering plane method described elsewhere <cit.>. The light source was a Ti:sapphire laser with a central wavelength of 800 nm. The incident beam was focused onto the sample surface at oblique incidence (θ = 10^o) with a spot size of ∼ 30 μm. Electronic structure calculations of SrRuO_3 were performed using the projector augmented wave method <cit.> as implemented in the Vienna ab-initio Simulation Package <cit.> within the generalized gradient approximation scheme <cit.>. An 18 × 18 × 14 Γ-centered k-point mesh was used in the computations with a cutoff energy of 500 eV. The convergence criterion for the electronic density was defined as 10^-6 eV. Spin-orbit coupling effects were included in the self-consistent calculations along with ferromagnetic spin polarization in the (110) direction. The effect of electronic correlations in the Ru d states (4d^4 for Ru^4+) was taken into account by using the rotationally invariant GGA+U scheme <cit.> with U = 3.0 eV and J = 0.6 eV. We used Ru d orbitals and O p orbitals to construct the Wannier functions <cit.> with the VASP2WANNIER90 <cit.> interface. We used WannierTools <cit.> to search for the Weyl points and to identify the chirality of each Weyl point. § DATA AVAILABILITY All the supporting data are included in the main text and supplementary information. The raw data and other related data for this paper can be requested from W.L.L. § CODE AVAILABILITY The input files for DFT using VASP, Wannier tight binding, and WannierTools are available upon reasonable request. § ACKNOWLEDGEMENTS This work was supported by the National Science and Technology Council of Taiwan (NSTC Grant No. 108-2628-M-001-007-MY3 and 111-2112-M-001-056-MY3) and the joint project of Academia Sinica and National Taiwan University (Grant No. AS-NTU-110-10). § COMPETING INTERESTS The authors declare no competing financial or non-financial interests. § AUTHOR CONTRIBUTIONS U.K., E.C.H.L., C.T.C., IC.C., and W.L.L. carried out the low-temperature magneto-transport measurements and data analyses. U.K. and A.K.S. grew the epitaxial SRO films. A.K.S., S.Y., C.Y.L., and C.H.H. performed the X-ray measurements at NSRRC in Taiwan. P.V.S.R., G.Y.G., and W.C.L. performed SRO band calculations. Y.J.H., X.W.L., and D.H. performed the SHG measurements and analysis. W.L.L. designed the experiment and wrote the manuscript. § ADDITIONAL INFORMATION Supplementary Information accompanies the paper on the XXXX website (https://XXXXX). Konig2007 authorKönig, M. et al. titleQuantum spin Hall insulator state in HgTe quantum wells. journalScience volume318, pages766–770 (year2007). Du2015 authorDu, L., authorKnez, I., authorSullivan, G. & authorDu, R.-R. titleRobust helical edge transport in gated InAs/GaSb bilayers. journalPhys. Rev. Lett. volume114, pages096802 (year2015). Fei2017 authorFei, Z. et al. titleEdge conduction in monolayer WTe_2. journalNat. Physics volume13, pages677–682 (year2017). Tang2017 authorTang, S. et al. titleQuantum spin Hall state in monolayer 1T'-WTe_2. journalNat. Physics volume13, pages683–687 (year2017). Hsieh2008 authorHsieh, D. et al. titleA topological Dirac insulator in a quantum spin Hall phase. journalNat. volume452, pages970–974 (year2008). Alpi2010 authorAlpichshev, Z. et al. titleSTM imaging of electronic waves on the surface of Bi_2Te_3: Topologically protected surface states and hexagonal warping effects. journalPhys. Rev. Lett. 
volume104, pages016401 (year2010). Hasan2010 authorHasan, M. Z. & authorKane, C. L. titleColloquium: Topological insulators. journalRev. Mod. Phys. volume82, pages3045–3067 (year2010). Chang2013 authorChang, C.-Z. et al. titleExperimental observation of the quantum anomalous Hall effect in a magnetic topological insulator. journalScience volume340, pages167–170 (year2013). Kou2014 authorKou, X. et al. titleScale-invariant quantum anomalous Hall effect in magnetic topological insulators beyond the two-dimensional limit. journalPhys. Rev. Lett. volume113, pages137201 (year2014). Checkelsky2014 authorCheckelsky, J. G. et al. titleTrajectory of the anomalous Hall effect towards the quantized state in a ferromagnetic topological insulator. journalNat. Physics volume10, pages731–736 (year2014). Haldane1988 authorHaldane, F. D. M. titleModel for a quantum Hall effect without Landau levels: Condensed-matter realization of the "parity anomaly". journalPhys. Rev. Lett. volume61, pages2015–2018 (year1988). Wan2011 authorWan, X., authorTurner, A. M., authorVishwanath, A. & authorSavrasov, S. Y. titleTopological semimetal and Fermi-arc surface states in the electronic structure of pyrochlore iridates. journalPhys. Rev. B volume83, pages205101 (year2011). Wang2012 authorWang, Z. et al. titleDirac semimetal and topological phase transitions in A_3Bi (A = Na, K, Rb). journalPhys. Rev. B volume85, pages195320 (year2012). Liang2015 authorLiang, T. et al. titleUltrahigh mobility and giant magnetoresistance in the Dirac semimetal Cd_3As_2. journalNat. Mater. volume14, pages280–284 (year2015). Huang2015 authorHuang, X. et al. titleObservation of the chiral-anomaly-induced negative magnetoresistance in 3D Weyl semimetal TaAs. journalPhys. Rev. X volume5, pages031023 (year2015). Xiong2015 authorXiong, J. et al. titleEvidence for the chiral anomaly in the Dirac semimetal Na_3Bi. journalScience volume350, pages413–416 (year2015). Armitage2018 authorArmitage, N. P., authorMele, E. J. & authorVishwanath, A. titleWeyl and Dirac semimetals in three-dimensional solids. journalRev. Mod. Phys. volume90, pages015001 (year2018). Potter2014 authorPotter, A. C., authorKimchi, I. & authorVishwanath, A. titleQuantum oscillations from surface Fermi arcs in Weyl and Dirac semimetals. journalNat. Commun. volume5, pages5161 (year2014). Waw2021 authorWawrzik, D., authorYou, J.-S., authorFacio, J. I., authorvan den Brink, J. & authorSodemann, I. titleInfinite Berry curvature of Weyl Fermi arcs. journalPhys. Rev. Lett. volume127, pages056601 (year2021). Gao2014 authorGao, Y., authorYang, S. A. & authorNiu, Q. titleField induced positional shift of Bloch electrons and its dynamical implications. journalPhys. Rev. Lett. volume112, pages166601 (year2014). Sodemann2015 authorSodemann, I. & authorFu, L. titleQuantum nonlinear Hall effect induced by Berry curvature dipole in time-reversal invariant materials. journalPhys. Rev. Lett. volume115, pages216806 (year2015). Ma2019 authorMa, Q. et al. titleObservation of the nonlinear Hall effect under time-reversal-symmetric conditions. journalNat. volume565, pages337–342 (year2019). Yasuda2020 authorYasuda, K. et al. titleLarge non-reciprocal charge transport mediated by quantum anomalous Hall edge states. journalNat. Nanotechnology volume15, pages831–835 (year2020). Koster2012 authorKoster, G. et al. titleStructure, physical properties, and applications of SrRuO_3 thin films. journalRev. Mod. Phys. volume84, pages253–298 (year2012). Kar2021 authorKar, U. et al. 
titleHigh-sensitivity of initial SrO growth on the residual resistivity in epitaxial thin films of SrRuO_3 on SrTiO_3 (001). journalSci. Rep. volume11, pages16070 (year2021). Fang2003 authorFang, Z. et al. titleThe anomalous Hall effect and magnetic monopoles in momentum space volume302, pages92–95 (year2003). Chen2013 authorChen, Y., authorBergman, D. L. & authorBurkov, A. A. titleWeyl fermions and the anomalous Hall effect in metallic ferromagnets. journalPhys. Rev. B volume88, pages125110 (year2013). Itoh2016 authorItoh, S. et al. titleWeyl fermions and spin dynamics of metallic ferromagnet SrRuO_3. journalNat. Commun. volume7, pages11788 (year2016). Jenni2019 authorJenni, K. et al. titleInterplay of electronic and spin degrees in ferromagnetic SrRuO_3: Anomalous softening of the magnon gap and stiffness. journalPhys. Rev. Lett. volume123, pages017202 (year2019). Nair2018 authorNair, H. P. et al. titleSynthesis science of SrRuO_3 and CaRuO_3 epitaxial films with high residual resistivity ratios. journalAPL Mater. volume6, pages046101 (year2018). Taki2020 authorTakiguchi, K. et al. titleQuantum transport evidence of Weyl fermions in an epitaxial ferromagnetic oxide. journalNat. Commun. volume11, pages4969 (year2020). Cap2002 authorCapogna, L. et al. titleSensitivity to disorder of the metallic state in the ruthenates. journalPhys. Rev. Lett. volume88, pages076602 (year2002). Nand2014 authorNandkishore, R., authorHuse, D. A. & authorSondhi, S. L. titleRare region effects dominate weakly disordered three-dimensional Dirac points. journalPhys. Rev. B volume89, pages245110 (year2014). Kaneta2022 authorKaneta-Takada, S. et al. titleHigh-mobility two-dimensional carriers from surface Fermi arcs in magnetic Weyl semimetal films. journalnpj Quantum Mater. volume7, pages102 (year2022). kar2022 authorKar, U. et al. titleThe thickness dependence of quantum oscillations in ferromagnetic Weyl metal SrRuO_3. journalnpj Quantum Mater. volume8, pages8 (year2023). Nagaosa2010 authorNagaosa, N., authorSinova, J., authorOnoda, S., authorMacDonald, A. H. & authorOng, N. P. titleAnomalous Hall effect. journalRev. Mod. Phys. volume82, pages1539–1592 (year2010). Rao2014 authorRaoux, A., authorMorigi, M., authorFuchs, J.-N., authorPiéchon, F. & authorMontambaux, G. titleFrom dia- to paramagnetic orbital susceptibility of massless fermions. journalPhys. Rev. Lett. volume112, pages026402 (year2014). Sue2021 authorSuetsugu, S. et al. titleGiant orbital diamagnetism of three-dimensional Dirac electrons in Sr_3PbO antiperovskite. journalPhys. Rev. B volume103, pages115117 (year2021). Morimoto2016 authorMorimoto, T. & authorNagaosa, N. titleChiral anomaly and giant magnetochiral anisotropy in noncentrosymmetric Weyl semimetals. journalPhys. Rev. Lett. volume117, pages146603 (year2016). LiRH2021 authorLi, R.-H., authorHeinonen, O. G., authorBurkov, A. A. & authorZhang, S. S.-L. titleNonlinear Hall effect in Weyl semimetals induced by chiral anomaly. journalPhys. Rev. B volume103, pages045105 (year2021). Nandy2021 authorNandy, S., authorZeng, C. & authorTewari, S. titleChiral anomaly induced nonlinear Hall effect in semimetals with multiple Weyl points. journalPhys. Rev. B volume104, pages205124 (year2021). Torre2021 authorTorre, A. d. l. et al. titleMirror symmetry breaking in a model insulating cuprate. journalNat. Phys. volume17, pages777–781 (year2021). Seyler2020 authorSeyler, K. L. et al. titleSpin-orbit-enhanced magnetic surface second-harmonic generation in Sr_2IrO_4. journalPhys. Rev. 
B volume102, pages201113 (year2020). Hwang2012 authorHwang, H. Y. et al. titleEmergent phenomena at oxide interfaces. journalNat. Mater. volume11, pages103–113 (year2012). Pesq2012 authorPesquera, D. et al. titleSurface symmetry-breaking and strain effects on orbital occupancy in transition metal perovskite epitaxial films. journalNat. Commun. volume3, pages1189 (year2012). Sohn2021 authorSohn, B. et al. titleSign-tunable anomalous Hall effect induced by two-dimensional symmetry-protected nodal structures in ferromagnetic perovskite thin films. journalNat. Mater. volume20, pages1643–1649 (year2021). mSHG1 authorTrain, C., authorNuida, T., authorGheorghe, R., authorGruselle, M. & authorOhkoshi, S.-i. titleLarge magnetization-induced second harmonic generation in an enantiopure chiral magnet. journalJ. Am. Chem. Soc. volume131, pages16838–16843 (year2009). mSHG2 authorSun, Z. et al. titleGiant nonreciprocal second-harmonic generation from antiferromagnetic bilayer CrI_3. journalNature volume572, pages497–501 (year2019). Du2019 authorDu, Z. Z., authorWang, C. M., authorLi, S., authorLu, H.-Z. & authorXie, X. C. titleDisorder-induced nonlinear Hall effect with time-reversal symmetry. journalNat. Commun. volume10, pages3047 (year2019). Iso2020 authorIsobe, H., authorXu, S.-Y. & authorFu, L. titleHigh-frequency rectification via chiral Bloch electrons. journalSci. Adv. volume6, pageseaay2497 (year2020). He2021 authorHe, P. et al. titleQuantum frequency doubling in the topological insulator Bi_2Se_3. journalNat. Commun. volume12, pages698 (year2021). Wang2021 authorWang, C., authorGao, Y. & authorXiao, D. titleIntrinsic nonlinear Hall effect in antiferromagnetic tetragonal CuMnAs. journalPhys. Rev. Lett. volume127, pages277201 (year2021). Liu2021 authorLiu, H. et al. titleIntrinsic second-order anomalous Hall effect and its application in compensated antiferromagnets. journalPhys. Rev. Lett. volume127, pages277202 (year2021). Gao2023 authorGao, A. et al. titleQuantum metric nonlinear Hall effect in a topological antiferromagnetic heterostructure. journalScience pages10.1126/science.eadf1506 (year2023). Zhang2018 authorZhang, Y., authorSun, Y. & authorYan, B. titleBerry curvature dipole in Weyl semimetal materials: An ab initio study. journalPhys. Rev. B volume97, pages041101 (year2018). Du2018 authorDu, Z. Z., authorWang, C. M., authorLu, H.-Z. & authorXie, X. C. titleBand signatures for strong nonlinear Hall effect in bilayer WTe_2. journalPhys. Rev. Lett. volume121, pages266601 (year2018). Zhang2022 authorZhang, C.-L., authorLiang, T., authorKaneko, Y., authorNagaosa, N. & authorTokura, Y. titleGiant Berry curvature dipole density in a ferroelectric Weyl semimetal. journalnpj Quantum Mater. volume7, pages103 (year2022). Harter2015 authorHarter, J. W., authorNiu, L., authorWoss, A. J. & authorHsieh, D. titleHigh-speed measurement of rotational anisotropy nonlinear optical harmonic generation using position-sensitive detection. journalOpt. Lett. volume40, pages4671–4674 (year2015). Kresse authorKresse, G. & authorJoubert, D. titleFrom ultrasoft pseudopotentials to the projector augmented-wave method. journalPhys. Rev. B volume59, pages1758–1775 (year1999). vasp authorKresse, G. & authorFurthmüller, J. titleEfficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set. journalPhys. Rev. B volume54, pages11169–11186 (year1996). PBE authorPerdew, J. P., authorBurke, K. & authorErnzerhof, M. titleGeneralized gradient approximation made simple. journalPhys. Rev. 
Lett. volume77, pages3865–3868 (year1996). Liechtenstein authorLiechtenstein, A. I., authorAnisimov, V. I. & authorZaanen, J. titleDensity-functional theory and strong interactions: Orbital ordering in mott-hubbard insulators. journalPhys. Rev. B volume52, pagesR5467–R5470 (year1995). Marzari authorMarzari, N. & authorVanderbilt, D. titleMaximally localized generalized wannier functions for composite energy bands. journalPhys. Rev. B volume56, pages12847–12865 (year1997). Mostofi authorMostofi, A. A. et al. titleAn updated version of wannier90: A tool for obtaining maximally-localised wannier functions. journalComput. Phys. Commun. volume185, pages2309–2310 (year2014). Franchini authorFranchini, C. et al. titleMaximally localized Wannier functions in LaMnO_3 within PBE + U, hybrid functionals and partially self-consistent GW: an efficient route to construct ab initio tight-binding parameters for e_ g perovskites. journalJournal of Physics: Condensed Matter volume24, pages235602 (year2012). QuanSheng authorWu, Q., authorZhang, S., authorSong, H.-F., authorTroyer, M. & authorSoluyanov, A. A. titleWanniertools: An open-source software package for novel topological materials. journalComput. Phys. Commun. volume224, pages405–416 (year2018). Roh2021 authorRoh, C. J. et al. titleStructural symmetry evolution in surface and interface of SrRuO_3 thin films. journalAppl. Surf. Sci. volume553, pages149574 (year2021).
http://arxiv.org/abs/2307.05470v1
20230708213703
A Robust and Efficient Optimization Model for Electric Vehicle Charging Stations in Developing Countries under Electricity Uncertainty
[ "Mansur Arief", "Yan Akhra", "Iwan Vanany" ]
math.OC
[ "math.OC", "econ.GN", "q-fin.EC", "stat.AP" ]
1]Mansur M. Arief cor1 [email protected] 2]Yan Akhra 2]Iwan Vanany [cor1]Corresponding Author [1]organization=Department of Aeronautics and Astronautics Engineering, Stanford University, addressline=450 Serra Mall, city=Stanford, postcode=94305, state=CA, country=USA [2]organization=Department of Industrial and Systems Engineering, Institut Teknologi Sepuluh Nopember, addressline=Sukolilo, city=Surabaya, postcode=60111, state=East Java, country=Indonesia The rising demand for electric vehicles (EVs) worldwide necessitates the development of robust and accessible charging infrastructure, particularly in developing countries where electricity disruptions pose a significant challenge. Earlier charging infrastructure optimization studies do not rigorously address such service disruption characteristics, resulting in suboptimal infrastructure designs. To address this issue, we propose an efficient simulation-based optimization model that estimates candidate stations' service reliability and incorporates it into the objective function and constraints. We employ the control variates (CV) variance reduction technique to enhance simulation efficiency. Our model provides a highly robust solution that buffers against uncertain electricity disruptions, even when candidate station service reliability is subject to underestimation or overestimation. Using a dataset from Surabaya, Indonesia, our numerical experiment demonstrates that the proposed model achieves a 13% higher average objective value compared to the non-robust solution. Furthermore, the CV technique successfully reduces the simulation sample size up to 10 times compared to Monte Carlo, allowing the model to solve efficiently using a standard MIP solver. Our study provides a robust and efficient solution for designing EV charging infrastructure that can thrive even in developing countries with uncertain electricity disruptions. * Proposed a simulation-based optimization model to design optimal EV charging station infrastructure that can withstand uncertain power supply in developing countries. * Used control variates (CV) variance reduction technique to enhance simulation efficiency and provide a highly robust solution that buffers against uncertain electricity disruptions. * Numerical experiment using data from Surabaya, Indonesia showed the proposed model achieved 13% higher average objective value compared to the non-robust solution. * The enhanced simulation efficiency through CV reduces the required sample size by a factor of 10 compared to Monte Carlo simulations * The proposed model showcases a potential to provide a robust solution to the challenges associated with EV charging infrastructure under random electricity disruptions in developing countries. electric vehicle charging station developing country uncertainty variance reduction § INTRODUCTION The growing global demand for electric vehicles (EVs) has brought to the forefront the need for reliable and easily accessible EV charging infrastructure. According to a report by the International Energy Agency, as numerous governments set ambitious goals for electrifying their transportation systems, the worldwide EV demand has exponentiated in recent years. In 2010, there were only approximately 17,000 EVs on the world’s roads. In 2019, for instance, China led the global EV market, with more than 1 million EVs cars sold that year (more than 50% of global EV demand), followed by the whole of Europe with 561,000 cars sold and the USA with 327,000 cars sold. 
This trend is projected to persist in the upcoming years <cit.>. Developing countries are also striving to promote EV adoption, coupled with greener electricity <cit.> to expedite the achievement of their sustainability goals. For example, Indonesia has set an ambitious target of having 20% of all automobile sales be electric by 2025, with a long-term goal of achieving fully electrified transportation by 2050 <cit.>. However, developing countries like Indonesia face significant infrastructure constraints that must be addressed to achieve these goals. The availability of EV charging infrastructure is a crucial issue that must be addressed to support the widespread adoption of EVs. In Indonesia, there were only 240 public EV charging points across the country as of 2021 <cit.>. However, an estimated 31,000 EV charging stations are required throughout the country to support sustainable electrification of vehicles in the country <cit.>. This lacking infrastructure issue is not unique to Indonesia and is faced by many other developing countries to support the growth of EV adoption. Tackling this challenge by designing a convenient and reliable EV charging network is, however, a very complex task. To ensure a convenient location, it is essential to consider factors such as population density or potential EV demand distribution <cit.>. However, in major cities in developing countries, finding suitable land for charging stations may be challenging due to limited space availability. Furthermore, in developing countries, service uncertainty, including electricity, is one of the most significant issues. Implementing smart charging strategies <cit.> becomes hardly feasible due to electricity supply uncertainty. Outages and other electricity disruptions often occur, posing a significant problem for users who demand reliable service. To address this challenge, our study proposes a robust solution for designing EV charging infrastructure that accounts for the challenge of electricity disruptions in developing countries. We introduce a simulation-based optimization model that estimates the service reliability of candidate charging stations and incorporates this information into the objective function and constraints. This approach offers a versatile solution by utilizing simulation approaches compared to previous works that assume available disruption probability models. Additionally, we employ a variance reduction technique called control variates (CV) to enhance simulation efficiency, reducing the required sample size by up to 10 times compared to naive Monte Carlo (MC) simulations. This results in an efficient mixed-integer programming (MIP) model that solves for optimal solutions that strike the balanced objective between minimizing the total cost of operating and investing in the charging infrastructure and providing high-quality service to the public. Fig. <ref> illustrates the comparison between the traditional modeling approach without variance reduction vs. the proposed framework that utilizes the variance reduction technique to achieve a tighter confidence interval (hence much more precise output) with less computational burden. Our work contributes in three key ways. Firstly, we propose a model that specifically addresses the critical issue of electricity disruption in EV charging station planning, particularly in developing countries. 
Secondly, we integrate the estimation of disruption probabilities into our model, providing a more data-driven approach compared to previous works that assumed available disruption probability models apriori. Finally, our study demonstrates the robustness of the proposed model in solving EV charging infrastructure problems by comparing its performance to a non-robust model, even when disruption probabilities are slightly under or over-estimated. Our numerical experiment, based on an EV dataset from Surabaya, Indonesia, shows that our model achieves a 13% higher average objective value compared to the non-robust solution, highlighting its superior performance to help build sustainable and thriving ecosystems for EVs, both in developed and developing countries in the years to come. The rest of this paper is structured as follows. In Section <ref>, we provide a concise overview of the literature related to the optimization of EV charging infrastructure We then present the proposed model formulations in Section <ref> and approach incorporating the CV technique to estimate the service reliability (i.e. the complement of disruption probability). In Section <ref>, we describe the experiment settings and discuss the main findings in Section <ref>. Finally, we conclude our work in Section <ref>. § LITERATURE REVIEW In this section, we briefly review earlier works directly related to the planning of EV charging infrastructure and relevant case studies that motivate our approach. Examining these earlier works offers insight into the evolution of methodologies, leading to the proposed work, which uniquely introduces a combination of stochastic modeling and variance reduction techniques. The summary is provided in Table <ref>. The planning of EV charging infrastructure can be viewed as a facility location problem, which aims to minimize an objective function subject to constraints related to the desired performance of the network facilities. Early studies, including those by <cit.> and <cit.>, adopted deterministic models focusing on minimizing charging stations and development costs, respectively. <cit.> sought to maximize service demand, whereas <cit.> aimed to minimize infrastructure and access costs. Similar objectives were pursued by <cit.>, <cit.>, and <cit.>, with deterministic models being the common methodology. Several other studies, like those conducted by <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>, continued the trend of deterministic models, exploring various aspects of EV charging station optimization. Other researchers, including <cit.>, <cit.>, <cit.>,<cit.>, <cit.>, <cit.>, and <cit.>, focused on minimizing the number of charging stations or the operating cost, or maximizing the EV flow coverage. Another line of work integrates charging infrastructure into the smart-grid design <cit.> or other renewable energy sources such as solar cells <cit.>. While this approach provides an integrated solution to renewable energy issues and amplifies the positive impact of EVs on the environment, it may not be practical for urban areas in developing countries. A comprehensive review of charging infrastructure designs is presented by <cit.>, emphasizing the need for increasingly detailed modeling that accounts for randomness and variability. However, there is a lack of rigorous real-world case studies that emphasize uncertainty quantification in the modeling framework. Several case studies have been conducted in both developed and developing countries. 
For example, <cit.> studied the problem of slow-charging technology in Lisbon, where vehicles are often parked overnight. In contrast, <cit.> considered both fast- and slow-charging technologies, focusing on robustly covering all demands and avoiding partial fulfillment in the city of Toronto. Another case study was conducted by <cit.> using a GIS-based model in Ankara and adopting a fuzzy approach. A city-scale simulation was developed for Singapore by <cit.>, focusing on the trade-off between cost minimization and customer accessibility maximization. Lastly, <cit.> proposed a set covering model for EV charging stations in Surabaya but ignored electricity disruption and only provided redundant demand coverage to provide a buffer against uncertainty, resulting in an overly simplified model and sub-optimal solutions. In light of these studies, it is clear that the EV facility location problem is a complex and multifaceted issue that requires a tailored approach for different regions and contexts. Developing countries, in particular, may face unique challenges, such as power electricity disruptions, that must be considered in the planning and design of EV facilities. Such disruptions and uncertainty are addressed only in a handful of studies. For instance, <cit.> uses a multi-criteria decision-making approach aiming to strike a balanced solution against flooding disruption that maximizes the charging convenience, minimizes the impact of flood hazards, and minimizes the impact of existing charging stations using TOPSIS. <cit.> integrates the electric bus charging stations with photovoltaic and energy storage systems using a two-stage stochastic programming model, enabling them to incorporate the uncertainty of PV power outputs. <cit.> optimizes the size of the energy storage system considering the annualized cost, penalty cost for buying power during peak hours, and penalty cost for resilience violations. Other works that consider stochastic modeling include <cit.>, which directly use either structure of the stochastic models or simulations to represent elements of uncertainty into their optimization models. The caveat is that the resulting model can be extremely hard to solve, especially when a solution with high confidence is desired. The proposed work extends the use of stochastic modeling and introduces control variates <cit.>, a variance reduction technique that can speed up a simulation-based optimization model, to the field. We propose an approach that addresses the challenges of the need to account for electricity disruptions via simulation and controlling the resulting objective value uncertainties by adjusting the simulation sample size. Simulation modeling enables the modeler to adjust the degree of modeling fidelity, depending on the prior knowledge available, and can be easily verified by estimating the probability of electricity disruptions and comparing it with available historical data. The resulting simulation-based robust model can be accelerated using variance reduction techniques (i.e., control variates), and it offers a more accurate and practical approach for planning and designing EV charging infrastructure that considers uncertainty and disruptions. The integration of stochastic modeling and control variates sets this work apart from previous research, potentially paving the way for more efficient and effective EV charging station location optimization solutions. 
§ MODEL FORMULATION In this section, we describe our modeling components, including the decision variables, objective function, constraint set, model benchmarks (robust and non-robust model), and the CV method we employ to improve simulation efficiency. §.§ Decision Variables We consider a set of demand nodes I and supply nodes J, representing sub-district centers and charging station candidate locations in the region under study. We also consider K vehicle types, representing different vehicle modalities that the residents use for commuting (here, we consider two modalities: electric motorcycles and electric cars). The average time to travel from node i ∈ I to node j ∈ J is denoted by d_ij. A threshold parameter d_max is introduced as an upper bound for this travel time as a proxy to study the robustness of the solution w.r.t. consumer time-to-travel for charging. The decision variables include binary variables x_j indicating whether charging station candidate j is selected or not and y_ij indicating whether demand node i is assigned to be served by charging station j. In addition, we use integer decision variables v_ij^k and u_j, denoting the number of electric vehicles of type k from node i charged at node j and the number of units of charging connectors installed at node j, respectively. x_j = 1, if station j ∈ J is selected 0, otherwise y_ij = 1, if node i ∈ I is assigned to node j ∈ J 0, otherwise, v_ij^k ∈{0, 1, ⋯}, ∀ i ∈ I, j ∈ J, k ∈ K u_j ∈{0, 1, ⋯}, ∀ j ∈ J Each opened station j incurs a daily cost h_j and can only accommodate q_j charging connectors due to limited space. Each charging connector incurs a daily operational cost g and has a limited daily charging capacity of c_j (measured in charging minutes per day). A vehicle of type k takes e_k kWh of energy and t_k time to charge using fast-charging technology. We use the electricity price, denoted by r, to convert the energy used to monetary value. §.§ Objective Function The objective is to maximize daily profits under random disruption events at each station, i.e., the revenue from all undisrupted stations minus operational and investment costs. We add a penalty term, proportional to the travel time with penalty rate s, for any customer demands left unmet due to the disruptions, in order to study proper incentive mechanisms for achieving more robust solutions in the ablation study. To this end, we consider each charging station j ∈ J to have a reliability p_j = ℙ(Z_j ≤ z_j) = 𝔼 [𝕀(Z_j ≤ z_j)]. The disruption events are simulated utilizing the random variable Z = [Z_j]_∀ j ∈ J∼ q. Z_j represents the underlying state triggering an electricity disruption at station j whenever it exceeds some threshold z_j. In practice, electricity disruption events may occur due to extreme weather, spiking demand, or fallen trees <cit.> (in which Z_j might represent the wind speed, cumulative region-wide demand, or fallen tree branch weight that hits the electrical equipment, respectively, and z_j is the equipment threshold for withstanding the corresponding random realization of Z_j). <cit.> presents a review of how EV charging infrastructures strain the electricity grids, which, in turn, exacerbates the likelihood of electricity outages, especially in developing countries. With this consideration, and assuming we have prior information about p_j, ∀ j ∈ J, the objective function can be formulated as follows: max ∑_i ∈ I∑_j ∈ J∑_k ∈ Kr e_k p_j v_ij^k_revenue - s d_ij (1-p_j) v_ij^k_penalty - ∑_j ∈ J(g u_j + h_j x_j)_total cost. 
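To make the formulation concrete, the sketch below shows how the decision variables and the reliability-weighted objective above could be assembled in an off-the-shelf MIP modeling layer. The paper does not specify an implementation; PuLP is used here purely for illustration, and all parameter values (r, s, g, e_k, h_j, d_ij, p_j) and set sizes are small placeholders rather than the Surabaya data.

import pulp

# Tiny illustrative instance; the paper's case study uses |I| = 31, |J| = 11, |K| = 2.
I = range(3)    # demand nodes
J = range(2)    # candidate stations
K = range(2)    # vehicle types: 0 = motorcycle, 1 = car

r, s, g = 2.0, 0.5, 50.0                            # electricity price, penalty rate, connector cost (placeholders)
e = {0: 1.5, 1: 40.0}                               # kWh per charge by vehicle type (placeholders)
h = {j: 300.0 for j in J}                           # daily station cost (placeholder)
d = {i: {j: 15.0 + 5 * i for j in J} for i in I}    # travel time i -> j (placeholder)
p = {j: 0.97 for j in J}                            # prior station reliability P(Z_j <= z_j) (placeholder)

x = pulp.LpVariable.dicts("x", J, cat=pulp.LpBinary)                       # open station j
y = pulp.LpVariable.dicts("y", (I, J), cat=pulp.LpBinary)                  # assign demand i to station j
v = pulp.LpVariable.dicts("v", (I, J, K), lowBound=0, cat=pulp.LpInteger)  # type-k vehicles from i charged at j
u = pulp.LpVariable.dicts("u", J, lowBound=0, cat=pulp.LpInteger)          # connectors installed at j

model = pulp.LpProblem("ev_charging_reliability_weighted", pulp.LpMaximize)
model += (
    pulp.lpSum(r * e[k] * p[j] * v[i][j][k] for i in I for j in J for k in K)              # expected revenue
    - pulp.lpSum(s * d[i][j] * (1 - p[j]) * v[i][j][k] for i in I for j in J for k in K)   # expected penalty
    - pulp.lpSum(g * u[j] + h[j] * x[j] for j in J)                                        # operating + investment cost
)

The constraints introduced in the next subsection can be attached to this model with the same += syntax before calling model.solve().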
On the other hand, if p_j is not available, then we can use simulation to estimate the following objective: max ∑_i ∈ I∑_j ∈ J∑_k ∈ K r e_k v_ij^k 𝔼[𝕀(Z_j≤ z_j) ]_revenue - s d_ij v_ij^k 𝔼[𝕀(Z_j > z_j) ]_penalty - ∑_j ∈ J(g u_j + h_j x_j)_total cost, where 𝕀(Z_j≤ z_j) is a binary indicator of whether station j avoids disruption: 𝕀 (Z_j≤ z_j) = 1, if Z_j≤ z_j 0, otherwise. Monte Carlo (MC) simulation is one of the most practical methods to achieve this. MC uses n i.i.d. copies of the random variable to estimate the expectation. For each j ∈ J, we first generate Z_j1, Z_j2, ⋯, Z_jn. We then check whether the disruption event is triggered at the l-th sample and output the binary indicators I_jl = 𝕀 (Z_jl≤ z_j). Then, we use the binary indicators in our final (robust) objective function: max ∑_i ∈ I∑_j ∈ J∑_k ∈ K∑_l=1^n 1/n( r e_k v_ij^k I_jl_revenue - s d_ij v_ij^k (1-I_jl)_penalty) - ∑_j ∈ J (g u_j + h_j x_j)_total cost. We call our model the Robust Model in the experiment, to contrast with the original (Non-Robust) model proposed by <cit.>, which is attained when setting I_jl = 1 for all j ∈ J, l ∈{1, 2, ⋯, n} in (<ref>) during optimization. The solutions of both models are evaluated under random disruption events generated using a different random seed. §.§ Constraints The maximization of the objective function in (<ref>) is subject to a set of constraints: s.t.  ∑_k ∈ K v_ij^k ≤ y_ij M, ∀ i ∈ I, j ∈ J, d_ij y_ij≤ d_max , ∀ i ∈ I, j ∈ J, ∑_j ∈ J v_ij^k = w_i^k, ∀ i ∈ I, k ∈ K, ∑_i ∈ I∑_k ∈ K t_k v_ij^k ≤ c_j u_j, ∀ j ∈ J, u_j ≤ x_j q_j, ∀ j ∈ J, ∑_i ∈ I y_ij≤ x_j M, ∀ j ∈ J, ∑_j ∈ J y_ij≥ 1, ∀ i ∈ I, ∑_j ∈ J x_j ≤ N, ∑_j ∈ J∑_l=1^n 1/n y_ij I_jl≥p̅, ∀ i ∈ I, ∑_j ∈ J∑_l=1^n 1/n v_ij^k I_jl≥∑_j ∈ J v_ij^kp̅, ∀ i ∈ I, k ∈ K. In the above formulation, constraint (<ref>) ensures that charging stations can only charge vehicles if assigned. Constraint (<ref>) ensures the maximum time-to-charge for consumers does not exceed the set threshold d_max. Constraint (<ref>) ensures all charging demands are fulfilled, where w_i^k denotes the number of vehicles of type k to charge at demand point i. Constraint (<ref>) ensures that the charging capacity required to fulfill each station's assigned demand does not exceed the installed capacity. Constraint (<ref>) restricts the number of charging connectors installed at each station. Constraint (<ref>) ensures that demands are assigned only to opened stations. Constraint (<ref>) guarantees that at least one station covers each demand. Constraint (<ref>) limits the maximum number of stations to open. Finally, constraints (<ref>)-(<ref>) ensure that the probability that at least one of the assigned charging stations serving a given demand is not under an electricity outage is greater than or equal to p̅, assuming that outages between stations are independent. §.§ Robust vs. Non-Robust Model The consideration of p_j in our formulation is part of our attempt to boost the robustness of the original model and address the unique challenges and characteristics of urban areas in developing countries. The Non-Robust Model ignores disruption probability, resulting in a more simplified model. Our formulation is general, in the sense that we can attain the earlier model by setting I_jl = 1 for all j ∈ J, l ∈{1, 2, ⋯, n}. This earlier model ignores disruption uncertainty and often results in an overly cost-optimized solution that can suffer serious performance degradation when disruption occurs. 
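Continuing the sketch above (and reusing its sets, parameters, and decision variables), the snippet below draws Monte Carlo disruption indicators I_jl from an assumed Gaussian model for Z_j, plugs their sample averages into the robust objective, and adds a few representative constraints: assignment, demand fulfillment, connector capacity, station budget, coverage, and the reliability requirement. The Gaussian means, standard deviations, thresholds, and the remaining parameters (M, N, p̅, w_i^k, t_k, c_j, q_j) are illustrative assumptions, not the values used in the paper.

import numpy as np

rng = np.random.default_rng(0)
n = 2000                                        # simulation replications
mu    = {j: 50.0 for j in J}                    # assumed mean of the disruption driver Z_j
sigma = {j: 10.0 for j in J}                    # assumed std of Z_j
z_thr = {j: 70.0 for j in J}                    # threshold z_j: disruption whenever Z_j > z_j

I_sim = {j: (rng.normal(mu[j], sigma[j], size=n) <= z_thr[j]).astype(float) for j in J}
p_hat = {j: I_sim[j].mean() for j in J}         # sample average of the indicators I_jl

M, N, p_bar = 10_000, 5, 0.9                    # big-M, station budget, reliability target (placeholders)
w = {i: {0: 40, 1: 10} for i in I}              # charging demand w_i^k (placeholder)
t = {0: 20.0, 1: 60.0}                          # charging time per vehicle in minutes (placeholder)
c = {j: 1440.0 for j in J}                      # daily connector capacity in minutes
q = {j: 8 for j in J}                           # max connectors per station

model = pulp.LpProblem("ev_charging_robust", pulp.LpMaximize)
model += (
    pulp.lpSum((r * e[k] * p_hat[j] - s * d[i][j] * (1 - p_hat[j])) * v[i][j][k]
               for i in I for j in J for k in K)
    - pulp.lpSum(g * u[j] + h[j] * x[j] for j in J)
)
for i in I:
    for j in J:
        model += pulp.lpSum(v[i][j][k] for k in K) <= M * y[i][j]                  # charge only where assigned
    for k in K:
        model += pulp.lpSum(v[i][j][k] for j in J) == w[i][k]                      # fulfil all demand
for j in J:
    model += pulp.lpSum(t[k] * v[i][j][k] for i in I for k in K) <= c[j] * u[j]    # connector capacity
    model += u[j] <= q[j] * x[j]                                                   # connector limit at open stations
    model += pulp.lpSum(y[i][j] for i in I) <= M * x[j]                            # assign only to open stations
model += pulp.lpSum(x[j] for j in J) <= N                                          # station budget
for i in I:
    model += pulp.lpSum(y[i][j] for j in J) >= 1                                   # each demand covered
    model += pulp.lpSum(p_hat[j] * y[i][j] for j in J) >= p_bar                    # reliability of coverage
model.solve(pulp.PULP_CBC_CMD(msg=False))
print([j for j in J if x[j].value() == 1])

Only p_hat changes between the Robust and Non-Robust variants here: the Non-Robust benchmark corresponds to fixing p_hat[j] = 1 for all stations, as noted above.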
Fig. <ref> (left) shows a non-robust solution where only two stations are selected to cover 30+ demand nodes in the city of Surabaya. In this solution, many demand nodes are covered by only one station (no redundancy), and thus, when an electricity disruption hits that charging station, the charging demands will not be met and the residents are served very poorly. Our proposed robust model incorporates the disruption uncertainty and optimizes the location and capacity of EV charging stations while balancing the trade-offs between consumer service level and economic profits. This incorporation maintains a linear objective function and linearized constraints, which still yields an MIP model that can be solved efficiently using standard solvers. §.§ Improving the Efficiency of Disruption Probability Estimation While the proposed objective function in (<ref>) is still linear, the sample size n required to achieve high statistical confidence might blow up as the disruption probabilities 1 - p_j, ∀ j ∈ J become lower (e.g., as the utilities in developing countries mature). Note that our objective essentially estimates p_j by generating enough values Z_j1, Z_j2, ⋯, Z_jn and computing p̂_j = 1/n∑_l=1^n 𝕀(Z_jl≤ z_j), which can be shown to be unbiased and to converge to p_j. Under the assumption that Z = [Z_j]_∀ j ∈ J∼ q are independently and identically distributed, and z_j, ∀ j ∈ J are fixed threshold values, the estimator p̂_j is an unbiased and consistent estimator of p_j. The proof is straightforward but is provided here for completeness. Unbiasedness: 𝔼[p̂_j] = 𝔼[ 1/n∑_l=1^n 𝕀(Z_jl≤ z_j) ] = 1/n∑_l=1^n 𝔼[ 𝕀(Z_jl≤ z_j) ] = 1/n∑_l=1^n p_j = p_j, where the first equality follows from the definition of p̂_j, the second equality follows from the linearity of the expectation operator applied to the sum of indicator functions, and the third equality follows from the fact that the Z_jl are independently and identically distributed with ℙ(Z_jl≤ z_j) = p_j. Consistency: By the law of large numbers, for any ϵ > 0, lim_n →∞ℙ(|p̂_j - p_j| ≥ϵ) = 0. Hence, p̂_j converges in probability to p_j, and thus it is a consistent estimator of p_j. Suppose that we already have an estimate p̂_j, ∀ j ∈ J. We can now plug the estimate into our optimization problem, giving max ∑_i ∈ I∑_j ∈ J∑_k ∈ Kr e_k p̂_j v_ij^k_revenue - s d_ij (1-p̂_j) v_ij^k_penalty - ∑_j ∈ J(g u_j + h_j x_j)_total cost s.t.  Constraint (<ref>)-(<ref>) ∑_j ∈ J y_ijp̂_j ≥p̅, ∀ i ∈ I ∑_j ∈ J v_ij^k p̂_j ≥∑_j ∈ J v_ij^kp̅, ∀ i ∈ I, k ∈ K . Note that this formulation using p̂_j, ∀ j ∈ J is equivalent to the robust model using the indicator variables I_jl, ∀ j ∈ J, l ∈{1, 2, ⋯, n} with the objective function (<ref>) introduced earlier. §.§.§ Estimating p̂_j to Sufficient Accuracy While p̂_j is unbiased and consistent, the sample size needed to ensure a precise estimate can be arbitrarily large, especially when we want higher accuracy (e.g., when the disruption rate 1-p_j is tiny, such as in developed countries where utility service has high reliability). Suppose we want δ-accuracy and a 1-α confidence level for estimating p_j = 0.9999. Then, we can use Hoeffding's inequality to determine the sample size. According to Hoeffding's inequality, for any δ > 0, the probability that the estimate deviates from the true value by more than δ is bounded by ℙ(|p̂_j - p_j| > δ) ≤ 2e^-2nδ^2, where n is the sample size. Hence, if we want to ensure a 1-α confidence level, we set 2e^-2nδ^2 = α and solve for n: n = 1/2δ^2ln(2/α). 
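A small helper for evaluating this bound is sketched below; the illustrative call uses an arbitrary pair δ = 0.01 and α = 0.05, not the setting discussed next.

import math

def hoeffding_sample_size(delta: float, alpha: float) -> int:
    """Smallest n with 2 * exp(-2 * n * delta**2) <= alpha."""
    return math.ceil(math.log(2.0 / alpha) / (2.0 * delta ** 2))

# e.g., +/- 0.01 accuracy at 95% confidence:
print(hoeffding_sample_size(delta=0.01, alpha=0.05))   # 18445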
For instance, if we want an accuracy of δ = 0.0001 and a confidence level of 1-α = 0.95, then the required sample size is n = 1/2(0.0001)^2ln(2/0.05) ≈ 114,763, which is quite large. Figure <ref> shows the sample size (on a log_10 scale) for various α and δ values. Note, however, that this is an upper bound, and in practice this sample size is not always necessary. If we have N := |J| stations and each p_j has to be estimated using n ≈ 114,763 samples, then we will need N × 114,763 samples in total before solving the optimization problem, which can be overly burdensome if each simulation run involves complex systems. Thus, we seek ways to improve efficiency and reduce the variance of the estimator. §.§.§ Improving Efficiency via Control Variates One way to improve the estimation efficiency and thus reduce the sample size is through the use of control variates (CV) <cit.>. CV involves introducing a new variable that is correlated with the random variable of interest and whose expectation can be computed easily. This cheaper-to-compute variable is then used to adjust the estimate of the quantity of interest, reducing the variance of the estimator. In our case, we can use CV to estimate p_j = ℙ(Z_j ≤ z_j). Let g(Z_j) be a function of Z_j that is easy to compute. Specifically, if we consider a Gaussian q = N(μ, σ) and Z_j ∼ q, we can use g(z) = Φ(z), the CDF of the standard normal distribution, to construct the CV. The CV estimator for p_j is computed as p̂_j = 1/n∑_l=1^n 𝕀(Z_jl≤ z_j) + π_j ( 1/n∑_l=1^n𝕀 (X_jl≤z̅_j)-g(z̅_j) ), where Z_jl is the l-th sample from the distribution q, the X_jl's are standard normal random variables correlated with the Z_jl's, and z̅_j is the scaled version of z_j chosen to threshold X_jl. Finally, π_j is chosen to minimize the variance: π_j = - Cov( ∑_l=1^n 𝕀(Z_jl≤ z_j), ∑_l=1^n 𝕀(X_jl≤z̅_j) )/Var(∑_l=1^n 𝕀(X_jl≤z̅_j)). We show that the CV estimator is unbiased and achieves a variance reduction in the following remarks. The reduction in variance, subsequently, allows us to reduce the sample size needed to achieve the same levels of δ and α. The CV estimator (<ref>) is unbiased for p_j. The proof is straightforward, showing 𝔼[p̂_j] = p_j. 𝔼[p̂_j] = 1/n∑_l=1^n𝔼[𝕀(Z_jl≤ z_j)] +π_j (1/n∑_l=1^n𝔼[ 𝕀(X_jl≤z̅_j)]-g(z̅_j) ) = 1/n∑_l=1^np_j + π_j (1/n∑_l=1^n g(z̅_j) ) - π_j g(z̅_j) = p_j. Assuming we can generate highly correlated random variables Z_jl and X_jl simultaneously and choose the optimal π_j (<ref>), the CV estimator (<ref>) attains a variance reduction. Note that the variance without using CV is Var(p̂_j) = 1/n^2Var(∑_l=1^n𝕀(Z_jl≤ z_j)). With CV, the variance of the estimator is Var(p̂_j) = 1/n^2( Var(∑_l=1^n𝕀(Z_jl≤ z_j)) +2π_j Cov(∑_l=1^n𝕀(Z_jl≤ z_j),∑_l=1^n𝕀(X_jl≤z̅_j) ) +π_j^2 Var(∑_l=1^n𝕀(X_jl≤z̅_j)) ) . Plugging in the optimal π_j for our problem and simplifying, we have Var(p̂_j) = 1/n^2Var(∑_l=1^n𝕀(Z_jl≤ z_j)) - Cov^2(∑_l=1^n 𝕀(Z_jl≤ z_j), ∑_l=1^n 𝕀(X_jl≤z̅_j) )/n^2 Var(∑_l=1^n 𝕀(X_jl≤z̅_j)). We can see that the second term on the RHS is non-positive, which means that the variance is reduced the most if 𝕀(Z_jl≤ z_j) and 𝕀(X_jl≤z̅_j) are highly correlated (either positively or negatively), which intuitively means X_jl provides some information about Z_jl. It is important to note, however, that in practice we often use sample covariances and sample variances to compute π_j, so the CV estimator might not achieve this theoretical variance reduction. 
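The snippet below sketches this CV construction for a single station under the Gaussian assumption: the control X is a standard normal built to be correlated with Z_j (here with correlation ρ = 0.9, an arbitrary choice), its indicator mean Φ(z̄_j) is known in closed form, and π_j is estimated from sample covariances. Repeating the estimate many times and comparing standard deviations gives a rough empirical check of the variance reduction; the disruption-model parameters are again placeholders.

import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)
mu, sigma, z_thr = 50.0, 10.0, 70.0             # assumed model Z_j ~ N(mu, sigma) and threshold z_j
rho = 0.9                                        # assumed correlation between Z_j and the control X_j
z_bar = (z_thr - mu) / sigma                     # scaled threshold for the control
g_zbar = 0.5 * (1.0 + erf(z_bar / sqrt(2.0)))    # Phi(z_bar), known exactly

def estimate_p(n: int):
    z = rng.normal(mu, sigma, size=n)
    x = rho * (z - mu) / sigma + sqrt(1.0 - rho**2) * rng.standard_normal(n)  # standard normal control
    a = (z <= z_thr).astype(float)               # indicator of interest, unknown mean p_j
    b = (x <= z_bar).astype(float)               # control indicator, known mean g_zbar
    cov = np.cov(a, b)                           # sample covariance matrix
    pi = -cov[0, 1] / cov[1, 1]                  # plug-in optimal coefficient pi_j
    return a.mean(), a.mean() + pi * (b.mean() - g_zbar)   # (MC estimate, CV estimate)

reps = np.array([estimate_p(2_000) for _ in range(200)])
print("MC std: %.5f   CV std: %.5f" % (reps[:, 0].std(), reps[:, 1].std()))

The size of the reduction depends on how strongly the control indicator is correlated with the disruption indicator; ρ here is only an illustrative choice, not the construction used for the results reported below.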
§ NUMERICAL EXPERIMENTS In this study, we examine the EV and electricity data obtained from Surabaya, Indonesia. The EV dataset includes 11 candidate charging stations, 31 sub-regions of the city representing demand nodes, and two vehicle types, namely motorcycles (k=1) and cars (k=2). Figure <ref> illustrates the locations of the candidate charging stations (red nodes) and demand nodes (blue nodes), where the size of the blue nodes denotes the size of the demand at each location. This charging demand, i.e., the number of EVs of type k at each demand node i, is represented by w_i^k. The average travel time from demand node i to charging station j using vehicle k, d_ij^k, is obtained from Google Maps. The full capacity of each charging connector is taken as c_j = 1440 minutes/day for all j ∈ J with 24/7 operational hours, and the number of connectors installed at station j ∈ J is limited to q_j = 8 for all j ∈ J due to land availability at the candidate locations. We estimate the disruption probability by simulating random electricity demands Z = [Z_j]_∀ j ∈ J, where Z_j ∼ q_j. We obtained this masked data from the local electricity company, which performed data masking and rescaling for privacy and security reasons. The masked mean and standard deviation of q_j along with the demand threshold z_j are summarized in Table <ref>. The simulation uses this probability model to generate random demands, and an electricity disruption event is triggered for the whole day at station j when Z_j ≥ z_j. Hence, we have station reliability p_j = ℙ(Z_j ≤ z_j), ∀ j ∈ J. The other experiment parameters are summarized in Table <ref>. We then build our model by running n simulation replications and computing the mean of the objective function values. The result is summarized in Fig. <ref> and Fig. <ref> for n up to 10,000. The selected stations and demand assignments for each model solution are shown in Fig. <ref> (left: Non-Robust Model, right: Robust Model) and Fig. <ref> (left: Misspecified Model #1, right: Misspecified Model #2). Misspecified Model #1 is built assuming 0.95p_j, while Misspecified Model #2 assumes 1.05p_j for all j ∈ J, reflecting underestimation and overestimation of service reliability, respectively. The CV estimator is constructed using standard normal random variables X_jl with z̅_j properly scaled. This yields indicator variables 𝕀(X_jl≤z̅_j) that are highly correlated with 𝕀(Z_jl≤ z_j). We show the estimated station reliability (p_j) using MC and CV in Fig. <ref> and its standard error in Fig. <ref> to highlight the superior estimation efficiency of the CV estimator. § DISCUSSION AND FINDINGS In this section, we discuss our findings regarding the robustness of the optimal solutions against disruptions even when the probability is misspecified and the enhanced disruption simulation efficiency that allows robust decision-making for our problem against disruption uncertainties. We also highlight the limitation of the model and our outlook for future research. §.§ Robustness of the Optimal Solutions Figure <ref> summarizes the objective function values obtained by benchmarking the Robust Model, Non-Robust Model, Misspecified Model #1 (underestimated station reliability), and Misspecified Model #2 (overestimated station reliability). The optimal solution of the Robust Model (represented by orange and brown lines) outperforms the other models. Conversely, the solution of the Non-Robust Model (represented by blue and purple lines) yields the lowest objective value. 
The Non-Robust Model prioritizes minimizing operational and investment costs, resulting in only two charging stations being opened. This leads to lower revenue and higher penalties, particularly during disruptions. In contrast, the Robust Model balances operational and investment costs with the potential revenue losses and penalties incurred during disruptions. As a result, the Robust Model opens three charging stations, distributing the large charging stations across the geography of the city, which results in an 18% higher total cost than the Non-Robust Model solution. However, it provides better protection against revenue loss and penalties incurred during disruptions: this added robustness leads to a 10% higher revenue and a 60% lower penalty when disruptions occur, yielding an approximately 13% higher overall objective. We also suggest that these charging stations implement a smart energy management policy <cit.> for further robustness. Figure <ref> shows that the Robust Model's balanced solution covers more demand points with two charging stations, resulting in a better revenue and penalty trade-off than the Non-Robust Model.

The Robust Model with misspecified station reliability still provides some level of robustness, as evidenced by the objective values of both the underestimation and overestimation scenarios. These models' solutions have objective values lower than the Robust Model solution but higher than the Non-Robust Model solution. Thus, while accurately estimating station reliability is beneficial, the model can still tolerate imperfections. When the Robust Model is used with underestimated station reliability, the solution tends to be more conservative and provides a larger buffer against disruptions. This results in a solution with four charging stations, with over 90% of demand points covered by two or more charging stations. On the other hand, overestimating station reliability leads to a solution with only three charging stations, resulting in a lower cost and an objective value very close to that of the Robust Model. Figure <ref> illustrates the charging station placement for both the underestimated and overestimated scenarios.

§.§ Improved Simulation Efficiency using the CV Estimator

We now discuss how we incorporate the simulation into our robust model. The main challenge centers on incorporating the station reliabilities p_j, ∀ j ∈ J (and thus the corresponding disruption probabilities 1-p_j, ∀ j ∈ J), which may require a large sample size to achieve the desired precision level and thus increase the computational burden of evaluating the objective function (either (<ref>) or (<ref>)) and the reliability constraints (either (<ref>)-(<ref>) or (<ref>)-(<ref>)). While both the MC and CV estimators of the objective values are unbiased and converge to the same value for each model, the proposed CV estimation approach appears to effectively reduce the estimation variance, yielding tighter confidence intervals in Fig. <ref> (brown, silver, pink, and purple lines vs. orange, red, green, and blue lines). Furthermore, Fig. <ref> highlights that all CV estimators attain about 10× smaller standard errors than their MC counterparts. This means that CV improves the simulation efficiency and reduces the sample size required to attain the same precision by up to a factor of 10 compared to the naive MC simulation approach, without any loss of accuracy.
The superior efficiency of the CV-based estimation technique, which reduces the sample-size requirement while maintaining accuracy, allows us to incorporate the estimated station reliabilities into the objective function and reliability constraints. The resulting Robust Model can thus be solved without significantly increasing the computational cost. The high efficiency of CV over MC in estimating the reliability probabilities (even for values close to 1.00) is emphasized in Fig. <ref>, in which all CV estimates attain much tighter confidence intervals regardless of the target probability. In this estimation, again, the CV estimators attain a 10× smaller standard error for the same sample size used by the MC estimators. This highlights the applicability of our robust modeling method to problems where electricity disruptions are extremely rare and need to be estimated with very high precision.

§.§ Limitations of the Current Work

Although our CV-assisted robust model provides optimal solutions that strike a balance between minimal cost and buffering against electricity disruptions, we acknowledge that scaling it to larger problems, such as a larger set of candidate charging stations and more fine-grained demand points, relies heavily on the efficiency of the MIP solver. Moreover, we acknowledge that the electricity pricing rate used in this study is simplified, whereas more recent dynamic electricity pricing schemes are available and more realistic, though highly nonlinear. Incorporating such schemes could improve the accuracy of our revenue model, but it may not be feasible with our current solver. Additionally, the CV estimation approach used in this study is based on prior knowledge about the probability model of the random variable triggering the disruption events. In practice, such knowledge may not be easy to obtain. However, we recognize that machine learning models can be leveraged to extract features from historical datasets and estimate disruption events. In future work, we can also leverage machine learning techniques to estimate the battery capacity of EVs <cit.> and thus better predict the charging time of each arriving demand, extending our model to incorporate nonlinear dynamics and more realistic operations.

§ CONCLUSION

In this study, we propose a simulation-based optimization model to address the critical issue of designing robust plans for EV charging stations in developing countries, where electricity disruptions may occur frequently and impact customer satisfaction. Our model considers service reliability as a key factor and incorporates it into the objective function and constraints using the control variates (CV) variance reduction technique to improve simulation efficiency. Our numerical experiment, based on a dataset from Surabaya, Indonesia, demonstrates the superior performance of our robust model solution compared to its non-robust counterpart, even in cases of underestimated or overestimated service reliability. While our proposed model shows promise, we acknowledge its reliance on an efficient MIP solver and its use of a simplified electricity pricing rate. Furthermore, our CV estimator is based on prior knowledge of the probability model, which may not be available in practice. As such, we seek to extend our model to cover nonlinear MIP and learning-based disruption estimation in future work.
Nonetheless, our model's ability to reduce the required sample size by up to 10× compared to Monte Carlo simulations highlights its potential to provide a robust solution to the challenges associated with EV charging infrastructure under random electricity disruptions.
http://arxiv.org/abs/2307.05532v1
20230708070820
Opening up ChatGPT: Tracking openness, transparency, and accountability in instruction-tuned text generators
[ "Andreas Liesenfeld", "Alianda Lopez", "Mark Dingemanse" ]
cs.CL
[ "cs.CL" ]
Andreas Liesenfeld (ORCID 0000-0001-6076-4406), Alianda Lopez (ORCID 0009-0004-5873-5496), and Mark Dingemanse (ORCID 0000-0002-3290-5723), Centre for Language Studies, Radboud University, The Netherlands.

Large language models that exhibit instruction-following behaviour represent one of the biggest recent upheavals in conversational interfaces, a trend in large part fuelled by the release of OpenAI's ChatGPT, a proprietary large language model for text generation fine-tuned through reinforcement learning from human feedback (LLM+RLHF). We review the risks of relying on proprietary software and survey the first crop of open-source projects of comparable architecture and functionality. The main contribution of this paper is to show that openness is differentiated, and to offer scientific documentation of degrees of openness in this fast-moving field. We evaluate projects in terms of openness of code, training data, model weights, RLHF data, licensing, scientific documentation, and access methods. We find that while there is a fast-growing list of projects billing themselves as `open source', many inherit undocumented data of dubious legality, few share the all-important instruction-tuning (a key site where human annotation labour is involved), and careful scientific documentation is exceedingly rare. Degrees of openness are relevant to fairness and accountability at all points, from data collection and curation to model architecture, and from training and fine-tuning to release and deployment.

CCS Concepts: Computing methodologies, Natural language generation; Hardware, Emerging technologies; General and reference, Surveys and overviews; Information systems, Open source software; General and reference, Evaluation.

[Teaser figure: a table in which each surveyed project, together with its base language model and reinforcement learning components, is assessed on the 12 evaluation features of Table 1 as a pass, partial pass, or fail; the full contents of the table are available in the data repository that accompanies the paper.]
20 April 2023 [accepted]26 May 2023 Opening up ChatGPT: Tracking openness, transparency, and accountability in instruction-tuned text generators Mark Dingemanse August 12, 2023 ============================================================================================================ § INTRODUCTION Open research is the lifeblood of cumulative progress in science and engineering. In today's technological landscape, it is hard to find any research finding or technology that does not rely to a significant extent on the fruits of open research, often publicly funded. For instance, AlexNet <cit.>, the deep neural net kickstarting the deep learning revolution a decade ago, derived its strength from a human-annotated dataset of 3.2 million images created by Princeton computer scientists <cit.>. And the striking progress in protein folding in recent years (with the AlphaFold deep learning system predicting the structure of nearly all known proteins <cit.>, where decades of prior work had reached a comparatively meagre 17%) has only been possible thanks to openly deposited structural data in the Protein Data Bank that goes back half a century <cit.>. The talk of the town in conversational interfaces today is undoubtedly ChatGPT, an instruction-tuned text generator that impresses many because of its fluid prose. Yet striking new capabilities should not detract us from the risks of proprietary systems. Only three months after OpenAI rolled out ChatGPT, it abruptly discontinued API support for its widely used Codex model that had been available as a “free limited beta” since 2021 <cit.> — surprising users with only three days' notice and undercutting at one blow the reproducibility of at least 100 research papers.[See https://aclanthology.org/search/?q=openai-davinci-002aclanthology.org/search/?q=openai-davinci-002 (the same search term yields >150 arXiv preprints and >800 entries on Google Scholar) ] This is a stark reminder that proprietary systems are designed to offer smooth onboarding and convenience but come at the price of user lock-in and a lack of reliability. Proprietary systems come with considerable further risks and harms <cit.>. They tend to be developed without transparent ethical oversight, and are typically rolled out with profit motives that incentivise generating hype over enabling careful scientific work. They allow companies to mask exploitative labour practices, privacy implications <cit.> and murky copyright situations <cit.>. Today there is a growing division between global academia and the handful of firms who wield the computational resources required for training large language models. This “Compute Divide” <cit.> contributes to the growing de-democratisation of AI. Against this, working scientists call for avoiding the lure of proprietary models <cit.>, for decolonizing the computational sciences <cit.>, and for regulatory efforts to counteract harmful impacts <cit.>. §.§ Why openness matters Open data is only one aspect of open research; open code, open models, open documentation, and open licenses are other crucial elements <cit.>. Openness promotes transparency, reproducibility, and quality control; all features that are prequisites for supporting robust scientific inference <cit.> and building trustworthy AI <cit.>. Openness also allows critical use in research and teaching. For instance, it enables the painstaking labour of documenting ethical problems in existing datasets <cit.>, important work that can sometimes result in the retraction of such datasets <cit.>. 
In teaching, it can help foster critical computational literacy <cit.>. Despite strong evidence of the scientific and engineering benefits of open research practices, openness is not a given in machine learning and AI research <cit.>. Gundersen and Kjensmo, in one of the most detailed examinations of reproducibility in AI to date <cit.>, systematically surveyed 400 papers for a range of open science practices. They found that only about a third of papers share test datasets, only 8% share source code, and only a single paper shared training, validation and test sets along with results. We are not aware of more recent systematic surveys of this kind (nor do we attempt this here), but the increasing trend of corporate releases with glossy blog posts replacing peer-reviewed scientific documentation provides little reason for optimism. Openness is perhaps especially important for today's breed of instruction-following text generators, of which ChatGPT is the best known example. The persuasiveness of these language models is due in large part to an additional reinforcement learning component in which text generator output is pruned according to a reward function that is based on human feedback <cit.>, using insights from early work on evaluative reinforcement <cit.>. Human users appear to be highly susceptible to the combination of interactivity and fluid text generation offered by this technology. The ubiquity of ChatGPT interfaces makes it easy for anyone today to try out some prompt engineering (while freely providing further training data to OpenAI) — but it does not allow one to gain a critical and holistic understanding of the constraints and capabilities of such systems, nor of their risks and harms. For true progress in this domain, we will need open alternatives. In this paper, we survey alternatives to ChatGPT and assess them in terms of openness of data, models, documentation and access methods. The aim of our survey is threefold: to sketch some of the major dimensions along which it is useful to assess openness and transparency of large language models; to provide a view of the state of the art in open source instruction-tuned text generation; and to contribute towards a platform for tracking openness, transparency and accountability in this domain. §.§ Previous work Existing work reviewing and comparing large language models falls into two categories: informal lists and structured surveys. Informal lists are crowd-sourced pointers to available resources, from open RLHF datasets[https://github.com/yaodongC/awesome-instruction-datasetgithub.com/yaodongC/awesome-instruction-dataset ] to open examples of instruction-tuned text generators.[https://github.com/nichtdax/awesome-totally-open-chatgpt/blob/main/README.mdgithub.com/nichtdax/awesome-totally-open-chatgpt ] Systematic surveys of instruction-tuned language models are still rare and mostly focus on comparing model capabilities and performance, e.g., of “augmented language models” <cit.> and language models for writing code <cit.> (not our focus here). Complementary to our focus on degrees of openness in instruction-tuned models, a recent survey of generative AI systems more broadly focuses on gradience in release methods, from closed to staged to fully open <cit.>. An important development in this domain the introduction of data statements <cit.> and model cards <cit.>. 
These are structured documents that help creators document the process of curating, distributing and maintaining a dataset or model, and that help users to critically judge underlying assumptions, potential risks and harms, and potential for broader use. These resources have seen considerable uptake in the scientific community, though their adoption by for-profit entities lags behind. The risks of relying on proprietary solutions has spurred the development of several more open alternatives. For instance, the Bloom collaboration <cit.> is a team science project of unprecedented magnitude. It has trained and open-sourced a large language model based on a collection of almost 500 HuggingFace datasets amounting to 1.6TB of text and code in 46 spoken languages and 13 programming languages. <cit.>. A related initiative is The Pile <cit.>, a 800GB dataset of English text that serves as pre-training data for language models by EleutherAI <cit.>. Meta AI's LLaMA <cit.> provides researchers with access to a series of base models trained on data claimed to be `publicly available'. It should be noted that none of these initiatives have undergone rigorous peer-review or data auditing at this point, and that claims of openness do not cancel out problems, legal or otherwise. In recent years, the private company HuggingFace has emerged as an important hub in the open source community, bringing together developers and users of projects in machine learning and natural language processing. It offers infrastructure for hosting code, data, model cards, and demos <cit.>. It also provides a widely used setup for automated evaluation, generating leaderboards and allowing quick comparison on a number of automated metrics, making it somewhat of a balancing act between offering incentives for documentation and for SOTA-chasing <cit.>. Our focus here is not performance evaluation of the kind offered by leaderboards; instead it is to survey degrees of openness in the fast-evolving landscape of text generators. § METHOD We survey open-source instruction-tuned text generators and evaluate them with regard to openness, scientific documentation, and access methods. Since any survey in this fast-growing field deals with moving targets, we focus here mainly on dimensions of enduring relevance for transparency and accountability. An up to date list of all models surveyed can be found at https://osf.io/d6fsrosf.io/d6fsr. §.§ Requirements The target breed of models in focus here is characterized by the following two features: its architecture is at base a large language model with reinforcement learning from human feedback (LLM + RLHF) and it aims for openness and transparency (along degrees we quantify). Projects are not included if they are as proprietary and undocumented as ChatGPT (like Google's Bard), or if they merely provide a front-end that calls some version of ChatGPT through an OpenAI API (like Microsoft's Bing). We explicitly include small-scale projects and projects that are in early stage development if they are open, sufficiently documented, and released under an open source license. Querying academic search engines and open code repositories, we find at least 15 projects that have sprung up in the last six months alone. §.§ Survey elements We assess projects on 13 features divided over three areas (Table 1): availability, documentation, and access methods. For each feature, we document openness along a scale from maximum to partial to no openness and transparency. 
For licenses, only systems that are fully covered by a true open-source licence count as maximally open; less permissive or partial licensing counts as partially open; and non-open or unclear licensing situations count as closed. Figure 1 shows a snapshot of 15 projects assessed for all features, with degrees of openness colour-coded (✓, ∼, ×). Please refer to the data repository for more information about how each feature is evaluated, and for a more up-to-date listing. § RESULTS Projects roughly fall into two categories. First, there are small, relatively bare-bones projects that only provide source code and build on existing large language models. These projects often cannot share information on architecture, training data, and documentation because they inherit closed-source data from the LLMs they build on. They usually also do not provide APIs or other user interfaces. However, some such small projects do come with high-quality documentation and some build only on explicitly open LLMs. What such small projects lack in performance, they make up in utility for the open source community as they can provide useful entry points to learning about LLM+RLHF tools. We also identify a handful of projects backed by larger organisations, which aim to offer similar features to proprietary tools such as ChatGPT but are open-sourced and well documented. Two such initiatives top our list of open-source alternatives to ChatGPT: bigscience-workshop's xmtf tool building on the BLOOMZ and mT0 models (sponsored by HuggingFace) and LAION-AI's OpenAssistant based on an open, crowd-sourced RLHF training dataset (oasst1). OpenAssistant also features a text-based and a graphical user interface as well as web resources for crowd-sourcing training data. We also found that several projects are not as open as they initially seemed to be, with many of them merely wrappers of closed models. We observe three recurring issues in the area of availability and documentation. Inheritance of undocumented data. Many tools build on existing large language models (which we here call base models) and inherit the undocumented datasets (often web-scraped and often of dubious legality) these base models are trained on. Training data of the RLHF component is not shared. Building RLHF training datasets requires labour-intensive work by human annotators. The lack of RLHF training data is a major performance bottleneck for smaller research teams and organisations, and hampers reproducible research into the use of instruction-tuned text generators for conversational user interfaces. Papers are rare, peer-review even rarer. Most projects reviewed here follow the corporate `release by blog post' model. While there are some preprints, none of the systems we review is currently documented in a peer-reviewed paper. Habitually bypassing this important (albeit sometimes flawed) quality assurance mechanism allows systems to escape critical scrutiny and risks undermining scientific and ethical standards. Some other patterns are worth noting. One is the rise of synthetic data, especially for the instruction component. Prominent examples are Self-Instruct (derived from GPT3) <cit.>, and Baize, a corpus generated by having ChatGPT engage in interaction with itself, seeded by human-generated questions scraped from online knowledge bases <cit.>.
This stretches the definition of LLM + RLHF architectures because the reinforcement learning is no longer directly from human feedback but has a synthetic component, in effect parasitizing on the human labour encoded in source models. The consequences of using synthetic reinforcement learning data at scale are unknown and in need of close scrutiny. The derivative nature of synthetic datasets is probably one reason they are released specifically “for research purposes only” <cit.>, with commercial use strictly prohibited. This leads to an important wrinkle. Baize models and data are incorporated in several popular instruction-tuned text generators, including the Falcon family of models which bills itself as ready for “research and commercial utilization”[Technology Innovation Institute, https://falconllm.tii.ae/, June 7, 2023] in direct violation of Baize's prohibition against commercial use. This is merely one example of the complex dependencies embedded in these tools, and the legal quagmires obscured by simple claims of `openness'. § DISCUSSION The goal of this short paper has been to provide a critical review of degrees of openness in the fast-moving field of instruction-tuned large language models. We have found projects at varying stages of implementation, documentation, and useability. Most of them offer access to source code and some aspects of pre-training data, sometimes in legally ambiguous ways. Data from the reinforcement learning step, crucial to the simulation of instruction-following in these interfaces, is more elusive, provided by at best half of the initiatives. Strikingly, only a handful of projects are underpinned by a scientific write-up and none of them have as yet undergone scientific peer review. There are many shades of openness <cit.>, yet all of the projects surveyed here are significantly more open than ChatGPT. ChatGPT was announced in a company blog post and rolled out to the public with an interface designed to capture as much free human labour as possible, but without any technical documentation. (The RLHF component, arguably the biggest differentiator for the instruction-following behavior, was sketched in <cit.>, though without data.) Its follow-up GPT-4 continues OpenAI's tradition of openness in name only: it comes with an evaluation framework that primarily benefits the company yet contains the absolute minimum of technical documentation. In particular, an unreviewed preprint distributed by OpenAI and billed as a “technical report" <cit.> mostly provides cherry-picked examples and spends more space on crediting company workers for blog post content, communications, revenue, and legal advice than on actual technical details. (Companies like OpenAI sometimes give “AI safety" as a pretext for closedness; this is hard to take seriously when their own public-facing proprietary models provide clear and present harms <cit.>.) How can we foster more openness and accountability? First, incentives need changing. In high-stakes AI research, data work is often seen as low-level grunt work <cit.> and incentive structures generally encourage a `move fast and break things' mentality over careful scientific work <cit.>. But work that documents data provenance and traces harmful impacts <cit.> deserves major scholarly and societal credit. Here, AI and NLP might benefit from work in software engineering and infrastructure, where strong frameworks already exist to foster accountability for datasets <cit.>. 
Interactive model cards <cit.> offer a promising step towards a human-centered approach to documentation. Second, corporate capture and user lock-in are well-known strategies by which companies exercise control over scientific results and research infrastructure. In the age of large language models, this is amplified by the possibility to extract human labour and repackage it in amiable conversational formats. Openness not only aligns with principles of sound and ethical scholarship <cit.>; it also safeguards transparent and reproducible research <cit.>. Recent work on legal datasets offers an example in responsible data curation with insights that may be more broadly applicable <cit.>. Third, technology is never a fait accompli unless we make it so. It is one of the achievements of publicly funded science that it can afford to not jump on the bandwagon and instead make room for reflection <cit.>. Today's language technology landscape offers ample opportunities for what philosopher Ivan Illich has called counterfoil research: “Counterfoil research must clarify and dramatize the relationship of people to their tools. It ought to hold constantly before the public the resources that are available and the consequences of their use in various ways. It should impress on people the existence of any trend that threatens one of the major balances of which life depends” <cit.>. Among the consequences of unleashing proprietary LLM + RLHF models are untold harms to workers exploited in labeling data; energy demands of computational resources <cit.>; and tidal waves of plausible-looking text generated without regard for truth value (technically, bullshit <cit.>). One possible outcome of the kind of deeper understanding fostered by openness is a call for responsibly limited technology <cit.>. The spectre of regulation (a key way to keep corporate powers in check) is a powerful incentive for companies to keep things proprietary and so shield them from scrutiny. The systems we have surveyed here provide elements of a solution. Open to various degrees, they provide ways to build reproducible workflows, chart resource costs, and lessen reliance on corporate whims. § CONCLUSION Openness is not the full solution to the scientific and ethical challenges of conversational text generators. Open data will not mitigate the harmful consequences of thoughtless deployment of large language models, nor the questionable copyright implications of scraping all publicly available data from the internet. However, openness does make original research possible, including efforts to build reproducible workflows and understand the fundamentals of LLM + RLHF architectures. Openness also enables checks and balances, fostering a culture of accountability for data and its curation, and for models and their deployment. We hope that our work provides a small step in this direction. This research is funded by Dutch Research Council (NWO) grant 016.vidi.185.205 to MD. For the purpose of Open Access the authors have applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission. ACM-Reference-Format
http://arxiv.org/abs/2307.04126v1
20230709084201
Compactness of sequences of warped product circles over spheres with nonnegative scalar curvature
[ "Wenchuan Tian", "Changliang Wang" ]
math.DG
[ "math.DG" ]
http://arxiv.org/abs/2307.05258v1
20230711134726
Integrated Planning in Hospitals: A Review
[ "Sebastian Rachuba", "Melanie Reuter-Oppermann", "Clemens Thielen" ]
cs.AI
[ "cs.AI", "cs.DM", "math.OC" ]
Sebastian Rachuba (Center for Healthcare Operations Improvement & Research, University of Twente, Enschede, The Netherlands; University of Exeter, Medical School, Exeter, United Kingdom), Melanie Reuter-Oppermann (Center for Healthcare Operations Improvement & Research, University of Twente, Enschede, The Netherlands; Information Systems | Software & Digital Business group, Technical University of Darmstadt, Hochschulstr. 1, 64289 Darmstadt, Germany), and Clemens Thielen (TUM Campus Straubing for Biotechnology and Sustainability, Weihenstephan-Triesdorf University of Applied Sciences, Am Essigberg 3, 94315 Straubing, Germany; Department of Mathematics, School of Computation, Information and Technology, Technical University of Munich, Boltzmannstraße 3, 85748 Garching bei München, Germany)

Efficient planning of scarce resources in hospitals is a challenging task for which a large variety of Operations Research and Management Science approaches have been developed since the 1950s. While efficient planning of single resources such as operating rooms, beds, or specific types of staff can already lead to enormous efficiency gains, integrated planning of several resources has been shown to hold even greater potential, and a large number of integrated planning approaches have been presented in the literature over the past decades. This paper provides the first literature review that focuses specifically on the Operations Research and Management Science literature related to integrated planning of different resources in hospitals. We collect the relevant literature and analyze it regarding different aspects such as uncertainty modeling and the use of real-life data. Several cross comparisons reveal interesting insights concerning, e.g., relations between the modeling and solution methods used and the practical implementation of the approaches developed. Moreover, we provide a high-level taxonomy for classifying different resource-focused integration approaches and point out gaps in the literature as well as promising directions for future research.

Keywords: Operations Research; Hospital; Healthcare; Integrated Planning; Literature Review

§ INTRODUCTION

A well-performing healthcare system is a crucial part of a modern society and determines people's lives and livelihood <cit.>. The importance of a healthcare system is also reflected in the enormous spending required. For instance, an unprecedented 10.9% of the GDP of the European Union was devoted to healthcare in 2020 <cit.>. It is widely recognized that demand for healthcare will further increase in the future due to demographic changes such as growth in the elderly population in nearly all developed countries and increased longevity <cit.>. For instance, the share of over 65s (over 80s) in Germany increased from 20.6 % (5.2 %) to 22.0 % (7.1 %) between 2011 and 2021 <cit.>. Due to the unavailability of crucial resources such as staff (particularly physicians <cit.> and nurses <cit.>), however, increased demand cannot be addressed by simply increasing healthcare spending to fund additional treatment capacities. Instead, the available scarce resources have to be used as efficiently as possible in order to ensure the continued provision of high-quality care in the healthcare sector.
Good planning for efficient resource use in healthcare is a very challenging task due to various inherent characteristics that complicate planning decisions on all hierarchical levels – from long-term or strategic planning down to operational online decision making. These characteristics include (1) the wide-spread organisational subdivision of central entities such as hospitals <cit.>, (2) conflicting objectives and lack of cooperation between involved parties such as physicians, nurses, or administrators <cit.>, (3) unavailability of crucial information required for planning and control <cit.>, and (4) uncertainty and high fluctuation in the daily requirements for care <cit.>. Consequently, advanced planning methods are necessary in order to provide high-quality decision support to decision makers and use the available resources efficiently. Operations Research (OR) and Management Science (MS) offer a variety of scientific approaches for the efficient management and planning of limited resources that are applied with enormous success in healthcare since the 1950s <cit.>. Extensive overviews on OR/MS in healthcare are provided by Pierskalla and Brailer <cit.>, Rais and Viana <cit.>, Hulshof et al. <cit.>, and Jha et al. <cit.>. Surveys focused on methods for a particular, important resource are available for operating rooms <cit.>, inpatient beds <cit.>, intensive care units <cit.>, physicians <cit.>, and nurses <cit.>. An efficient planning of single resources such as operating rooms, beds, or specific types of staff can already lead to enormous efficiency gains and improved resource utilization in a healthcare system. Approaches that focus on isolated decision making in this way, however, ignore the inherent complex interactions between different resources or organizational units <cit.> and, therefore, often lead to suboptimal decisions on a system level. This is particularly apparent in hospitals, which collect large amounts of advanced technology and clinical specialization, but are usually subdivided into a variety of autonomously managed departments <cit.>. Consequently, a need for OR/MS models that focus on integrated planning of several resources has been identified <cit.>. This vertical integration (integration across different resources) is considered to show great potential, and an increase in publications presenting vertically integrated approaches has been observed <cit.>. It complements horizontal integration, which refers to integration across different hierarchical, or temporal, decision making levels, which are traditionally subdivided into strategic, tactical, and operational offline/online <cit.>. As noted before, the need for and potential of vertically integrated planning approaches is particularly apparent in hospitals. While hospitals are a key player in healthcare systems and account for almost 40 % of healthcare spending in OECD countries <cit.>, they are typically organized as clusters of autonomous departments, and planning is also often functionally dispersed <cit.>. The clinical pathways of patients, however, usually traverse multiple departments <cit.> where different resources are needed for providing effective treatment, which provides a strong motivation for integrated planning of these resources across departments. Consequently, this paper provides the first literature review that focuses specifically on the OR/MS literature related to vertically integrated planning in hospitals. 
We collect the relevant literature and analyze it with regard to different aspects such as uncertainty modeling and the use of real-life data. Several cross comparisons reveal interesting insights concerning, e.g., relations between the modeling and solution methods used and the practical implementation of the approaches developed. Moreover, we provide a high-level taxonomy for classifying different resource-focused integration approaches and point out gaps in the literature as well as promising directions for future research. The rest of this paper is organized as follows: Section <ref> describes our literature search methodology. The set of relevant papers resulting from the search is then analyzed in Section <ref> regarding the time of publication and publication outlets. Section <ref> presents our taxonomy for classifying the different vertical integration approaches used in the papers according to three levels of integration. Afterwards, Section <ref> analyzes which (combinations of) resources are most frequently planned in an integrated fashion. Section <ref> then focuses on the modeling and solution methods (including methods for uncertainty modeling) that are used for integrated planning, while Section <ref> analyzes the degree of practical implementation achieved by the developed approaches as well as the types of data that are used in the papers. Finally, Section <ref> provides an outlook on integrated planning problems that link a hospital to other hospitals and other parts of a healthcare system while Section <ref> summarizes and discusses our findings and points out research gaps and open areas. § LITERATURE SEARCH METHODOLOGY To identify relevant literature, an extensive search was performed using the data­base Web of Science (<www.webofscience.com>). In order to find papers with an OR focus, the search was performed within journals that are classified as “Operations Research & Management Science” (OR&MS) according to either their Web of Science Category or their research area (or both). Moreover, several relevant journals not classified as OR&MS (e.g., Health Systems) were identified and additionally included in the search. To find papers that deal with integrated planning in hospitals, we searched for papers published until March 2023 for which at least one term from each of the three columns of Table <ref> appears in the title, the abstract, or the author keywords. Here, the first column relates to hospital terms, the second to integration terms, and the third to planning terms. Whenever necessary, a wildcard (“$” for at most one character or “∗” for any group of characters, including no characters) has been used to represent multiple possible endings (e.g., hospital$ will find “hospital” as well “hospitals”, and integrat∗ will find “integrate”, “integrated”, “integration” etc.). The search returned a total of 1273 papers as search results, whose titles and abstracts were then examined in order to exclude papers that are irrelevant. Here, a paper was excluded if it was clear from the title and abstract that at least one of the following conditions was met: (1) The paper does not focus on hospitals, (2) no integration between multiple resources is considered, or (3) no planning or decision support using any kind of methods from Operations Research and Management Science is considered. 
Here, following the definition used in <cit.>, the term “resources” is broadly defined to comprise everything – from medical and non-medical staff to treatment rooms or patient appointments – that is required for the provision of healthcare (see Section <ref> for a classification of different resources considered in the final set of relevant papers). Papers for which it was unclear from the title and abstract whether any of the conditions (1)–(3) are met were not excluded here to ensure that no relevant papers are removed from examination at this stage. After the title and abstract screening, 318 potentially relevant papers were left. The full texts of all of these papers were then examined in detail, which resulted in an additional 135 papers that were excluded due to meeting at least one of the above conditions (1)–(3). This resulted in a final set of 183 relevant papers that were included in the review. The papers within this final set are listed in a separate bibliography titled “Search Results” at the end of the paper. § TEMPORAL DEVELOPMENT AND PUBLICATION OUTLETS Based on the final set of relevant papers identified, Figure <ref> shows the development of the yearly number of publications over time. While the first papers on OR/MS in healthcare and hospital contexts have been published in the 1950s <cit.>, the earliest papers on integrated planning in hospitals found in our search stem from the early 1990s, and the yearly numbers of publications show that integrated planning did not receive significant attention in the OR/MS literature until the late 2000s. Since then, the interest in the topic has increased continuously as shown by the 3 year moving average of the number of publications. Concerning publication outlets, most of the relevant papers (165 of 183) have been published in journals, while only a small number (18 of 183) have been published in conference proceedings. The most frequent publication outlet identified is the European Journal of Operational Research (EJOR) with a total of 31 published papers, while no other journal or conference appears more than 10 times. Interestingly, this also holds for journals with a particular focus on OR/MS in healthcare such as Health Care Management Science, Operations Research for Health Care, and Health Systems, which together only published 12 of the relevant papers. § TAXONOMY OF DIFFERENT LEVELS OF RESOURCE FOCUSED, VERTICAL INTEGRATION In this section, we present a high-level taxonomy for classifying the different resource focused, vertical integration approaches that are used in the final set of 183 relevant papers that were identified in our literature search (see Section <ref>). §.§ Definitions In order to classify the approaches for resource focused, vertical integration used in the set of relevant papers, we categorize them according to the following three levels of integration, where a higher level stands for a more closely integrated planning of several resources: Level 1 (Linkage by constraints / restrictions) Independent planning of each resource (e.g., staff) that incorporates constraints / restrictions concerning one / multiple other resources (e.g., available beds). These constraints are independent of the concrete solution of the planning problem for the other resource(s). 
Level 2 (Sequential planning) The planning problems for the different resources are solved one after the other in a predefined order (e.g., first staff, then operating room, then beds) and the results of all preceding planning problems (e.g., the staff and operating room plans) are used as input for the planning problem of each resource (e.g., for bed planning). This may or may not include the possibility to return to an earlier planning problem and change this earlier problem's solution using knowledge obtained in later problems (e.g., change the obtained operating room plan since it leads to an infeasible bed planning problem one stage later) – possibly going back and forth between the problems until the overall process converges (i.e., the solutions for all planning problems satisfy certain quality criteria). Level 3 (Completely integrated planning) All resources are planned jointly in one planning problem. Thus, decisions concerning the different resources are made simultaneously and are part of an overall solution of a single problem. Note that level 1 is conceptually different from levels 2 and 3 in that level 1 approaches do not relate the concrete solutions of the different resource planning problems to each other. By contrast, in level 2 and 3 approaches, the concrete solutions interact either since solutions of preceding planning problems are taken as input when generating the solutions to later planning problems (level 2) or since all solutions are part of an overall solution created in a single joint planning model (level 3). While this means that approaches potentially become more complex with increasing level of integration, the tighter interaction also has the potential to yield better overall solutions. When considering integrated planning of operating rooms and physicians, for instance, the level 1 approach for master surgery scheduling presented in Cappanera_2014 only takes the availability of surgeons into account via constraints, which ensure that the number of operating room time slots assigned to a given specialty in a given week does not exceed the number of slots that the specialty can cover with the available number of surgeons. Thus, the operating rooms represent the actual planned resource, while physicians (surgeons) are only incorporated via static availability constraints. By contrast, the level 2 approach in Day_2012 first assigns blocks of surgery time to surgeons in a first stage. In the second stage, the surgical cases of each surgeon are then assigned a date and a time as consistently as possible with the first stage solution. Finally, in the third stage, the surgical cases are allocated to operating rooms consistently with the solution obtained in the second stage. Thus, the solution for the block scheduling problem of surgeons in the first stage is taken as an input for the operating room planning in the later stages. Finally, the level 3 approach for operating room scheduling presented in Batun_2011 considers operating room planning and scheduling of surgeons jointly in one model that considers both operating room decisions (e.g., the number of operating rooms to open on a day and the assignment of surgeries to operating rooms) and surgeon decisions (e.g., the start time of each surgeon). Thus, decisions about both resources are taken jointly in one planning model in this case. 
Note that the distinction between the different planning levels is not completely clear in all cases and there exist papers and approaches that combine several levels – particularly when considering more than two resources. Overall, completely integrated planning (level 3) and linkage by constraints / restrictions (level 1) are most frequently applied with 107 and 76 papers, respectively, that use approaches of these kinds. Sequential planning (level 2) is far less common with only eight papers that use planning approaches classified according to this level of integration.[Note that the single numbers sum up to more than the total number of 183 relevant papers since, as mentioned, some papers present one or several planning approaches with different levels of integration.] §.§ Temporal development and relation to hierarchical decision making levels Figure <ref> shows the temporal development of the 3 year moving averages of the numbers of publications using approaches of the most frequent levels 1 and 3 (level 2 has been omitted due to its low absolute frequency). Interestingly, approaches on both levels of integration started receiving significant attention simultaneously in the late 2000s. They have first been similarly common with 18 papers using level 1 approaches and 21 papers using level 3 approaches before 2012. Among the papers published since 2012, however, only 58 use level 1 approaches, while 86 use level 3 approaches. This means that completely integrated approaches that plan several resources jointly in one model have become more popular compared to approaches that plan each resource independently while only incorporating constraints or restrictions concerning other resources. This trend towards completely integrated planning approaches could be explained by both a rising interest in deeper integration between planning problems of different resources but also by increasingly powerful computers and solvers, which make completely integrated models solvable in more reasonable times than before. Next, we investigate how the different levels of integration relate to the well-known hierarchical levels of strategic, tactical, operational offline, and operational online decision making <cit.>. Table <ref> shows the numbers of publications for each of the hierarchical levels in total as well as distinguished by level of integration.[Again note that the single numbers sum up to more than the total number of 183 relevant papers since some papers present several planning approaches targeting different hierarchical decision making levels and / or different levels of integration.] The table shows that, for both level 1 and level 3 integration, most papers target operational problems, the vast majority of which are operational offline. Tactical integrated planning problems are studied less frequently on both levels of integration, and even fewer publications consider a strategic planning horizon – particularly among those presenting level 3 integration approaches. § RESOURCES CONSIDERED IN INTEGRATED PLANNING APPROACHES Having demonstrated the growing interest in integrated planning problems over the the last two decades, we now analyze which resources have been at the center of attention. Initially, Figure <ref> shows the absolute frequencies of hospital resources / areas considered in integrated planning approaches. 
The figure shows that the vast majority of publications deal with the operating room / operating theater (OT)[We use OT as an abbreviation for consistency reasons since OR is used as an abbreviation for Operations Research.], medical staff (physicians and nurses), or beds. Other frequently considered resources include patient appointments / admissions, intensive care unit (ICU) and post-anesthesia care unit (PACU), the emergency department (ED), and inpatient wards. It is notable that, with the exception of other (non-medical) staff, resources without a direct connection to patients such as clinical and sterilization services, diagnostics (e.g., imaging), or logistics have only received limited attention. For further analysis, we aggregate some less frequent resources / areas as shown on the horizontal axis in Figure <ref>. Here, the umbrella term Diagnostics summarizes computed tomography, magnetic resonance imaging, x-ray, radiology, laboratory, medical equipment, and sterilization services; and the term Other summarizes clinical services, logistics, elevators, and physiotherapy. The analysis in Figure <ref> distinguishes between primary and non-primary resources, i.e., resources / areas that are at the center of planning and ones that are supplementary. Note that, if several resources are integrated using a level 3 integration approach (see previous section), all of these resources are usually considered as primary or leading resources, but additional non-primary resources can also be included (e.g., via constraints using a level 1 integration approach). OT, patient appointments / admissions, and ED are regularly considered as primary resources, while only a small proportion of the papers containing these resources considers them as a supplement to other resources. In contrast, physicians and bed-related resources (including wards, PACU, and ICU) are more frequently considered as supplementary than primary. With nurses, diagnostics and medical equipment, and examination / treatment rooms, the results are balanced. Interestingly, while medical staff (physicians and nurses) is more often considered as a supplementary resource, other staff (including porters and technical staff) is mostly planned as a primary resource. §.§ Considered resource combinations We now look at specific combinations of resources that are considered and further detail the analysis regarding primary and non-primary resources. We use the same categories that have previously been introduced in Figure <ref>. Figure <ref> displays the absolute frequencies of individual combinations of resources in a heat map linking primary resources in rows to combined resources (which can be either primary or non-primary) in columns. The number in each cell indicates the number of publications in which a link between the two corresponding resources is found. The background of each cell is color-coded ranging from dark green (highest absolute frequency) to red (absolute frequency zero). Combinations of a resource with itself are excluded for obvious reasons. The heat map in Figure <ref> reveals that OT (primary) & physicians (combined) and OT (primary) & beds (combined) are by far the most common combinations with absolute frequencies of 74 and 47, respectively. This is in line with our previous observation that the OT is mostly considered as a primary resource. 
Additionally, the fact that physicians and beds are so frequently combined with the primary resource OT provides a possible explanation why these two resources are not the primary resource in the majority of papers considering them. A more balanced distribution of which of the two considered resources is primary can be observed, e.g., for the combination nurses & physicians, where each of the two is considered primary in 26 papers. §.§ Resource combinations and levels of integration We now further distinguish the considered resource combinations with regard to the level of integration. Figure <ref> shows the absolute frequencies of different numbers of integrated resources differentiated by level of integration (level 2 has again been omitted due to its low absolute frequency). While two or three integrated resources are most common in both level 1 and level 3 approaches, four or more resources are also integrated in a non-negligible number of papers – especially for the more demanding level 3 integration. For the sake of a cross comparison of specific resource combinations and the considered level of integration, Figure <ref> depicts two heat maps analogous to the one from Figure <ref> – one for level 1 approaches and one for level 3 approaches. Recall that the number of publications for level 3 is larger than for level 1 (107 vs. 76 publications). Focusing on level 1 integration (Figure <ref>, left), there is less variety in considered resource combinations (indicated by many zeros) compared to level 3 integration. For level 1 integration, the OT is by far the most common primary resource, which is frequently combined with beds / ICU capacity or staff (nurses and physicians). In contrast, the right-hand heat map in Figure <ref> indicates a much larger variety of resource combinations for level 3 integration, with staff-focused papers in particular appearing more frequently. Overall, the cross comparison of specific resource combinations and the considered level of integration reveals that the previously-observed dominance of the OT as a primary resource is mainly due to it being so frequently considered as the main resource in level 1 integration approaches, which often address OT planning with constraints concerning the availability of beds / ICU capacity and / or staff. While the OT is still often considered as as primary resource in level 3 approaches, there is also a large number of level 3 approaches that consider staff – particularly physicians – as as primary resource. In a relevant number of level 3 approaches, we can even observe that physicians or nurses are considered as a primary resource even when combined with the OT. Since other (non-medical) staff, which is ignored almost completely in level 1 approaches, is also considered much more frequently in level 3 approaches, this could indicate a shift from OT-focused integrated planning in purely constraint-related integrated planning approaches to a more staff-focused planning in approaches that perform on a completely integrated planning of several resources. § MODELING AND SOLUTION METHODS When integrating healthcare planning problems, which are often already complex individually, they potentially grow in size and, thus, might become even more difficult to solve. Therefore, choosing adequate modeling and solution methods is a highly relevant aspect. 
Figure <ref> highlights the various approaches applied in the publications, aggregated into (a) optimization (132), (b) simulation (65), and (c) other methods, e.g., queuing theory or machine learning (52). Among optimization-focused publications, 114 use mixed-integer linear programming (MILP), while linear programming (LP) or other mathematical programming techniques (e.g., quadratic programming) are rarely used (22 in total). Among the simulation paradigms, discrete event simulation (DES) is most popular with 57 publications, while the other paradigms such as agent-based simulation (ABS), system dynamics (SD), or Monte Carlo simulation (MC) play only a minor role (11 papers in total). Lastly, the other methods summarized in (c) predominantly focus on queuing and Markov models, while concepts such as fuzzy sets or methods such as machine learning occur only sporadically. Here, 18 papers use hybrid approaches, i.e., tailored modeling / solution concepts such as simulation-optimization or combinations of queuing or Markov models with simulation. In Table <ref>, we visualize whether and to what extent two or more methods have been applied for solving an integrated planning problem. For optimization problems, it is common to apply only a single method when solving a problem. For simulation, however, it is more common to combine simulation (approx. 68% of the cases) with either optimization (21), other methods (14), or both (9) than to use a simulation-only approach (21). However, the use of multiple methods is typically sequential. Figure <ref> visualizes the evolution of the average number of publications by method category. It highlights a steady increase of optimization-based studies since approximately 2010. Only twice do we identify a decline, namely in 2013 and between 2020 and 2022, and only by a small margin. Since 2022, an increase in the number of publications can be observed again. Up to 2014, we observe a similar trend for simulation-focused publications; afterwards, their number did not increase (and even slightly decreased) towards 2021. Since 2021, however, we identify a strong increase in the volume of publications. Both increases in the number of publications (for optimization and simulation) are presumably COVID-related effects (i.e., publication backlog or specific pandemic-focused publications). Lastly, other approaches start to appear from the late 2000s, with an increase to approximately 4 papers on average in 2018. This level has been maintained to the present day. §.§ Cross comparisons with regard to modeling and solution methods In the following, we analyze the applied modeling and solution methods depending on (1) planned resources, (2) level of integration, and (3) hierarchical decision making level. Similar to Table <ref>, we consider the aggregated method categories optimization, simulation, and other methods. Figure <ref> shows absolute frequencies of planned resources distinguished by the three categories of methods. Usually, optimization is preferred over simulation or other approaches, which is in line with the larger share of optimization studies (see also Figure <ref>). For the OT and physicians, the share of optimization-focused papers is disproportionately larger compared to other resources. Especially for the OT, the relative frequency of approaches using simulation or other methods is relatively small. While there are mostly fewer simulation papers, the majority of ED-related papers do use simulation.
These findings are also apparent when comparing the most common combinations of two resources (see Figure <ref>). Together with the previous results, we can conclude that OT-related publications predominantly use optimization, even when staff-related resources are considered as well, e.g., OT and physicians. It is interesting to note that, for purely staff-related combinations such as nurses and physicians, it is much more common to use simulation and also other approaches. In general, when no staff is involved, optimization is chosen more frequently. For the combination of ED and physicians, the share of optimization papers is the smallest among the three method categories. It is also worth noting that a combination of OT and ICU leads to the smallest share of simulation studies. While there has been an almost steady increase in optimization papers over the last two decades (see Figure <ref>), simulation and other approaches do not exceed a virtual threshold of approximately 5 papers in the 3-year average. Figure <ref> now investigates this further and depicts the development of method categories over time distinguished by the level of integration that the publications consider. What is interesting in this case is that, in recent years, optimization-based level 3 studies (blue line, dashed) have increased while level 1 studies appear less frequently. For the two other method categories (orange and gray lines), however, we see a more similar development when comparing level 1 and level 3 approaches. For optimization, there has been a clear peak in the number of level 1 approaches between 2019 and 2022 followed by a strong decrease. Level 3 optimization approaches, however, have seen an almost steady increase, and a particularly strong increase since 2021. This could potentially be explained by computational advancements that now make it possible to solve the typically more complex level 3 optimization models in more reasonable time than before. Table <ref> shows the numbers of publications using each method category distinguished by hierarchical decision making level. Note that papers using several methods from different categories are counted multiple times. Concerning hierarchical decision making levels, optimization approaches dominate on the tactical and operational level, while nearly equal numbers of publications use optimization and simulation approaches on the strategic level. Among the papers that use optimization approaches, 115 focus on a single-objective approach while only 17 consider a dedicated multi-objective approach. For level 3 problems, multi-objective approaches are slightly more common (11 of 78 at this level) compared to level 1 problems (5 of 54 publications). We did not find a clear distinction with respect to the planning horizons – across strategic, tactical, and operational, the share of single- versus multi-objective approaches is to some extent balanced (strategic: 13 versus 2, tactical: 36 versus 4, and operational: 82 versus 12). What is potentially most interesting is the fact that none of the papers that use multi-objective approaches has led to practical applications of the developed results or methods (see Section <ref> for further analyses concerning practical implementation).
Therefore, dealing with uncertainty is an important aspect of these problems. We now analyze the identified literature concerning the approaches used for dealing with uncertainty. Common ways to model uncertainty are stochastic models, robust models, and online models <cit.>. While the classifications robust and online apply only to optimization approaches, the classification stochastic can be considered for optimization, simulation, or other approaches, e.g. Markov or queuing models. In contrast to all three of these classifications, deterministic models assume all problem parameters and input data to be completely known without any uncertainty at the time the problem is solved. Table <ref> shows the overall distribution of publications differentiated by uncertainty modeling approach and level of integration. Note that a paper is counted twice if, e.g., both a robust and a stochastic model are presented in the corresponding publication. Values in bold font indicate the total numbers of papers (e.g., there are 56 publications in total that use deterministic planning approaches for level 3 integration) and values in parentheses indicate the numbers of papers with/without an optimization model (e.g., of the previously mentioned 56 papers that use deterministic level 3 planning approaches, 52 use an optimization model, while 4 do not). Overall, the table shows that a slight majority of papers considers uncertainty in some of the input data (104 of 183). Interestingly, the share of papers considering uncertainty is larger among papers presenting level 1 integration approaches (47 of 76) than among papers presenting level 3 approaches (58 of 107). In other words, papers presenting a completely integrated planning approach for several resources are less likely to consider uncertainty even though these approaches are more recent on average (see Section <ref>). A possible reason for this could be that completely integrated models are potentially harder to solve than models that only incorporate further resources using constraints, which could mean that considering uncertainty as well might not always be tractable in completely integrated models. Concerning the frequencies of different uncertainty modeling approaches, it turns out that, for both level 1 and level 3 integration, the vast majority of papers that consider uncertainty use stochastic approaches (96 of 98), only a few use robust approaches (10 of 98), while online approaches are not used at all. The absence of online approaches can potentially be explained by the fact that stochastic and robust modeling approaches stem from the field of OR/MS considered here, while online optimization has its origins in computer science <cit.>. Another interesting observation is that almost all of the presented deterministic approaches are optimization models (88 of 93), while simulation approaches and other approaches almost always consider uncertainty in at least some of their input data. One reason for this could be that considering uncertain parameters is generally easier in simulation models than in optimization models. While it is very common for simulation models to consider a large number of uncertain parameters simultaneously, a choice must often be made in optimization models to consider only a limited number of uncertain parameters. Therefore, we now analyze which parameters are most frequently considered as uncertain in optimization models. 
Since the considered (uncertain) problem parameters naturally depend on the planned (primary) resources, however, we first analyze the resources that are planned within optimization models considering uncertainty. Here, we observe that most of the papers that consider uncertainty in optimization approaches focus on the OT (41 papers), followed by physicians (34), beds (25), nurses (21), and ICU (11). Only a few papers focus on the remaining resources such as other types of staff (8), inpatient wards (7), or the ED (6). Interestingly, only very few of these papers consider only two resources – the vast majority considers between three and five resources. Turning to the analysis of which parameters are considered as uncertain, we observe that, when the OT is considered as a resource, the surgery duration is by far the most frequently used uncertain parameter in optimization models. Despite OT planning being a well-studied field of research, we identified only a limited number of papers that model factors other than the surgery duration as uncertain. Among these uncertain parameters are arrivals, i.e., authors consider the number of arriving patients as uncertain Astaraky_2015,Range_2019,Zhu_2022,Bansal_2021,Breuer_2020. This may in some cases include the possibility of emergency arrivals as a separate source Breuer_2020,Rachuba_2017,Zhu_2022,Jittamai_2011, which are otherwise only considered implicitly, e.g., as a proportion of the non-emergency arrivals or via OT time reservation Molina-Pariente_2018. Other uncertainty aspects such as no-shows Jittamai_2011, patients reneging from the waiting list Astaraky_2015, or cancellations of surgeries Astaraky_2015 are only considered a few times. When the OT is linked with up-/downstream resources, uncertainty in the length of stay is frequently considered. Unusual (i.e., not frequently modeled) uncertain parameters are the discharge rate of patients (from a hospital unit) Zhu_2022, the demand for beds (similar to uncertain arrivals) Belien_2007,Kheiri_2021,Ma_2013, or nurse or surgeon availability Nasiri_2019,Breuer_2020. The paper by Hulshof et al. Hulshof_2016 is the only one to consider the care pathway of patients to be uncertain. Lastly, it is interesting to note that papers using a robust approach to solve an OT-related problem unanimously focus on the surgery duration as the uncertain parameter Bansal_2021,Breuer_2020,Neyshabouri_2017,Rachuba_2017,Rath_2017,Shehadeh_2021,Wang_2021,Keyvanshokooh_2022,Davarian_2022. Very few of these papers consider other parameters to be uncertain, e.g., the need for surgery Bansal_2021, surgeon availability Breuer_2020, emergency arrivals Breuer_2020,Rachuba_2017, or length of stay in a downstream unit Neyshabouri_2017,Shehadeh_2021. Within the 14 papers in which the OT is not considered as a resource, we found that the arrival rate of patients (or the number of patients, including one paper modeling no-shows) Augusto_2009,He_2019,Izady_2021,Kortbeek_2017,Leeftink_2019,Agrawal_2023,Gong_2022,Chan_2022, treatment durations He_2019,Gong_2022,Haghi_2022, and bed demand Ordu_2021 are commonly-used uncertain parameters. Again, in almost all cases, the uncertain parameters are patient- or demand-related, while one of these 14 publications investigates pharmacies inside hospitals and considers the delivery of medicines as uncertain Augusto_2009. § PRACTICAL IMPLEMENTATION AND DATA We also scanned our search results for information on practical implementation and the use of real data.
Here, we observe two principal ways in which methods or results are used in practice. The first way is that the corresponding paper makes a suggestion for a one-time change in practice that is subsequently implemented at one or several partner hospitals. For example, Toronto’s Mount Sinai Hospital eliminated its Thoracic Surgery service during budget negotiations based on advice from Blake and Carter Blake_2002, and recommendations made by Kortbeek et al. Kortbeek_2017 based on their research on the trade-offs between appointment scheduling constraints and access times were implemented in the Academic Medical Center in Amsterdam. The second way in which methods and results are used in practice is that the corresponding paper presents a decision support tool that is then used on a regular basis to solve recurring integrated planning problems in hospitals. For example, the integrated surgery scheduling approach developed by Ozen et al. Ozen_2016 was implemented as a web-based application and integrated into the existing surgical planning systems at Mayo Clinic. Overall, only 20 of the 183 relevant papers that resulted from our search report an actual application of their work in practice in one of the above-mentioned ways. In contrast, 110 papers present a case study without mentioning a practical application, and 53 papers have a primarily methodological focus, i.e., they describe a new model or solution approach that is not directly connected to a case study or a practical application. Among the papers that do mention a practical application of their results, the earliest one was published already in 2002 on strategic resource allocation in acute care hospitals Blake_2002. Analyzing the degree of practical application depending on the utilized modeling and solution methods as shown in Figure <ref>, we observe that the highest share of publications whose results are used in practice is found among those using simulation methods, while the share of papers that are mainly methodologically focused is by far the lowest among papers using simulations. Among both the papers using optimization methods and those using other methods, the share used in practice is much lower, while papers with a methodological focus have a much higher relative frequency. Throughout all three categories of methods, however, the vast majority of papers present case studies that have not led to practical applications of the developed methods and results afterwards. Concerning the use of real-world data, it is not surprising that 19 of the 20 publications reporting practical implementations use real data, while the remaining one at least uses realistic data, i.e., artificial data that has been validated as realistic through discussions with practitioners and / or literature research. Given the low overall number of 20 papers in which the authors report practical applications of their work, it is more surprising that the vast majority of the papers in our search results (121 of 183) still report on the use of real data. Together with the above observation that the number of case studies vastly exceeds the number of papers that have led to practical applications, this suggests that obtaining real data on integrated planning problems in hospitals to use in a case study is significantly easier than actually bringing the results of a research project from this area into practice.
Moreover, even the 20 papers that report practical applications of their methods or results mostly provide only brief descriptions of practical impact and / or implementation in the publications themselves in one or two paragraphs at the end of the paper. This could indicate that scientific journals and conferences that publish work on integrated planning in hospitals do not put much emphasis on the description of practical applications of the obtained results so far. While cross comparisons of the obtained degree of practical application and the level of integration or the considered primary resources do not yield any significant insights, another noteworthy observation is that all of the publications whose results on integrated planning are used in practice report on practical applications in Europe or the Americas, while no transfers into practice are reported in the rest of the world. § INTEGRATION WITH OTHER PARTS OF A HEALTHCARE SYSTEM Since hospitals usually have dependencies with many other care providers and healthcare services, this section provides an outlook on existing planning approaches that link a hospital to other hospitals and other parts of a healthcare system. From a patient's point of view, hospitals are one part of their overall care pathway. For example, emergency patients might have been taken to the emergency department of a hospital by an ambulance. Elective patients have been diagnosed and potentially treated previously by general practitioners or specialists and have then been transferred to a hospital for further treatment. During their hospital stay, they might need medication or blood bags that must be ordered and delivered, or they might need to be transferred from one hospital to another. Afterwards, patients may receive follow-up care at home or be transferred to a rehabilitation facility. In the following, we distinguish three kinds of dependencies of hospitals with other entities in a healthcare system based on the position of these entities on a patient's care pathway: * Pre-hospital dependencies: Dependencies with care providers or healthcare services that are positioned before a hospital stay on a patient's care pathway, * During-hospital dependencies: Dependencies with other hospitals, care providers, or healthcare services (e.g., blood banks) that are relevant during a patient's hospital stay, * Post-hospital dependencies: Dependencies with care providers or healthcare services that are positioned after a hospital in a patient's care pathway. An overview of dependencies of a hospital within a healthcare system that distinguishes between pre-, during-, and post-hospital dependencies is shown in Figure <ref>. For many of these dependencies, a simultaneous consideration and an integrated planning of the involved resources can be beneficial from both a patient and a system perspective and can even improve individual objectives for the involved care providers. In the following, we look more closely at three examples, one for each case, that have already been addressed in the OR/MS literature. §.§ Pre-hospital dependencies: Ambulance diversion and offload delay While hospitals naturally interact at least indirectly with most entities that are usually positioned before a hospital stay on a patient's care pathway (e.g., general practitioners), one of the most-studied and most direct interdependencies is between emergency departments (EDs) of hospitals and emergency medical services (EMS).
When an ED is crowded, ambulances might need to be diverted to other hospitals, which is referred to as ambulance diversion. Alternatively, they might wait in front of the hospital until patients can be admitted to the ED, which leads to a so-called (ambulance) offload delay. As defined in Li et al.'s literature review on offload delay <cit.>, “ambulance offload delay (AOD) occurs when care of incoming ambulance patients cannot be transferred immediately from paramedics to staff in a hospital emergency department.” Optimal control policies for ambulance diversion as a countermeasure to avoid offload delay are, for example, proposed by Ramirez-Nafarrate et al. <cit.>. Among others, Allon et al. <cit.> study the impact of hospital size and occupancy on ambulance diversion in the US, and the effects of ambulance diversion are reviewed by Pham et al. <cit.>. So far, the OR/MS literature mainly addresses the ambulance diversion and offload delay problems from only one side, either focusing on the EMS or the ED, even though an exchange of information between EMS and ED together with a system-wide perspective is crucial in order to enable integrated planning <cit.>. §.§ During-hospital dependencies: Inter-hospital collaboration Many different kinds of dependencies of hospitals with other kinds of care providers and healthcare services are relevant during a patient's hospital stay and are therefore investigated in the OR/MS literature. Important examples include interactions with other entities in specific parts of a hospital's supply chain such as the blood supply chain <cit.> or the pharmaceutical supply chain <cit.>. Another important aspect that we focus on here and that relates directly to hospital resources is inter-hospital collaboration in the form of resource sharing. This means that different hospitals collaborate by sharing expensive resources such as imaging devices in order to gain efficiency. One main advantage of this type of inter-hospital collaboration is that “hospitals can avoid the purchase of expensive medical resources and patients can be treated in a timely manner in any available hospital, which will improve their quality of care.” <cit.>. Ideally, this leads to an integrated planning of the shared resources. Since resources such as imaging devices are usually not portable, one of the most crucial aspects of this is to plan how patients should be referred between the collaborating hospitals when the shared resources are required for their treatment. This question has, for example, been studied in <cit.>. Chen and Juan <cit.> consider the problem of daily patient referrals for CT scans between three hospitals, while Chen et al. <cit.> and Chen and Lin <cit.> investigate referring patients between two or multiple cooperating hospitals, respectively, that share imaging services. A related kind of hospital collaboration that also leads to patient referrals is utilization leveling among multiple hospitals with the goal of reducing disparities between the involved hospitals' utilization rates. For example, Li et al. <cit.> study when patients should be referred from a high-utilization hospital to a low-utilization hospital and Li et al. <cit.> extend this investigation to a network of one high-utilization hospital and three low-utilization hospitals. Nezamoddini and Khasawneh <cit.> also integrate capacity allocation decisions by determining the optimal resource levels of EDs in several hospitals while considering patient transfers between them.
Overall, while the question of patient referrals between resource-sharing hospitals or hospitals with different utilization rates has already been investigated as shown above, there does not seem to be much existing work that considers other aspects of integration that could potentially be relevant in settings in which equipment such as imaging devices is shared between hospitals. §.§ Post-hospital dependencies: Bed blocking Hospitals also interact in various ways with care providers that treat patients after their hospital stay. Perhaps the most-studied aspect of these interactions is bed blocking in hospitals, which occurs when patients in a hospital are ready to be discharged but have to remain in the hospital until a bed in a follow-up care facility (e.g., in a rehabilitation center or nursing home) becomes available <cit.>. Bed blocking is not only harmful for patients due to the delay in advancing to the next step of their care pathway, but also often costly since a hospital bed is more expensive to operate than, e.g., a geriatric bed <cit.>. The problem of bed blocking has been studied intensively in the literature (see, e.g., <cit.>) and it has been recognized that better integration and cooperation between hospitals and follow-up care facilities is necessary in order to prevent it. For instance, Mur-Veeman and Govers <cit.> state in their work on buffer management in Dutch hospitals to solve bed blocking that “although stakeholders recognize that cooperation is imperative, they often fail to take the actions necessary to realize cooperation.” Motivated by the improved integration that is needed in order to solve the bed blocking problem, Chemweno et al. <cit.> model the complete care pathway for stroke patients to analyze the effects of different intervention strategies aimed at minimizing patient waiting time delays for available bed resources. Using simulation, they show that maximizing the bed resource utility leads to a decrease in patient waiting times. Rashwan et al. <cit.> use system dynamics to obtain a holistic and strategic national-level capacity-planning model to address the problem of acute bed blocking in the Irish healthcare system, while Wood and Murch <cit.> address the general problem of blocking after any service along the patient pathway using a continuous-time Markov chain. In summary, bed blocking has already received considerable attention in the literature. In addition, the necessity for integrated models that address the problem by looking at all involved entities along the relevant care pathways has already been recognized. Nevertheless, such integrated models still seem to be scarce. § SUMMARY, DISCUSSION, AND CONCLUSION This section concludes our analysis and identifies overarching trends and open research areas. In line with operating theaters being the most-studied area of the hospital in the OR/MS literature in general <cit.>, our findings show that the OT is the resource that is most frequently combined with other resources in integrated planning approaches overall. Moreover, the OT is also the resource that is most frequently considered as a primary resource within integrated planning approaches. The second most common resource – both overall and as a primary resource – is physicians. When physicians and nurses are combined under the umbrella term medical staff, however, the temporal development illustrated in Figure <ref> shows an interesting trend.
The different lines visualize the 3-year moving averages of the yearly numbers of publications that do / do not consider the OT or medical staff as one of the integrated resources. Despite the large number of OT-focused publications in the OR/MS domain overall, Figure <ref> suggests that, since about 2010, the number of publications considering medical staff within integrated planning approaches consistently exceeds the number of publications considering the OT. The OT, however, is still the most frequently appearing primary resource even when counting physicians and nurses jointly. This suggests that the OT is still the most common center of attention also in integrated planning approaches, but medical staff has been considered more frequently as part of integrated planning approaches overall for more than a decade. This is also supported by the observation that the number of papers that do not include medical staff is lower compared to the number of papers that do not include the OT – both in total and relative to the number of papers that do include the respective resource. A possible explanation could stem from increasing shortages of medical staff, which might motivate researchers to at least include staff as a supplementary resource in integrated planning problems. When considering the number of publications that include neither the OT nor medical staff, it can be observed that the average number of such publications has stayed extremely low overall, even though the average number of publications per year has increased tremendously within the last 15 years (see Figure <ref> in Section <ref>). From this we conclude that the increasing interest in integrated planning problems in hospitals observed in the OR/MS literature is so far mainly focused on planning problems linked to resources that are necessary to perform patient-related tasks (surgery, care on a ward, etc.). In contrast, planning problems that are (further) away from the patient are still studied much less. With regard to uncertainty modeling, we found that approximately half of the publications on integrated planning we identified consider uncertainty in one or several input parameters. However, the consideration of uncertainty seems to be more common in approaches that only integrate several resources via constraints compared to completely integrated planning approaches – even though the latter are more recent on average. This may change in the future, though, when the joint consideration of integrated planning and uncertainty becomes more and more tractable due to advances in solution methods and computing power. Regarding specific uncertain parameters, durations of activities are most frequently considered to be uncertain (e.g., surgery durations or the length of stay on a ward). Even among the relatively large number of papers considering integrated planning problems that combine the OT with either medical staff or beds, only very few papers consider parameters other than durations to be uncertain (e.g., no-show rates, the availability of staff, or the availability of beds) – even though unavailability of (medical) staff, for example, has already been identified as a highly relevant aspect within the (healthcare) personnel scheduling literature <cit.>.
Since medical staff is usually required for the vast majority of activities along a patient's pathway within a hospital and is often hard to replace both in the short and long term, uncertain staff availability seems to be an especially relevant aspect that should be considered more in future research. Regarding the practical implementation of the research work on integrated planning problems in hospitals, we summarize that there is little evidence that case studies conducted as part of such work lead to implementation of the obtained findings in daily practice. This is in line with the results of previous literature reviews that – while not focusing specifically on integrated planning – also found little evidence for successful implementation of research output <cit.>. If research on integrated planning does influence decision making within a hospital, we found that simulation studies are more likely to initiate such changes. A promising avenue towards implementation of findings and initiation of changes requires OR/MS to develop strong links not only to all involved personnel and hospital decision makers, but also to informatics <cit.>. The latter is required to embed the developed approaches into hospital information systems in order to make them accessible for planners and decision makers as part of their daily practice. §.§ Research gaps and open areas Despite the large number of publications that meet our inclusion criteria, the focus in terms of hospital areas and resources is still rather narrow. It becomes clear that there is a shortage of non-OT-focused publications, which might be due to the popularity (or importance) of the OT itself or a lack of effort to explore other areas. A stronger focus on staff could be particularly promising here since staff shortages in hospitals are already visible and are expected to intensify in the future as population ageing increases demand for care relative to the size of the healthcare workforce. While staff is already the most considered supplementary resource overall in integrated planning problems, we therefore expect the center of attention to also shift from the still mostly OT-focused planning observed so far to workforce-focused planning as a driver of future research. Moreover, supply services such as sterilization or pharmacy have not yet been included in integrated planning approaches. We therefore believe that increasingly considering activities that do not immediately include patients (such as hospital pharmacies, sterilization services, or inventory of medical and non-medical supplies) may represent a promising avenue for future research. With regard to uncertainty modeling, the planning parameters that are considered as uncertain are so far mainly limited to durations. Notably, other very important issues such as no-show rates or availability of resources (staff, beds, etc.) are only rarely considered despite the substantial knock-on effects they can lead to in an integrated planning setting. No-shows of patients or other unexpected changes in patient demand might be particularly relevant here since the patient is usually the linking element between various steps along the care pathway <cit.>, so uncertain patient availability and demand will likely affect several parts of an integrated planning problem simultaneously. Staff unavailability, on the other hand, could become more and more important in the future due to the above-mentioned changes resulting from the ageing population.
Overall, we expect the uncertainty of parameters such as the number of available nurses, qualification levels of staff, the care chain (e.g., which resources are needed to treat a patient and whether they are actually available), and, finally, the flow of patients itself to be studied more in future research on integrated planning. Doing so could be especially valuable since the effects of these and other uncertain parameters as well as knock-on effects caused by patient or resource unavailability can be much better understood with an integrated perspective (e.g., uncertain staff availability in the PACU can influence the number and types of surgeries that can be performed on a day and, therefore, also the number of beds required on the ICU or regular wards in the following days). This seems particularly true for staff-related effects since staff typically works in various places of a hospital (e.g., in the OT, the ICU, on wards, or in offices) and is sometimes required to switch roles / positions during the day or week, which makes integrated planning seem indispensable in order to grasp the full consequences of uncertain staff availability and, as a result, ensure consistent availability of staff with the right expertise at the right place. We conclude this discussion and our analyses of publications on integrated planning problems in hospitals with the following take-home messages: * The further planning problems move away from patients, the fewer integrated studies exist. While patients are the main connecting element between resources and areas of a hospital to be integrated (and their pathways are often uncertain, too), there is still a lack of integrated studies that consider resources and activities that do not immediately include patients (e.g., sterilization, medical and non-medical supplies). * Medical staff usually work in different places, which makes it even more important to consider integrated planning approaches. Instead of following the patient through the hospital, movements of and requests for staff will be an interesting topic to follow. * Knock-on effects (e.g., impacts of OT utilization on ward utilization) can only be fully understood if the system of interest is modeled in an integrated way. This, in turn, suggests that simulation studies (either stand-alone Dosi_2021,Dwyer-Matzky_2021 or in connection with other planning approaches Rachuba_2022,Oliveira_2022) might receive even more interest in the future. * Successful implementation of integrated planning approaches requires the involvement of all relevant stakeholders. Despite the substantial share of papers that test their approaches in a case study, evidence of practical impact or successful implementation is still limited. This could be at least partly due to the increased amount of stakeholder involvement that might be required to implement an integrated approach in practice. While links to the involved personnel and decision makers are important for the successful implementation of any planning approach in a hospital, they seem particularly important for implementing integrated planning approaches that often involve multiple departments or decision making units of a hospital. According to our analysis of integrated planning approaches, simulation models seem to receive more buy-in from stakeholders so far compared to other approaches such as optimization models. § ACKNOWLEDGEMENTS This research was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Project number 443158418.
http://arxiv.org/abs/2307.04056v2
20230708231953
Manifold Filter-Combine Networks
[ "Joyce Chew", "Edward De Brouwer", "Smita Krishnaswamy", "Deanna Needell", "Michael Perlmutter" ]
stat.ML
[ "stat.ML", "cs.LG", "cs.NA", "eess.SP", "math.NA" ]
Manifold Filter-Combine Networks ==================================================================== We introduce a class of manifold neural networks (MNNs) that we call Manifold Filter-Combine Networks (MFCNs), which aims to further our understanding of MNNs, analogous to how the aggregate-combine framework helps with the understanding of graph neural networks (GNNs). This class includes a wide variety of subclasses that can be thought of as the manifold analog of various popular GNNs. We then consider a method, based on building a data-driven graph, for implementing such networks when one does not have global knowledge of the manifold, but merely has access to finitely many sample points. We provide sufficient conditions for the network to provably converge to its continuum limit as the number of sample points tends to infinity. Unlike previous work (which focused on specific graph constructions), our rate of convergence does not directly depend on the number of filters used. Moreover, it exhibits linear dependence on the depth of the network rather than the exponential dependence obtained previously. Additionally, we provide several examples of interesting subclasses of MFCNs and of the rates of convergence that are obtained under specific graph constructions. § INTRODUCTION Geometric deep learning <cit.> is an emerging field that aims to extend the success of deep learning from data such as images, with a regular grid-like structure, to more irregular domains such as graphs and manifolds. As part of the rise of geometric deep learning, graph neural networks (GNNs) have rapidly emerged as an extremely active area of research in data science <cit.> and are also used in industrial applications such as Google Maps <cit.> and Amazon's product recommender system <cit.>. However, there has been much less work on the development of Manifold Neural Networks (MNNs), and much of the existing literature focuses on two-dimensional surfaces embedded in three-dimensional space <cit.>. In this paper, we consider the more general setting of a compact, connected, d-dimensional Riemannian manifold ℳ embedded in D-dimensional space. One of the principal challenges in extending deep learning to graphs and manifolds is developing a proper notion of convolution, which is non-trivial because there is no natural notion of translation. In the graph setting, a popular family of solutions, known as spectral methods, defines convolution via the eigendecomposition of the graph Laplacian (or another suitable matrix). A limitation of this method is that explicitly computing eigendecompositions is expensive for large graphs. To overcome this obstacle, spectral graph neural networks such as ChebNet <cit.> and CayleyNet <cit.> define convolution in terms of polynomials of the graph Laplacian 𝐋=𝐃-𝐀. This leads to filters of the form h(𝐋)𝐱, where h is a polynomial and 𝐱 is a signal defined on the vertices of the graph. With this notion of convolution, one may consider networks with layerwise update rules of the form: 𝐱^(ℓ+1)=σ(h^(ℓ)(𝐋)𝐱^(ℓ)), where σ is a pointwise, nonlinear activation function. If one is given multiple initial graph signals 𝐱_1,…, 𝐱_C organized into a data matrix 𝐗=(𝐱_1,…,𝐱_C) and uses multiple filters in each layer, then the layerwise update rule can be extended to 𝐱^(ℓ+1)_k=σ(∑_j=1^C h^(ℓ)_j,k(𝐋)𝐱^(ℓ)_j).
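For concreteness, the following short NumPy sketch shows how a polynomial spectral filter h(𝐋)𝐱 can be applied to a graph signal. It is an illustration only: a practical ChebNet-style implementation would use the Chebyshev recurrence rather than explicit matrix powers, and the function and variable names below are ours, not part of any cited implementation.

import numpy as np

def polynomial_filter(L, x, coeffs):
    # Apply h(L) x for h(t) = coeffs[0] + coeffs[1] t + ... + coeffs[K] t^K.
    # L is an (n, n) graph Laplacian (dense here, for simplicity) and x an (n,) signal.
    out = coeffs[0] * x
    Lx = x
    for c in coeffs[1:]:
        Lx = L @ Lx          # builds up L^k x iteratively
        out = out + c * Lx
    return out

# Toy example: path graph on five vertices.
A = np.diag(np.ones(4), 1)
A = A + A.T
L = np.diag(A.sum(axis=1)) - A
x = np.random.randn(5)
y = polynomial_filter(L, x, coeffs=[1.0, -0.5, 0.25])   # y = h(L) x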
If one assumes that each filter h^(ℓ)_j,k belongs to a parameterized family of functions such as Chebyshev polynomials, one could then attempt to learn the optimal parameters from training data. Inspired by this approach, Wang, Ruiz, and Ribeiro <cit.> have introduced manifold neural networks with layerwise update rules similar to (<ref>). In particular, they assume that they are given C functions f_1,…,f_C:ℳ→ℝ and utilize a layerwise update rule of the form f^(ℓ+1)_k=σ(∑_j=1^C h^(ℓ)_j,k(ℒ)f^(ℓ)_j), where ℒ=-div∘∇ is the Laplace-Beltrami operator, the natural analog of the graph Laplacian in the manifold setting. They then provide an analysis of the stability of such networks to absolute and relative perturbations of the Laplace-Beltrami operator. However, many popular graph neural networks take an approach different from (<ref>). Rather than using multiple learnable filters for each input channel and then summing across channels, they instead filter each graph signal with a pre-designed operator (or operators) and then learn relationships between the filtered input signals. For example, the Graph Convolutional Network (GCN)[Here, we use the term GCN to refer to the specific network introduced in <cit.>. We will use the term GNN to refer to a general graph neural network.] <cit.> performs a predesigned aggregation 𝐗→𝐀̂𝐗, where 𝐀̂=(𝐃+𝐈)^-1/2(𝐀+𝐈)(𝐃+𝐈)^-1/2, and utilizes a right-multiplication by a trainable weight matrix Θ to learn relationships between the channels. This leads to the layerwise update rule 𝐗^(ℓ+1)=σ(𝐀̂𝐗^(ℓ)Θ^(ℓ)), where σ is as in (<ref>).[The matrix 𝐀̂ can be obtained by applying the polynomial h(λ)=1-λ/2 to a normalized version of the graph Laplacian, followed by some adjustments which help with the training of the network. Therefore, we can essentially think of the operation 𝐱→𝐀̂𝐱 as a spectral convolution.] This raises an intriguing question: How should manifold neural networks be designed? Should they follow the lead of (<ref>) and (<ref>) and utilize multiple learnable filters for each input channel with a predesigned summation over channels, or should they utilize predesigned filtering operations and incorporate learning via cross-feature operations analogous to (<ref>)? It is likely that the answer to this question will vary depending on the dataset and the task of interest. Networks with multiple learnable filters for each channel are more general and will have greater expressive power. On the other hand, networks that, for example, use a common (either learnable or designed) filterbank shared across all channels are a more constrained family of networks. This constraint imposes a certain structure on the network and reduces the number of trainable parameters, which may provide a useful inductive bias in certain settings and may be particularly useful in low-data environments. Another critical challenge in the development of manifold neural networks is that in many applications one does not have global knowledge of the manifold. Instead, one is given a collection of points {x_j}_j=1^n in some high-dimensional Euclidean space ℝ^D and makes the modeling assumption that the points x_j lie on some d-dimensional manifold for d≪ D. This assumption, known as the manifold hypothesis, is frequently used in the analysis of biomedical data arising from, e.g., single-cell imaging <cit.>. This leads us to the following question: How can one implement a manifold neural network when one does not have global knowledge of the manifold but only has access to finitely many sample points?
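Before turning to that question, the structural difference between the two design philosophies discussed above can be made concrete with a minimal sketch. The code below assumes that the per-channel filters h_j,k have already been materialized as n×n matrices (e.g., as polynomials of 𝐋), takes σ to be the ReLU, and is purely illustrative rather than a faithful reproduction of any particular architecture.

import numpy as np

def relu(t):
    return np.maximum(t, 0.0)

def layer_per_channel_filters(H, X):
    # Style of (2)/(3): a separate filter for every pair of input channel j and
    # output channel k, followed by a sum over the input channels.
    # H[j][k] is an (n, n) matrix representing h_{j,k}(L); X is (n, C_in).
    n, C_in = X.shape
    C_out = len(H[0])
    out = np.zeros((n, C_out))
    for j in range(C_in):
        for k in range(C_out):
            out[:, k] += H[j][k] @ X[:, j]
    return relu(out)

def layer_shared_aggregation(A_hat, X, Theta):
    # GCN-style rule (4): one shared, predesigned aggregation A_hat applied to every
    # channel, with learning confined to the channel-mixing matrix Theta (C_in x C_out).
    return relu(A_hat @ X @ Theta)

The first rule learns a filter for each of the C_in·C_out channel pairs, whereas the second learns only the entries of Θ, reflecting the trade-off between expressive power and inductive bias discussed above.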
In order to help answer this question, several works such as <cit.> have used an approach based on Laplacian eigenmaps <cit.> (see also <cit.>) where one builds a data-driven graph 𝐆_n such that the eigenvectors and eigenvalues of the graph Laplacian approximate the eigenfunctions and eigenvalues of the Laplace-Beltrami Operator. They show that if the graph is constructed properly, then a graph neural network of the form (<ref>) will converge to a continuum limit of the form (<ref>) as the number of sample points, n, tends to infinity. However, these results are limited in the sense that (i) they assume specific graph constructions and (ii) their rates of convergence depend exponentially on the depth of the network. In this work, we introduce a new framework for understanding MNNs that we call Manifold Filter-Combine Networks. The manifold filter-combine paradigm is meant to parallel the aggregate-combine framework commonly considered in the GNN literature (see, e.g., <cit.>) and naturally leads one to consider many interesting classes of MNNs which may be thought of as the manifold counterparts of various popular GNNs. We then provide sufficient conditions for such networks to converge to a continuum limit as the number of sample points, n, tends to infinity. More specifically, the contributions of this work are: * We introduce Manifold Filter-Combine Networks as a novel framework for understanding MNNs. This framework readily leads one to many interesting classes of MNNs such as the manifold equivalent of Kipf and Welling's GCN <cit.>, learnable variations of the manifold scattering transform <cit.>, and many others. * In Theorem <ref>, we provide sufficient conditions for the individual filters used in an MNN to provably converge to a continuum limit as n→∞ if the filtering is done via a spectral approach. Here the rate of convergence depends on the rates at which the eigenvectors/eigenvalues of the graph Laplacian approximate the eigenfunctions/eigenvalues of the Laplace-Beltrami operator as well as the rate at which discrete inner products approximate continuum inner products. * In Theorem <ref>, we prove that if the individual filters converge as n→∞, then so does the entire MNN. The rate of convergence will depend on (i) the rate of convergence of the individual filters; (ii) the weights used in the network; (iii) the depth of the network. Importantly, we note that our dependence on the depth of the network is linear, rather than the exponential dependence obtained in previous work. Additionally, our rate does not directly depend on the number of filters used per layer. We also note that Theorem <ref> does not assume that the filters have any particular form. Therefore, if one were to prove results analogous to Theorem <ref> for non-spectral filters, then Theorem <ref> would immediately imply the convergence of networks constructed from those filters. * We then provide several corollaries to Theorem <ref>, which give concrete examples of our results in special cases of interest in Corollaries <ref>, <ref>, <ref>, and <ref>. These results may be summarized as follows: * If the filters are implemented spectrally, then the discretization error of the entire MFCN tends to zero at a rate depending on how fast the eigenvalues/eigenvectors of the Laplacian corresponding to the data-driven graph 𝐆_n converge to the eigenvalues/eigenfunctions of the continuum Laplacian and how fast discrete inner products converge to continuum inner products. 
* If 𝐆_𝐧 is constructed via a Gaussian kernel and the filters are implemented spectrally, then (up to log factors) the discretization error is 𝒪(n^-2/(d+6)). * If 𝐆_𝐧 is constructed via a k-NN graph or an ϵ-graph and the filters are implemented spectrally, then (up to log factors) the discretization error is 𝒪(n^-1/(d+4)). §.§ Notation We let ℳ be a compact, connected, d-dimensional Riemannian manifold with normalized Riemannian volume form μ such that μ(ℳ)=1. We let 𝐋^2(ℳ) denote the set of functions that are square integrable with respect to μ and 𝒞(ℳ) denote the set of continuous functions on ℳ. We let ℒ=-div∘∇ denote the Laplace-Beltrami operator and let {ϕ_i}_i=1^∞ denote an orthonormal basis of eigenfunctions ℒϕ_i=λ_iϕ_i, with 0=λ_1<λ_2≤…. We will use these eigenfunctions to define Fourier coefficients denoted by f(i). In much of our analysis, we will assume that ℳ is unknown and that we only have access to a function f∈𝒞(ℳ) evaluated at sample points {x_j}_j=1^n⊆ℝ^D. In this setting, we will let P_n:𝒞(ℳ)→ℝ^n be the normalized evaluation operator (P_nf)(i)=1/√(n)f(x_i), and let 𝐆_n denote a graph whose vertices are the sample points x_j. We will let 𝐋_n denote the graph Laplacian associated to 𝐆_n and let ϕ_i^n be an orthonormal basis of eigenvectors, 𝐋_nϕ_i^n=λ^n_iϕ_i^n, 0=λ^n_1≤λ^n_2≤…≤λ^n_n. Analogous to the continuous setting, we will use the ϕ_i^n to define discrete Fourier coefficients 𝐱(i). In this paper, we consider a family of neural networks to process functions defined on ℳ. Towards this end, we will let F=(f_1,…,f_C) denote a row-vector valued function and let F^(ℓ) denote the hidden representation in the ℓ-th layer of our network, with F^(0)=F. When we approximate our network on 𝐆_n, we will instead assume that we are given an n× C data matrix 𝐗=(𝐱_1,…,𝐱_C). §.§ Organization The rest of this paper is organized as follows. In Section <ref>, we will provide an overview of spectral convolution on manifolds, explain how to implement such networks on point clouds, and state a theorem providing sufficient criteria for the discrete point-cloud implementation to converge to the continuum limit as the number of sample points tends to infinity. In Section <ref>, we introduce manifold-filter combine networks, discuss several examples of networks contained in our framework, and state a theorem showing that a discrete point cloud implementation converges to the continuum limit as well as several corollaries focusing on specific graph constructions. In Appendices <ref> and <ref>, we will prove the theorems stated in Sections <ref> and <ref>. We will conduct numerical experiments in Section <ref>, before providing a brief conclusion in Section <ref>. § SPECTRAL CONVOLUTION ON MANIFOLDS As alluded to in the introduction, the extension of convolutional methods to the manifold setting is non-trivial because there is no natural notion of translation. Many possible solutions to this problem have been proposed including methods based on parallel transport <cit.>, local patches <cit.>, or Fréchet means <cit.>. In this section, we will focus on spectral methods that rely on a generalized Fourier transform defined in terms of the eigendecomposition of the Laplace-Beltrami operator. Let ℳ be a compact d-dimensional Riemannian manifold without boundary, and let ℒ be the Laplace-Beltrami operator on ℳ. It is well-known that ℒ has an orthonormal basis of eigenfunctions {ϕ_i}_i=1^∞ with ℒϕ_i=λ_iϕ_i, λ_i≥ 0. 
This implies that for f∈𝐋^2(ℳ), we may write f=∑_i=1^∞f(i) ϕ_i, where, for 1≤ i <∞, f(i) is the generalized Fourier coefficient defined by ⟨ f,ϕ_i⟩_𝐋^2(ℳ). Motivated by the convolution theorem in real analysis, we will define manifold convolution as multiplication in the Fourier domain. In particular, given a bounded measurable function w:[0,∞)→ℝ, we define a spectral convolution operator, w(ℒ):𝐋^2(ℳ)→𝐋^2(ℳ) by w(ℒ)f=∑_i=1^∞ w(λ_i) f(i) ϕ_i. By Plancherel's theorem, we may observe that w(ℒ)f_𝐋^2(ℳ)=(∑_i=1^∞ |w(λ_i)|^2|f(i)|^2)^1/2≤w_𝐋^∞([0,∞))f_𝐋^2(ℳ). Additionally, we note that since these spectral convolution operators are defined in terms of a function w:[0,∞)→ℝ, one may verify that w(ℒ) does not depend on the choice of the orthonormal basis {ϕ_i}_i=1^∞. (See for example Remark 1 of <cit.>.) In our analysis of such filters, similar to <cit.> and <cit.>, we will assume that w is Lipschitz, and let A_Lip denote the smallest constant such that for all a,b∈[0,∞) we have |w(a)-w(b)| ≤ A_Lip(w)|a-b|. We will also assume that either f or w(ℒ) is bandlimited as defined below. Let κ>0, let f∈𝐋^2(ℳ), and let w(ℒ) be a spectral filter. We say that f is κ-bandlimited if f(i)=0 for all i>κ. Similarly, w(ℒ) is said to be κ-bandlimited if w(λ_i)=0 for all i>κ. §.§ Implementation of Spectral Filters on Point Clouds In many applications of interest, one does not know the manifold ℳ. Instead, one is given access to finitely many sample points x_1,…,x_n∈ℝ^D and makes the modeling assumption that these sample points lie upon (or near) an unknown d-dimensional Riemannian manifold for some d≪ D. In this setup, it is non-trivial to actually implement a neural network since one does not have global knowledge of the manifold. Here, we will use an approach based on manifold learning <cit.> where we construct a data-driven graph 𝐆_n, whose vertices are the sample points x_1,…,x_n, and use the eigenvectors and eigenvalues of the graph Laplacian 𝐋_n to approximate the eigenfunctions and eigenvalues of the Laplace-Beltrami operator. As we will discuss below, there are numerous methods for constructing 𝐆_n including k-nn graphs, ϵ-graphs, and graphs derived from Gaussian kernels. More specifically, we let {ϕ_i^n}_i=1^n be an orthonormal basis of eigenvectors, 𝐋_n ϕ_i^n = λ_i^n ϕ_i^n, 0=λ_1^n≤λ_2^n≤…≤λ_n^n, and analogous to (<ref>) we will write 𝐱=∑_i=1^n 𝐱(i) ϕ_i^n, 𝐱(i)=⟨𝐱,ϕ^n_i⟩_2 for 𝐱∈ℝ^n. We then define a discrete approximation of w(ℒ) by w(𝐋_n)𝐱=∑_i=1^n w(λ^n_i) 𝐱(i) ϕ^n_i. Our hope is that if 𝐆_n is constructed properly, then w(𝐋_n)P_nf-P_nw(ℒ)f_2 will converge to zero as n tends to infinity, where P_n:𝒞(ℳ)→ℝ^n is the normalized evaluation operator defined as in (<ref>). Notably, in order to bound w(𝐋_n)P_nf-P_nw(ℒ)f_2 we must account for three sources of discretization error: * The graph eigenvalue λ_i^n does not exactly equal the manifold eigenvalue λ_i. Intuitively, this should yield an error on the order of α_i,nA_Lip(w), where α_i,n=|λ_i-λ_i^n|. * The graph eigenvector ϕ_i^n does not exactly equal P_nϕ_i, the discretization of the true continuum eigenfunction. One may anticipate this yielding errors of the order β_i,n, where β_i,n=ϕ_i^n-P_nϕ_i_2. * The discrete Fourier coefficient 𝐱(i) is not exactly equal to f(i). Since Fourier coefficients are defined in terms of inner products, one expects this error to be controlled by a term γ_n which describes how much discrete inner products ⟨ P_n f,P_n g⟩_2 differ from continuum inner products ⟨ f,g⟩_𝐋^2(ℳ).
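The discrete pipeline just described – build a graph Laplacian from the sample points, expand the discretized signal in its eigenvectors, and reweight the coefficients by w – can be sketched in a few lines of code. The sketch below samples points from the unit sphere, builds an unnormalized Gaussian-kernel graph Laplacian (the normalization constants appearing in the examples discussed below are omitted, so the eigenvalues are only correct up to scaling), and applies a heat-kernel-type filter to the discretized signal P_nf; the bandwidth, the test function f (the first coordinate), and all names are our own illustrative choices, not the implementation used for the paper's experiments.

import numpy as np

def spectral_filter(L_n, x, w, kappa=None):
    # Apply w(L_n) x = sum_i w(lambda_i^n) <x, phi_i^n> phi_i^n.
    lam, phi = np.linalg.eigh(L_n)        # eigenvalues ascending, columns orthonormal
    if kappa is not None:                 # optional bandlimiting to the first kappa eigenpairs
        lam, phi = lam[:kappa], phi[:, :kappa]
    x_hat = phi.T @ x                     # discrete Fourier coefficients of x
    return phi @ (w(lam) * x_hat)

rng = np.random.default_rng(0)
pts = rng.normal(size=(200, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)      # n = 200 samples on the 2-sphere
sq_dists = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(axis=-1)
W = np.exp(-sq_dists / 0.3)                            # Gaussian kernel, bandwidth eps = 0.3
np.fill_diagonal(W, 0.0)
L_n = np.diag(W.sum(axis=1)) - W                       # kernel graph Laplacian (constants omitted)
f_vals = pts[:, 0] / np.sqrt(len(pts))                 # x = P_n f for f(x) = first coordinate
y = spectral_filter(L_n, f_vals, w=lambda lam: np.exp(-0.1 * lam), kappa=50)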
Combining these sources of error, and letting α_n=max_iα_i,n,β_n=max_iβ_i,n, one anticipates that if either f or w(ℒ) is κ bandlimited, then the total error will be 𝒪(κ(α_nA_Lip(w)+β_n+γ_n)). This intuition is formalized in the following theorem. For a proof, please see Appendix <ref>. Let w:[0,∞)→ℝ, w_𝐋^∞([0,∞))≤ 1, let f∈𝐋^2(ℳ) be a continuous function, and assume that either f or w(ℒ) is κ-bandlimited. Assume that there exist sequences of real numbers {α_n}_n=1^∞, {β_n}_n=1^∞, {γ_n}_n=1^∞, with lim_n→∞α_n=lim_n→∞β_n=lim_n→∞γ_n=0, such that for all 1≤ i ≤κ and for n sufficiently large, we have |λ_i-λ^n_i|≤α_n, P_nϕ_i-ϕ_i^n_2≤β_n, |⟨ P_nf, P_ng ⟩_2 - ⟨ f,g⟩_𝐋^2(ℳ)| ≤γ_n^2fg_𝐋^∞(ℳ), Then for n large enough such that (<ref>) holds and α_n,β_n,γ_nκ^1/2≤ 1, we have w(𝐋_n)P_nf-P_nw(ℒ)f_2≤ C_ℳκ((A_Lip(w)α_n+β_n)f_𝐋^2(ℳ)+γ_nf_𝐋^∞(ℳ)). Furthermore, for all n large enough such that (<ref>) holds and α_n,β_n,γ_nκ^1/2≤ 1 and all 𝐱∈ℝ^n, we have w(𝐋_n)𝐱-P_nw(ℒ)f_2≤𝐱-P_nf_2 + C_ℳκ((A_Lip(w)α_n+β_n)f_𝐋^2(ℳ)+γ_nf_𝐋^∞(ℳ)), where, in both (<ref>) and (<ref>), C_ℳ is a constant depending on the geometry of ℳ. In particular, if 𝐱=P_nf, (<ref>) implies that lim_n→∞w(𝐋_n)𝐱-P_nw(ℒ)f_2=0. Inspecting the proof of Theorem <ref>, one may note that A_Lip(w) may actually be replaced by the Lipschitz constant on the smallest interval containing all λ_i and all λ_i^n, 1≤ i ≤κ, where λ_i≠λ_i^n. This means that, if f is bandlimited, our result may be applied to any continuously differentiable function w. Moreover, for most common graph constructions, we have λ_1=λ_1^n=0 and 0<λ_2,λ_2^n. This implies that our theorem can be applied to any w which is continuously differentiable on (0,∞) even if, for example, lim_t→ 0^+w'(t)=+∞ (which is the case for certain wavelets, such as those considered in <cit.>). Additionally, we note that with minor modifications, results similar to Theorem <ref> may be obtained for functions or filters which are approximately bandlimited in the sense that either sup_k>κ|w(λ_k)| or ∑_k>κ|f(k)|^2 are sufficiently small. In these cases, we will have lim sup_n→∞w(𝐋_n)𝐱-P_nw(ℒ)f_2 ≤sup_k>κ|w(λ_k)|f_𝐋^2(ℳ) or lim sup_n→∞w(𝐋_n)𝐱-P_nw(ℒ)f_2 ≤w_∞(∑_k>κ|f(k)|^2)^1/2. In particular, results similar to Theorem <ref> may be obtained for filters w_t(λ) e^-tλ, which correspond to the heat kernel. In the following section, we will consider neural networks constructed from spectral filters and use Theorem <ref> to show that discrete approximations of such networks converge to their continuum limit as n→∞. However, first, we will consider several examples of graph constructions where estimates for α_n and β_n are known. In all of the examples below, we will assume that the data points x_i are generated i.i.d. uniformly at random (with respect to the normalized Riemannian volume form μ). In this setting, Lemma 5 of <cit.> implies that with probability at least 1 - 𝒪(1/n^9) we have γ_n = (18log(n)/n)^1/4. We note that in <cit.> the inequality (<ref>) was derived via Hoeffding's inequality which is why the definition of γ_n involves the ℓ^∞ norm of fg. However, if one were to use a different method, such as Bernstein's inequality to derive bounds for |⟨ P_nf, P_ng ⟩_2 - ⟨ f,g⟩_𝐋^2(ℳ)| in terms of other norms, then all of our proof techniques could likely be pushed through to obtain results similar to Theorem <ref>. [Gaussian Kernels] One simple way to construct a graph is with a Gaussian kernel. 
Specifically, given a bandwidth parameter ϵ, we define a weighted adjacency matrix 𝐖_ϵ whose entries are given by [𝐖_n,ϵ]_i,j = 1/nϵ^1 + d/2e^-𝐱_i - 𝐱_j_2^2 / ϵ and let 𝐃_n,ϵ be the corresponding diagonal degree matrix. Then the associated graph Laplacian 𝐋_n,ϵ is 𝐋_n, ϵ = 𝐃_n, ϵ-𝐖_n, ϵ. In this case, if ϵ∼ n^-2/(d+6), and the data points x_i are generated i.i.d. uniformly at random, then Theorem 5.4 of <cit.> implies that, under mild assumptions, we may choose α_n = C_ℳ n^-2/d+6, β_n = C_ℳ n^-2/d+6√(log(n)), with probability at least 1 - 𝒪(1/n^9)[For details on how to deduce (<ref>) from Theorem 5.4 of <cit.> we refer the reader to Remark 1 of <cit.> and the proof of Theorem 10 of <cit.>.]. Estimates such as these were used to analyze the convergence of the manifold scattering transform on Gaussian-kernel graphs in <cit.> and more general MNNs in <cit.> and <cit.>. While constructing a graph from a kernel is simple, it has the drawback of producing dense graphs which pose computational issues for large values of n. Therefore, we also consider two methods for constructing sparse graphs that have previously been analyzed in works such as <cit.> and <cit.>. [ϵ-graphs] Let ϵ>0, let η:[0,∞)→ [0,∞) be a nonincreasing function supported on the interval [0,1] such that η(1/2)>0 and the restriction of η to [0,1] is Lipschitz continuous. A weighted ϵ-graph is constructed by placing an edge between all x_i,x_j such that |x_i-x_j|≤ϵ. Then, if x_i and x_j are connected by an edge, the corresponding entry in a weighted adjacency matrix is given by [𝐖_n,ϵ]_i,j=η(|x_i-x_j|/ϵ). The ϵ-graph Laplacian is then given by 𝐋=c_η/nϵ^d+2(𝐃_n,ϵ-𝐖_n,ϵ), where c_η is the constant c_η = ∫_ℝ^d |y_1|^2 η(|y|)dy, and y_1 is the first coordinate of a vector y ∈ℝ^d, and 𝐃_n,ϵ is the weighted degree matrix corresponding to 𝐖_n,ϵ. Theorems 2.4 and 2.7 of <cit.> show, for example, that if ϵ is chosen as ϵ∼ ( log(n)/n )^1/d+4, then, under mild assumptions, we may choose α_n = C_ℳ ( log(n)/n )^1/d+4, β_n = C_ℳ ( log(n)/n )^1/d+4 with probability at least 1 - 𝒪(n^-9). Estimates similar to (<ref>) were used to analyze the convergence of MNNs on ϵ-graphs in <cit.> and <cit.>. The graph Laplacians of ϵ-graphs are sparse by construction, and their sparsity is indirectly controlled by the length scale parameter ϵ. To directly control the sparsity of the graph Laplacian in an adaptive manner without specifying a length scale, one may also consider k-NN graphs. [k-NN graphs] For a positive integer k, symmetric k-Nearest Neighbor (k-NN) graphs are constructed by placing an edge between x_i and x_j if x_j is one of the k closest points to x_i (with respect to the Euclidean distance) or[One might also consider mutual k-NN graphs where we require x_i to be one of the k closest points to x_j and x_j to be one of the k-closest points to x_i. However, such graphs are not analyzed in the theorem we cite from <cit.>.] if x_i is one of the k closest points to x_j. Then, the edges can be given weights in a manner similar to <Ref>. Formally, let ϵ_k(x_i) denote the distance from x_i to its k-th closest neighbor (with respect to Euclidean distance) and let r_k(x_i,x_j) max{ϵ_k(x_i),ϵ_k(x_j)}. Then, if x_i and x_j are connected by an edge in the k-NN graph, the corresponding entry in a weighted adjacency matrix is given by [𝐖_n,k]_i,j = η ( |x_i - x_j|/r_k(x_i,x_j) ) where η satisfies the same assumptions as in <Ref>. Note that if η(t) = χ_[0,1](t), then we obtain the standard unweighted k-NN graph. 
The k-NN graph Laplacian is then given by 𝐋_n,k=c_η/n(nc_d/k)^1+2/d(𝐃_n,k-𝐀_n,k), where c_η is defined as in <Ref>, c_d is the volume of the d-dimensional Euclidean unit ball, 𝐖_n,k is the unweighted adjacency matrix associated with the k-NN graph, and 𝐃_n,k is the corresponding degree matrix. If η(t) = χ_[0,1](t), then c_η = c_d/d+2. Theorems 2.5 and 2.9 of <cit.> show that, for example, if k is chosen as k ∼log(n)^d/d+4 n^4/d+4, then, under mild assumptions, we may choose α_n = C_ℳ ( log(n)/n )^1/d+4, β_n = C_ℳ ( log(n)/n )^1/d+4 with probability at least 1 - 𝒪(n^-9). Corollary <ref> stated in Section <ref> applies these estimates to establish the convergence of MFCNs for k-NN graphs. To the best of our knowledge, this is the first result to establish a quantitative rate of convergence for MNNs in this setting. Comparing the examples above, we see that the rates of convergence are faster for dense graphs. Therefore, they may be preferable when n is only moderately large, but one still desires a good approximation of the continuum. However, for very large n, dense graphs become expensive to store in memory. Therefore, one might instead prefer to utilize either ϵ- or k-NN graphs. We also note that the theorems discussed above do not explicitly guarantee that P_nϕ_i≈ϕ_i^n. Instead, they show that P_nϕ_i≈±ϕ_i^n. However, as discussed earlier our spectral filters do not depend on the choice of orthonormal basis. Therefore, we may ignore this issue when applying Theorem <ref>. § MANIFOLD FILTER-COMBINE NETWORKS In this section, we introduce a novel framework for thinking about manifold neural networks. We will refer to the networks we consider as Manifold Filter-Combine Networks paralleling the aggregate-combine framework commonly used in the graph setting (see, e.g., <cit.>). Here, we will use the term filter, rather than aggregate because our filters may be arbitrary linear operators on 𝐋^2(ℳ) (which in most examples will be defined in terms of some notion of convolution) and are not required to be localized averaging operations. Much of our analysis (except for Theorem <ref>) focuses on the case that the filtering step is implemented in the spectral domain. In this case, the class of all MFCN coincides with the class of MNNs considered in previous work such as <cit.>. However, even in the spectral case, we find that the filter-combine paradigm is a useful framework for thinking about MNNs since it naturally leads one to many interesting subclasses of networks and also allows us to obtain convergence rates that do not directly depend on the width of the network. We will assume that our input data is a row-vector[We define the output of F to be ℝ^1× C in order to highlight the parallels with the data matrices commonly considered in the GNN literature where rows correspond to vertices and columns correspond to features.] valued function F∈𝐋^2(ℳ,ℝ^1× C), F=(f_1,…,f_C), where each f_i∈𝐋^2(ℳ). 
Each hidden layer of the network will consist of the following five steps: (i) filtering each input channel f_k by a family of linear operators W_j, 1≤ j≤ J, (ii) For each fixed j, we combine the filtered feature functions f̃_j,k=(W_jf_k) into new feature functions g_j,k where each g_j,k is a linear combination of the f̃_j,k, (iii) For each fixed k, we perform a cross-channel convolution that maps { g_j,k}_j=1^J to {g̃_j,k}_j=1^J' where each g̃_j,k is a linear combination of the g_j,k, (iv) apply some non-linear, nonexpansive pointwise activation function σ to each of the g̃_j,k, to obtain h_j,k=σ∘g̃_j,k, (v) reshape the collection of functions {h_i,j}_1≤ i ≤C̃,1≤ j≤ J' into {f'_i}_i=1^C', where C'=C̃J'. In many applications, it may be sufficient to use a common filter bank {W_j}_1≤ j≤ J for all input channels. However, in other settings, it may be useful to give the network additional flexibility to learn different filters along different input signals. Therefore, for the sake of generality, we actually define the filtering step by f̃_j,k=(W_j,kf_k), where for each fixed k, {W_j,k}_1≤ j ≤ J is a collection of linear operators (i.e., filters) to be applied to the input channel f_k. Explicitly, we define our layerwise update rule in the following manner. Let F^(0)=F, C_0=C and given F^(ℓ)=(f_1^(ℓ),…,f_C_ℓ^(ℓ)), we define F^(ℓ+1)=(f_1^(ℓ+1),…,f_C_ℓ+1^(ℓ+1)) via: Filtering: f̃^(ℓ)_j,k=W^(ℓ)_j,kf^(ℓ)_k, 1≤ j ≤ J_ℓ, 1≤ k≤ C_ℓ Combine: g_j,k^(ℓ)=∑_i=1^C_ℓf̃^(ℓ)_j,iθ^(ℓ,j)_i,k, 1≤ j≤ J_ℓ, 1≤ k ≤ C'_ℓ Cross-Channel Convolution: g̃_j,k= ∑_i=1^J_ℓα^(ℓ,k)_j,ig_i,k, 1≤ j≤ J_ℓ',1≤ k≤ C'_ℓ Activation: h_j,k^(ℓ)=σ^(ℓ)∘g̃_j,k^(ℓ), 1≤ j≤ J_ℓ, 1≤ k ≤ C'_ℓ Reshaping: f^(ℓ+1)_(j-1)C_ℓ+k = h^(ℓ)_j,k, 1≤ j≤ J_ℓ',1≤ k≤ C'_ℓ, where C_ℓ+1=J'_ℓ C_ℓ', and the reshaping operator allows for multiple layers to be stacked upon each other. Importantly, we note one may effectively omit the combine step by setting the matrix Θ^(ℓ,j)(θ_i,k^(ℓ,j))_1≤ i,k≤ C_ℓ equal to the identity matrix for each ℓ and j. Similarly, one may omit the cross-channel convolutions by setting the matrices (α_j,i^(ℓ,k))_1≤ i,j≤ J_ℓ to the identity. Additionally, we note that since we allow for the possibility of using different filters along each channel, it is, in general, possible to write the same network as an MFCN in more than one way. For instance, if one fixes the cross channel convolutions equal to the identity, uses a shared filter bank {W^(ℓ)_j}_1≤ j ≤ J (independent of k) and chooses the combine step to be independent of j (i.e. θ_i,k^(ℓ,j)=θ_i,k^(ℓ)) then we have f^(ℓ+1)_(j-1)C_ℓ+k = σ^(ℓ)(∑_i=1^C_ℓW^(ℓ)_jθ^(ℓ)_i,kf_i), which may also be obtained by using filters of the form W^(ℓ)_(j-1)C_ℓ+k,i=W_jθ^(ℓ)_i,k and using a combine step with θ̃_i,k^(ℓ,j)=1. Therefore, the set of networks that may be obtained by setting θ_i,k^(ℓ,j)=1 is just as large as the set of all MFCN. A similar conclusion holds for the cross-channel convolutions. Therefore, in the case where all filters are implemented in the spectral domain, the class of MFCNs is actually the same as the class of MNNs considered in previous work such as <cit.> (see Example <ref> below). However, as alluded to earlier, we find that thinking of the filtering, combination, and cross-channel convolutions steps separately is a useful framework for a couple of reasons. 
First, it facilitates our mathematical analysis of the convergence rate obtained in Corollary <ref> and in particular allows us to produce rates that depend only linearly on the depth of the network and do not directly depend on the network's width. Second, it highlights a variety of natural subclasses of networks that may be useful for various data sets or tasks of interest. For instance, each piece of the architecture can either be designed in advance or learned from data. Moreover, one may choose to use a common filter bank W_j, 1≤ j≤ J, for all input functions and in all layers, or one may choose to use different filters in each layer and/or for each signal. Below we will consider several examples of such classes, but first, we remark that our analysis does not depend on the order in which the steps are performed. Therefore, the theoretical guarantees obtained in Theorem <ref> and Corollary <ref> also apply, for example, to networks in which the cross-channel convolutions occur after the activation. Additionally, we note that one may make different choices in each layer. For example, one may use a hand-crafted filter bank in the first several layers and then a learnable filter bank in the later layers. Similarly, the activation functions may vary from one layer to the next. However, we will often suppress the dependence of the activation function on the layer and simply write σ in place of σ^(ℓ).
[Different Filters Along Each Channel] If we set the cross-channel convolution equal to the identity, set C_ℓ'=1, and set θ_i,k^(ℓ,j)=1, then we obtain the layerwise update rule f^(ℓ+1)_j=σ(∑_k=1^CW^(ℓ)_j,kf_k). If each of the W_j,k^(ℓ)=w^(ℓ)_j,k(ℒ) is a spectral filter (as defined in Section <ref>), we then obtain the layerwise update rule f^(ℓ+1)_j=σ(∑_k=1^Cw^(ℓ)_j,k(ℒ)f_k), which was introduced in <cit.> and has been subsequently studied in <cit.>. Notably, in this example the reshaping operator is the identity (since C'_ℓ=1) and the filters W_j,k^(ℓ) depend on both the layer ℓ and the input channel k. As mentioned above (see the discussion surrounding (<ref>)), this class of networks is the most general and actually includes all MFCNs. However, considering, e.g., the filter and combine steps separately helps facilitate our analysis. For instance, our rate of convergence obtained in Theorem <ref> depends on max_j,k(∑_i=1^C_ℓ |θ_i,k^(ℓ,j)|), but unlike the results obtained in previous work does not directly depend on the width of the network. In particular, if we set θ_i,k^(ℓ,j)=1/C_ℓ, then we have max_j,k(∑_i=1^C_ℓ |θ_i,k^(ℓ,j)|)=1.
[Shared Filter Banks Along Each Channel] In order to reduce the number of trainable parameters, it may be useful to utilize a (learned) filter bank which is shared across all input channels and a combination matrix which is shared across all filters. In this case, one obtains a layerwise update rule of the form (<ref>). Such networks may loosely be thought of as a low-rank subset of the more general networks discussed in Example <ref>. (In this setting, since the filter banks are learned, there is still no need for cross-channel convolutions.) Due to the irregularity of the data geometry, many popular GNNs such as the GCN of Kipf and Welling <cit.> use predesigned aggregations and incorporate learning through the combine steps. The next example discusses the analog of such networks on manifolds.
[MCNs] Set the cross-channel convolutions equal to the identity and let J=J'=1.
Let A be a fixed operator which should be thought of as either a low-pass filter or a localized averaging operator, and set W^(ℓ)_i,1=A for all i. Let the matrix Θ^(ℓ) = (θ^(ℓ,1)_i,k)_1≤ i≤ C_ℓ,1≤ k ≤ C'_ℓ be a learnable weight matrix. Then our layerwise update rule becomes f_k^(ℓ+1)=σ(∑_i=1^C_ℓAf_iθ_i,k^(ℓ,1)), which may be written compactly as F^(ℓ+1)=σ(AF^(ℓ)Θ^(ℓ)). Therefore, we obtain a network similar to the GCN of Kipf and Welling which we refer to as the manifold convolutional network (MCN). Notably, A can be designed in a variety of ways, but one possible choice is to define it in the spectral domain via a non-increasing function w, such as an idealized low-pass filter w(λ)=1_λ≤ a or the filter w(λ)=e^-tλ, which corresponds to convolution against the heat kernel. Additionally, one could consider the filter bank consisting of powers of A, i.e., W^(ℓ)_j=A^j, 1≤ j ≤ J, use a different combine matrix in each channel, and employ a simple cross-channel convolution by setting α_j,i^(ℓ,k)=1. In this case, one obtains a layerwise update rule of the form F^(ℓ+1)=σ(∑_j=1^JA^jF^(ℓ)Θ^(ℓ,j)), which can be thought of as the manifold analog of the higher-order GCNs considered in work such as <cit.>. Similar to the above example, one could also consider the manifold analogs of other popular spectral GNNs such as ChebNet <cit.> or CayleyNet <cit.>. Our framework also includes the manifold scattering transform.
[Hand-Crafted Scattering Networks] Let {W_j}_j=1^J be a predesigned collection of filters, which are thought of as wavelets and do not depend on the layer or the input channel. Set the combine and cross-channel convolutions equal to the identity. One then obtains an entirely predesigned, multilayered network known as the manifold scattering transform. Such networks were considered in <cit.> in order to analyze the stability and invariance properties of deep learning architectures defined on manifolds, building off of analogous work for Euclidean data <cit.> and graphs <cit.>.
[Learnable Scattering Networks] For both Euclidean data and graphs, there have been a variety of papers that have introduced learning into the scattering framework. In the Euclidean setting, <cit.> created a network that acts as a hybrid of the scattering transform and a CNN, using predesigned wavelet filters in some layers and learnable filters in others. Subsequent work by <cit.> introduced learning in a different way, incorporating cross-channel convolutions into an otherwise predesigned network. One may construct an analogous MFCN that corresponds to utilizing a predesigned filter bank {W_j}_j=1^J which is shared across all channels, setting the combine step equal to the identity, and letting α_j,i^(ℓ,k) be learnable. (Traditionally, scattering networks have used |·| as the activation function, but one could readily use other choices instead.) In the graph setting, <cit.> incorporated learning into the scattering framework by utilizing predesigned wavelet filters, but learnable combine matrices (along with a few other features to boost performance). In a different approach, <cit.> sought to relax the graph scattering transform by replacing dyadic scales 2^j with an increasing sequence of scales t_j which are learned from data via a selector matrix. To obtain an analogous MFCN, we set W_j=e^-jℒ for 0≤ j ≤ J, which diffuses the input signal over the manifold at different time-scales, corresponding to the diffusion module utilized in <cit.>.
We then set the combination step equal to the identity and learn relationships between the diffusion scales via cross-channel convolutions (where the cross-channel convolutions utilized in <cit.> have a certain structure that encourages the network to behave in a wavelet-like manner). Additionally, as has previously been noted in <cit.>, these two forms of learnable geometric scattering are compatible and one could readily utilize learnable combine steps while also using cross-channel convolutions to learn relationships between diffusion scales. Lastly, we also note that our framework includes simple multilayer perceptrons. [Multilayer Perceptron] If one sets J_ℓ=1 and sets both W_1,k^(ℓ) and the cross-channel convolution to be the identity operator then one obtains a simple dense layer that does not utilize the geometry of the manifold. In some sense, this is contrary to our goal of developing networks that utilize the manifold structure of the data. However, including some simple dense layers might nevertheless be useful for, for example, reducing the number of channels in the network. §.§ Implementation from point clouds As alluded to earlier, in many applications one does not have global knowledge of the manifold ℳ and merely has access to n data points {x_j}_j=1^n and evaluations of F at those data points. This leads us to recall the normalized evaluation operator (P_nf)(j)=1/√(n)f(x_j) and approximate F by an n× C data matrix 𝐗=(𝐱_1,…,𝐱_C), where 𝐱_k=P_nf_k. One may then implement an approximation of the network via the discrete update rules. Filtering: 𝐱̃^(ℓ)_j,k=𝐖^(ℓ)_j,k𝐱^(ℓ)_k, 1≤ j ≤ J_ℓ, 1≤ k≤ C_ℓ Combine: 𝐲_j,k^(ℓ)=∑_i=1^C_ℓ𝐱̃^(ℓ)_j,iθ^(ℓ,j)_i,k, 1≤ j≤ J_ℓ, 1≤ k ≤ C'_ℓ Cross-Channel Convolution: 𝐲̃^(ℓ)_j,k= ∑_i=1^J_ℓα^(ℓ,k)_j,i𝐲_i,k, 1≤ j≤ J_ℓ',1≤ k≤ C'_ℓ Activation: 𝐳_j,k^(ℓ)=σ∘𝐲̃_j,k^(ℓ), 1≤ j≤ J_ℓ, 1≤ k ≤ C'_ℓ Reshaping: 𝐱^(ℓ+1)_(j-1)C_ℓ+k = 𝐳^(ℓ)_j,k, 1≤ j≤ J_ℓ',1≤ k≤ C'_ℓ where 𝐖_j,k^(ℓ) is a matrix which acts as a discrete approximation of W_j,k^(ℓ). The following theorem shows that the discrete implementation will converge to its continuum counterpart in the sense that P_n F^(ℓ)≈𝐗^(ℓ) if the matrices 𝐖_j,k^(ℓ) are designed so that 𝐖_j,k^(ℓ)P_n f_k^(ℓ)≈ P_n W_j,kf_k^(ℓ). For a proof, please see Appendix <ref>. Let f ∈𝒞(ℳ), and suppose that for all ℓ, there exists ϵ_ℓ>0 such that we have P_nW_j,k^(ℓ)f_k^(ℓ)-𝐖^(ℓ)_j,k𝐱^(ℓ)_k_2 ≤𝐱^(ℓ)_k-P_nf_k^ℓ_2+ ϵ_ℓ,n for all 1≤ k ≤ C_ℓ. Let A_1^(ℓ)=max_j,k(|∑_i=1^C_ℓ |θ_i,k^(ℓ,j)|), A_2^(ℓ)=max_j,k(∑_i=1^J_ℓ |α_j,i^(ℓ,k)|) and assume that σ is non-expansive, i.e. |σ(x)-σ(y)|≤ |x-y|. Then, 𝐱_k^ℓ-P_nf_k^ℓ_2≤∑_i=0^ℓ-1∏_j=i^ℓ-1 A_1^(j) A_2^(j)ϵ_i,n. Notably, Theorem <ref> does not assume the filters are constructed in the spectral domain nor does it assume they have any particular form. It is a general result that shows that if individual filters converge, then so does the multilayer network. Moreover, if the weights α_j,i^(ℓ,k) and θ_j,i^(ℓ,j) are normalized so that the A_1^(j)=A_2^(j)=1, then the rate of the convergence is linear in the depth of the network. This is in contrast to previous results in <cit.> whose rate of convergence featured an explicit exponential dependence on the depth of the network. (A similar exponential dependence was also encountered in <cit.> where the limiting object is a graphon rather than a manifold.) 
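To make the discrete update rules above concrete, here is a minimal sketch (our own, with our own variable names, not the authors' implementation) of a single filter-combine layer acting on an n × C data matrix; the filter matrices are assumed to be precomputed, for example as spectral filters w_{j,k}(𝐋_n) built from the graph Laplacian as described earlier.

```python
# Illustrative sketch (ours): one discrete MFCN layer following the
# filter / combine / cross-channel-convolution / activation / reshape steps.
# X has shape (n, C): n sample points, C input channels.
import numpy as np

def mfcn_layer(X, filters, Theta, Alpha, sigma=lambda t: np.maximum(t, 0.0)):
    """
    filters: array of shape (J, C, n, n); filters[j, k] is the matrix W_{j,k}.
    Theta:   array of shape (J, C, C_out); Theta[j] is the combine matrix.
    Alpha:   array of shape (C_out, J_out, J); Alpha[k] holds the cross-channel
             convolution weights for output channel k.
    Returns an (n, J_out * C_out) array, i.e., the reshaped output channels.
    """
    n, C = X.shape
    J = filters.shape[0]
    # Filtering: x_tilde[j, :, k] = W_{j,k} x_k
    X_tilde = np.stack([np.stack([filters[j, k] @ X[:, k] for k in range(C)], axis=1)
                        for j in range(J)])                      # (J, n, C)
    # Combine: y[j] = x_tilde[j] @ Theta[j]
    Y = np.stack([X_tilde[j] @ Theta[j] for j in range(J)])      # (J, n, C_out)
    # Cross-channel convolution: y_tilde[:, :, k] = Alpha[k] @ y[:, :, k]
    C_out = Y.shape[2]
    Y_tilde = np.stack([np.einsum("pj,jn->pn", Alpha[k], Y[:, :, k])
                        for k in range(C_out)], axis=2)          # (J_out, n, C_out)
    # Activation and reshaping into J_out * C_out output channels.
    Z = sigma(Y_tilde)
    J_out = Z.shape[0]
    return np.concatenate([Z[j] for j in range(J_out)], axis=1)  # (n, J_out * C_out)
```

Setting each Theta[j] to the identity recovers a pure filter bank, and setting each Alpha[k] to the identity omits the cross-channel convolution, mirroring the remarks above.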
Combining Theorem <ref> with Theorem <ref> immediately leads to the following corollary which gives a quantitative rate of convergence for Manifold Filter-Combine Networks constructed utilizing spectral filters when either the filter or the input signals are bandlimited. Notably, if one proves theorems analogous to Theorem <ref> for other classes of filters (constructed either by spectral or not spectral methods) such as the α-FDT filters considered in <cit.> or the closely related γ-FDT filters considered in <cit.>, then one may immediately obtain similar corollaries.[Such results were obtained for α-FDT filters with specific graph constructions in <cit.>.] Assume that each W_j,k^(ℓ) is a spectral filter of the form W_j,k^(ℓ)=w_j,k^(ℓ)(ℒ) with w_j,k^(ℓ)_𝐋^∞([0,∞))≤ 1, and the matrices 𝐖_j,k are given by 𝐖_j,k^(ℓ)=w_j,k^(ℓ)(𝐋_n). As in Theorem <ref>, let A_1^(ℓ)=max_j,k(|∑_i=1^C_ℓ |θ_i,k^(ℓ,j)|), A_2^(ℓ)=max_j,k(∑_i=1^C_ℓ |α_j,i^(ℓ,k)|) and assume that σ is non-expansive, i.e. |σ(x)-σ(y)|≤ |x-y|. Let A^(ℓ)_maxLip=max_j,k,A_Lip(w^(ℓ)_j,k). Assume that there exist sequences of real numbers {α_n}_n=1^∞, {β_n}_n=1^∞, {γ_n}_n=1^∞, with lim_n→∞α_n=lim_n→∞β_n=lim_n→∞γ_n=0, such that |λ_i-λ^n_i|≤α_n, P_nϕ_i-ϕ_i^n_2≤β_n, |⟨ f, g ⟩_2 - ⟨ f,g⟩_𝐋^2(ℳ)| ≤γ_n^2fg_𝐋^∞(ℳ), Assume n is large enough such that (<ref>) holds and α_n,β_n,γ_nκ^1/2≤ 1. Then, the error in each channel of the ℓ-th layer satisfies 𝐱_k^ℓ-P_nf_k^ℓ_2≤∑_i=0^ℓ-1∏_j=i^ℓ-1 A_1^(j) A_2^(j) C_ℳκmax_k'((A^(i)_maxLipα_n+β_n)f^(i)_k'_𝐋^2(ℳ)+γ_nf^(i)_k'_𝐋^∞(ℳ)). In particular, if we assume that we have A_1^(j), A_2^(j), A^(i)_maxLip≤ 1, for all i and j we have 𝐱_k^ℓ-P_nf_k^ℓ_2≤ C_ℳκℓ((α_n+β_n)max_k',if^(i)_k'_𝐋^2(ℳ)+γ_nmax_k',if^(i)_k'_𝐋^∞(ℳ)). In <Ref>, we provided several examples of α_n, β_n, and γ_n for three graph constructions. Using <Ref>, we immediately obtain the following three corollaries giving rates of convergence for each of these constructions. Assume the same conditions on W_j,k^(ℓ), 𝐖_j,k, A_1^(ℓ), A_2^(ℓ), A^(ℓ)_maxLip, and σ as in <Ref>, and assume A_1^(j), A_2^(j), A^(i)_maxLip≤ 1. Assume an MFCN is implemented with a data-driven graph 𝐆_n constructed as in <Ref> with a Gaussian kernel. Then with probability 1 - 𝒪(1/n^9), for large enough n, the error in each channel of the ℓ-th layer of the MFCN satisfies 𝐱_k^ℓ-P_nf_k^ℓ_2≤ C_ℳκℓ(√(log(n))/n^2/(d+6)max_k',if^(i)_k'_𝐋^2(ℳ)+ (18log(n)/n)^1/4max_k',if^(i)_k'_𝐋^∞(ℳ)). Assume the same conditions on W_j,k^(ℓ), 𝐖_j,k, A_1^(ℓ), A_2^(ℓ), A^(ℓ)_maxLip, and σ as in <Ref>, and assume A_1^(j), A_2^(j), A^(i)_maxLip≤ 1. Assume an MFCN is implemented with a data-driven ϵ-graph 𝐆_n constructed as in <Ref>. Then with probability 1 - 𝒪(1/n^9), for large enough n, the error in each channel of the ℓ-th layer of the MFCN satisfies 𝐱_k^ℓ-P_nf_k^ℓ_2≤ C_ℳκℓ( ( log(n)/n )^1/d+4max_k',if^(i)_k'_𝐋^2(ℳ)+ (18log(n)/n)^1/4max_k',if^(i)_k'_𝐋^∞(ℳ)). Assume the same conditions on W_j,k^(ℓ), 𝐖_j,k, A_1^(ℓ), A_2^(ℓ), A^(ℓ)_maxLip, and σ as in <Ref>, and assume A_1^(j), A_2^(j), A^(i)_maxLip≤ 1. Assume an MFCN is implemented with a data-driven k-NN graph 𝐆_n constructed as in <Ref>. Then with probability 1 - 𝒪(1/n^9), for large enough n, the error in each channel of the ℓ-th layer of the MFCN satisfies 𝐱_k^ℓ-P_nf_k^ℓ_2≤ C_ℳκℓ( ( log(n)/n )^1/d+4max_k',if^(i)_k'_𝐋^2(ℳ)+ (18log(n)/n)^1/4max_k',if^(i)_k'_𝐋^∞(ℳ)). § NUMERICAL EXPERIMENTS In this section, we compare the performance of three different examples of manifold filter-combine networks on the ModelNet dataset<cit.>. 
In particular, we focus on the MNN with different learnable filters in each channel (DLF), the MCN, and the manifold scattering transform (Scattering) discussed in Examples <ref>, <ref>, and <ref>. The code for reproducing our experiments is available at <https://github.com/KrishnaswamyLab/mfcn>.
§.§ Data
We used the ModelNet10 dataset which consists of three-dimensional point clouds sampled from various objects belonging to the classes bathtub, bed, chair, desk, dresser, monitor, nightstand, sofa, table, and toilet. Examples of point clouds in the dataset are given in Figure <ref>. For each point cloud, we preprocess the data by scaling the point coordinates (z-scaling) and then randomly sample 100 points from the whole point cloud. We then create a graph via the constructions discussed in Examples <ref>, <ref>, and <ref>, i.e., Gaussian kernels (dense), ϵ-graphs, and unweighted k-NN graphs. We use the x, y, and z coordinates of the nodes as input signals. The ModelNet10 dataset comes with a predefined training set (3901 samples) and test set (799 samples). In our experiments, we randomly select 20% of the training set to use for validation. We then consider two regimes. In the full data regime, we use the entire remaining 80% for training. In the subset data regime, we randomly select 1000 samples from that 80% to use for training. We repeat this procedure five times and report our accuracies in the format mean ± std.
§.§ Models
In our experiments, we consider three manifold neural network architectures as described below. For each model, we used two layers of manifold networks, followed by a multi-layer perceptron classifier consisting of a single hidden layer. For further details of our hyperparameter settings and training procedures please see Table <ref> in Appendix <ref>.
Scattering We follow the experimental procedure utilized in <cit.> and compute zeroth-, first-, and second-order scattering moments. More specifically, for 0≤ j≤ J and 1≤ q≤ Q, we define first-order, q-th scattering moments by Sf[j,q]≔∫_ℳ|W_jf(x)|^qdx=W_jf_𝐋^q(ℳ)^q, where W_j are spectral wavelet filters corresponding to the functions w_j(λ)=e^{-2^{j-1}λ}-e^{-2^{j}λ} for 1≤ j≤ J and w_0(λ)=1-e^-λ. We define second-order moments, for 0≤ j<j'≤ J, by Sf[j,j',q]≔∫_ℳ|W_j'|W_jf(x)||^qdx=W_j'|W_jf|_𝐋^q(ℳ)^q. Zeroth-order moments are defined simply by Sf[q]≔∫_ℳ|f(x)|^qdx=f_𝐋^q(ℳ)^q. In our experiments, we set J=8, Q=4 and use the first 20 eigenvalues and eigenvectors of the graph Laplacian to implement the spectral wavelet filters.
DLF We used two layers of DLF, where each layer consists of J_ℓ spectral filters (J_1=16, J_2=32). After applying the J_ℓ filters to each input dimension, we combined the channels by summation (i.e., θ^(ℓ,j)_i,k = 1). As with scattering, we used the first 20 eigenvalues and eigenvectors of the Laplacian matrix to compute our filters. We used a ReLU activation and the identity map for the cross-channel convolution. We used average pooling at the last layer to obtain the feature vector to be processed by the classifier. We considered two parameterizations of the filters w(λ), one denoted DLF-MLP, where we parametrize each w(λ) as a 2-layer MLP, and the other denoted DLF-POLY, in which we parameterize each w(λ) as a degree-four polynomial of e^-λ (which is the parameterization utilized in, e.g., <cit.>).
MCN We used two layers of graph convolutional networks with J_ℓ (J_1=16, J_2=32) hidden dimensions applied to the input graph with ReLU activations.
As in <cit.>, our low-pass filter was implemented by 𝐀̂=(𝐃+𝐈)^-1/2(𝐀+𝐈)(𝐃+𝐈)^-1/2 which is equivalent to applying the spectral filter w(λ)=1-λ/2 to the normalized graph Laplacian and then utilizing a renormalization trick in order to facilitate the learning process. We used a ReLU activation and the identity map for the cross-channel convolution. We used average pooling at the last layer to obtain the feature vector to be processed by the classifier. §.§ Results We compared the performance of the different models and graph construction based on the classification accuracy on the left-out test set. In Table <ref>, we report the mean and standard deviation of the test accuracy across the five different splits (5-folds) for both the full and subset data regimes. All of the models consistently perform much better than random chance (which is roughly 10% accuracy since there are ten classes) but are all far from 100% accuracy. In particular, in the full data regime, accuracy levels range from 54% to 75% and from 44% to 70% in the subset data regime. Overall the two versions of DLF are the best performing methods, particularly on the Dense graphs and the Epsilon Graphs. We note that DLF-MLP outperforms DLF-POLY in four out of six cases, but has the drawback of requiring more parameters. On the k-NN graphs, MCN performs nearly as well as DLF, but is the least accurate method on the dense graph construction. Scattering is overall the lowest performing method. However, its performance is the least affected by the number of samples. For instance, on the dense graph construction, it loses four percentage points of accuracy compared to MCN and DLF which lose ten and nine points. This suggests that the wavelet filters are useful geometric descriptors, but that overly hand-crafted networks lack the flexibility to learn from data. § CONCLUSION We have introduced a new framework for analyzing and implementing manifold neural networks that we call manifold filter-combine networks. This framework naturally allows us to think about many interesting classes of MNNs such as the manifold analogs of GCNs and several relaxed variations of the manifold scattering transform. Additionally, we have provided methods for implementing such networks when one does not have global knowledge of the manifold, but merely has access to n sample points, that converge provably to their continuum limit as n→∞. In order to establish this result, we also prove a theorem establishing sufficient convergence conditions for the individual filters used in the network. This result is not specific to any particular graph construction. Instead, it shows that if the eigenvectors and eigenvalues of the graph Laplacian converge (and additionally that discrete inner products converge to continuum inner products) then spectral filters constructed from the graph Laplacian will converge as well. This allows our results to be applied to a wide variety of graph constructions including those discussed in Examples <ref>, <ref>, and <ref>. The flexibility of our setup is deliberate. The development of manifold neural networks is in its infancy, even compared to graph neural networks, and there are many questions about which networks will perform best in practice. Should networks use learnable filter banks similar to a CNN or predesigned averaging operations similar to a common aggregate-combine network? Are cross-channel convolutions a viable way to introduce learning in settings where there are no nontrivial relations between input channels? 
In this work, we do not claim to provide an answer to the question “what are the best ways to design a manifold neural network?" which ultimately will need to be answered through thorough experimentation. The purpose of this paper is instead to facilitate this experimentation by providing a useful framework for thinking about MNNs. We also note several other important areas of future work. (i) In examples <ref>, <ref>, and <ref>, we consider settings where the data points {x_i} lie exactly on the manifold and are sample i.i.d. uniformly at random. Relaxing these assumptions would greatly increase the applicability of our theory to noisy real-world data. (ii) Most of the data sets used in the MNN literature focus on two-dimensional surfaces. Developing challenging and relevant benchmarks for learning on higher-dimensional manifolds would help facilitate the experimental exploration of various MNN architectures. § ACKNOWLEDGEMENT The authors thank Luana Ruiz for helpful discussion that greatly improved the quality of our exposition. plain § THE PROOF OF THEOREM <REF> We first note that if either w or f is κ bandlimited, we have w(𝐋_n)P_nf-P_nw(ℒ)f_2 = ∑_i=1^κ w(λ_i^n)⟨ P_nf,ϕ_i^n⟩_2ϕ_i^n - ∑_i=1^κ w(λ_i)⟨ f,ϕ_i⟩_ℳP_nϕ_i_2 ≤ ∑_i=1^κ (w(λ_i^n)-w(λ_i))⟨ P_nf,ϕ_i^n⟩_2ϕ_i^n_2+∑_i=1^κ w(λ_i)(⟨ P_nf,ϕ_i^n⟩_2ϕ_i^n- ⟨ f,ϕ_i⟩_ℳP_nϕ_i)_2. To bound the first term from (<ref>), we note that by the triangle inequality, the Cauchy-Schwarz inequality, and the assumption that n is large enough so that α_n≤ 1, we have ∑_i=1^κ (w(λ_i^n) - w(λ_i))⟨ P_nf,ϕ_i^n⟩_2ϕ_i^n_2 ≤ max_1≤ i ≤κ |w(λ_i^n)- w(λ_i)| ∑_i=1^κP_n f_2 ϕ_i^n^2_2 ≤ A_Lip(w)α_n ∑_i=1^κP_n f_2 ϕ_i^n^2_2 ≤ A_Lip(w)κα_n P_n f_2 ≤ A_Lip(w)κ(α_n f_𝐋^2(ℳ)+γ_nf_𝐋^∞(ℳ)), where we use the fact that ϕ_i^n_2^2=1 and that P_nf_2≤(f_𝐋^2(ℳ)^2 + γ_n^2f_𝐋^∞(ℳ)^2)^1/2≤f_𝐋^2(ℳ)+γ_nf_𝐋^∞(ℳ). Now, turning our attention to the second term from (<ref>), we have ∑_i=1^κ w(λ_i)(⟨ P_nf,ϕ_i^n⟩_2ϕ_i^n- ⟨ f,ϕ_i⟩_𝐋^2(ℳ)P_nϕ_i)_2 ≤ ∑_i=1^κ w(λ_i)⟨ P_nf,ϕ_i^n⟩_2(ϕ_i^n-P_nϕ_i)_2 +∑_i=1^κ w(λ_i)(⟨ P_nf,ϕ_i^n⟩_2- ⟨ f,ϕ_i⟩_𝐋^2(ℳ)P_nϕ_i_2. By the assumption (<ref>), we have ϕ_i^n-P_nϕ_i_2≤β_n. Therefore, since w non-amplifying, we see ∑_i=1^κ w(λ_i)⟨ P_nf,ϕ_i^n⟩_2(ϕ_i^n-P_nϕ_i)_2 ≤κmax_1≤ i≤κ |⟨ P_nf,ϕ_i^n⟩_2|ϕ_i^n-P_nϕ_i_2 ≤κβ_nP_nf_2 ≤κβ_n (f_𝐋^2(ℳ)+γ_nf_𝐋^∞(ℳ) ), where the final inequality follows from (<ref>). Meanwhile, the second term from (<ref>) can be bounded by ∑_i=1^κ w(λ_i)(⟨ P_nf,ϕ_i^n⟩_2- ⟨ f,ϕ_i⟩_ℳ)P_nϕ_i_2 ≤ ∑_i=1^κ |w(λ_i)| |⟨ P_nf,ϕ_i^n⟩_2- ⟨ f,ϕ_i⟩_ℳ|P_nϕ_i_2 ≤ ∑_i=1^κ |⟨ P_nf,ϕ_i^n⟩_2- ⟨ f,ϕ_i⟩_ℳ|P_nϕ_i_2 ≤ ∑_i=1^κ |⟨ P_nf,ϕ_i^n⟩_2-⟨ P_nf,P_nϕ_i⟩_2|P_nϕ_i_2+∑_i = 1^κ |⟨ P_nf,P_nϕ_i⟩_2- ⟨ f,ϕ_i⟩_ℳ|P_nϕ_i_2. By the Cauchy-Schwarz inequality, (<ref>), (<ref>), and the assumption that n is large enough so that β_n≤ 1, we have |⟨ P_nf,ϕ_i^n⟩_2-⟨ P_nf,P_nϕ_i⟩_2| ≤ P_nf_2 ϕ_i^n-P_nϕ_i_2≤β_n(f_𝐋^2(ℳ)+γ_nf_𝐋^∞(ℳ))≤(β_nf_𝐋^2(ℳ)+γ_nf_𝐋^∞(ℳ)). And also by (<ref>) we have |⟨ P_nf,P_nϕ_i⟩_2- ⟨ f,ϕ_i⟩_2| ≤γ_n^2f_𝐋^∞(ℳ)ϕ_i_𝐋^∞(ℳ), and P_nϕ_i_2≤ 1+γ_nϕ_i_𝐋^∞(ℳ). It is known (see, e.g., Appendix L of <cit.> and the references there) that ϕ_i_𝐋^∞(ℳ)≤ C_ℳ i^(d-1)/2d≤ C_ℳi^1/2. Therefore, for all i≤κ the assumption that n is large enough that γ_nκ^1/2≤ 1 implies |⟨ P_nf,P_nϕ_i⟩_2- ⟨ f,ϕ_i⟩_2| ≤ C_ℳγ^2_nκ^1/2f_𝐋^∞(ℳ)≤ C_ℳγ_n, and P_nϕ_i_2≤ 1+γ_n κ^1/2≤ 2. 
Therefore, if n is large enough such that γ_nκ^1/2<1, then the second term from (<ref>) can be bounded by ∑_i=1^κ w(λ_i)(⟨ P_nf,ϕ_i^n⟩_2 -⟨ f,ϕ_i⟩_ℳ)P_nϕ_i_2 ≤ ∑_i=1^κ |⟨ P_nf,ϕ_i^n⟩_2-⟨ P_nf,P_nϕ_i⟩_2|P_nϕ_i_2 +∑_i=1^κ|⟨ P_nf,P_nϕ_i⟩_2- ⟨ f,ϕ_i⟩_2|P_nϕ_i_2 ≤ ∑_i=1^κ(β_nf_𝐋^2(ℳ)+γ_nf_𝐋^∞(ℳ))P_nϕ_i_2 +∑_i=1^κ C_ℳγ_nf_𝐋^∞(ℳ)P_nϕ_i_2 ≤ C_ℳ(κ(β_nf_𝐋^2(ℳ)+γ_nf_𝐋^∞(ℳ)) + γ_nκf_𝐋^∞(ℳ)) ≤ C_ℳκ( β_nf_𝐋^2(ℳ)+γ_nf_𝐋^∞(ℳ)). Therefore, combining Equations (<ref>) through (<ref>) yields w(𝐋_n)P_nf-P_nw(ℒ)f_2 ≤ ∑_i=1^κ (w(λ_i^n) - w(λ_i))⟨ P_nf,ϕ_i^n⟩_2ϕ_i^n_2+∑_i=1^κ w(λ_i)(⟨ P_nf,ϕ_i^n⟩_2ϕ_i^n-⟨ f,ϕ_i⟩_ℳP_nϕ_i)_2 ≤ A_Lip(w)κ (α_nf_𝐋^2(ℳ)+γ_nf_𝐋^∞(ℳ))+ C_ℳ(κβ_n(f_𝐋^2(ℳ)+γ_nf_𝐋^∞(ℳ)) + γ_nκf_𝐋^∞(ℳ)) ≤ C_ℳκ((A_Lip(w)α_n+β_n)f_𝐋^2(ℳ)+γ_nf_𝐋^∞(ℳ)) thus completing the proof of (<ref>). To prove (<ref>), we observe that since w_𝐋^∞([0,∞)), we have w(𝐋_n)𝐱-w(𝐋_n)P_nf_2 ≤𝐱-P_nf_2 by the same reasoning as (<ref>). Therefore, by the triangle inequality, we have w(𝐋_n)𝐱-P_nw(ℒ)f_2 ≤ w(𝐋_n)𝐱-w(𝐋_n)P_nf_2 + w(𝐋_n)P_nf-P_nw(ℒ)f_2 ≤ 𝐱-P_nf_2 + C_ℳκ((A_Lip(w)α_n+β_n)f_𝐋^2(ℳ)+γ_nf_𝐋^∞(ℳ)) as desired. § THE PROOF OF THEOREM <REF> In order to prove Theorem <ref>, we need the following lemma which bounds the error in each step. The errors induced by the non-filtering steps of our network may be bounded by 𝐲_j,k^(ℓ)-P_ng_j,k^(ℓ)_2 ≤max_1≤ i≤ C_ℓ𝐱̃^(ℓ)_j,k-P_nf̃^(ℓ)_j,k_2∑_i=1^C_ℓ |θ_i,k^(ℓ,j)|, 𝐲̃_j,k^(ℓ)-P_ng̃_j,k^(ℓ)_2 ≤max_1≤ i≤ J_ℓ𝐲^(ℓ)_j,k-P_n g^(ℓ)_j,k_2∑_i=1^J_ℓ |α_j,i^(ℓ,k)|. 𝐳^(ℓ)_j,k-P_nh^(ℓ)_j,k_2 ≤𝐲̃^(ℓ)_j,k-P_ng̃^(ℓ)_j,k_2 To verify (<ref>), we observe that 𝐲_j,k^(ℓ)-P_ng_j,k^(ℓ)_2 =∑_i=1^C_ℓ𝐱̃^(ℓ)_j,kθ_i,k^(ℓ,j)-P_nf̃^(ℓ)_j,kθ_i,k^(ℓ,j)_2 ≤∑_i=1^C_ℓ|θ_i,k^(ℓ,j)|𝐱̃^(ℓ)_j,k-P_nf̃^(ℓ)_j,k_2 ≤max_1≤ i≤ C_ℓ𝐱̃^(ℓ)_j,k-P_nf̃^(ℓ)_j,k_2∑_i=1^C_ℓ |θ_i,k^(ℓ,j)|. The proof of (<ref>) is identical to the proof of (<ref>). For (<ref>), we see that since σ is non-expansive we have 𝐳^(ℓ)_j,k-P_nh^(ℓ)_j,k^2_2 =∑_i=1^n| (𝐳^(ℓ)_j,k)(i)-(P_nh^(ℓ)_j,k)(i)|^2 =∑_i=1^n| (𝐳^(ℓ)_j,k)(i)-h^(ℓ)_j,k(x_i)|^2 =∑_i=1^n| σ((𝐲̃^(ℓ)_j,k)(i))-σ(g̃^(ℓ)_j,k(x_i))|^2 ≤∑_i=1^n| (𝐲̃^(ℓ)_j,k)(i)-g̃^(ℓ)_j,k(x_i)|^2 =𝐲̃^(ℓ)_j,k-P_ng̃^(ℓ)_j,k^2_2. It follows from the definition of the reshaping operator that max_k𝐱_k^(ℓ+1)-P_nf_k^(ℓ+1)_2 = max_j,k𝐳^(ℓ)_p,r-P_nh^(ℓ)_p,r_2. Therefore, by Lemma <ref> we have max_k𝐱_k^(ℓ+1)-P_nf_k^(ℓ+1)_2 = max_j,k𝐳^(ℓ)_p,r-P_nh^(ℓ)_p,r_2. ≤ P_ng̃^(ℓ)_j,k-𝐲̃^(ℓ)_j,k_2. ≤ A^(ℓ)_2max_j,kP_n g^(ℓ)_j,k-𝐲^(ℓ)_j,k_2 ≤ A^(ℓ)_2A^(ℓ)_1max_j,kP_n f̃^(ℓ)_j,k-𝐱̃^(ℓ)_j,k_2 ≤ A^(ℓ)_2A^(ℓ)_1(max_k𝐱_k^(ℓ)-P_nf_k^(ℓ)_2 +ϵ_ℓ,n) Since 𝐱_0^(ℓ)-P_nf^(0)_k_2=0 for all k, we may use induction to conclude that 𝐱^(ℓ)_k-P_nf^(ℓ)_k_2≤∑_i=0^ℓ-1∏_j=i^ℓ-1 A_1^(j) A_2^(j)ϵ_i,n. § TRAINING AND IMPLEMENTATION DETAILS We trained all three models by minimizing the cross-entropy loss between predicted probabilities for each of the 10 categories and the ground truth category of each point cloud. We used the Adam optimizer for 200 epochs with a batch size of 32. The learning rate was selected according to validation performance and was chosen among 0.01 and 0.001. For each model, we used two layers of manifold networks, followed by a multi-layer perceptron classifier consisting of a single hidden layer. The hyper-parameters specific to each model and graph construction scheme are given in Table <ref>.
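As a rough illustration of the training setup described above (a sketch under our own assumptions about the model and data loader objects, not the authors' code), the loop below applies the stated configuration: cross-entropy loss, the Adam optimizer, 200 epochs, batches of 32, and a learning rate chosen from {0.01, 0.001} by validation performance.

```python
# Illustrative sketch (ours) of the training configuration described above.
import torch

def train(model, train_loader, lr=0.001, epochs=200):
    # `model` is assumed to be a manifold network followed by a one-hidden-layer
    # MLP classifier; `train_loader` yields (features, labels) batches of size 32.
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = torch.nn.CrossEntropyLoss()
    for epoch in range(epochs):
        for features, labels in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(features), labels)
            loss.backward()
            optimizer.step()
    return model
```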
http://arxiv.org/abs/2307.07319v1
20230714125151
The Power of Large Language Models for Wireless Communication System Development: A Case Study on FPGA Platforms
[ "Yuyang Du", "Soung Chang Liew", "Kexin Chen", "Yulin Shao" ]
eess.SP
[ "eess.SP" ]
[]+0.575cmThe Power of Large Language Models for Wireless Communication System Development: A Case Study on FPGA Platforms Yuyang Du^a, Soung Chang Liew^a*, Kexin Chen^a, Yulin Shao^b ^a The Chinese University of Hong Kong, Hong Kong SAR, China ^b University of Exeter, Exeter, U.K. *Corresponding author: S. C. Liew ([email protected]). August 12, 2023 =============================================================================================================================================================================================================================================== Large language models (LLMs) have garnered significant attention across various research disciplines, including the wireless communication community. There have been several heated discussions on the intersection of LLMs and wireless technologies. While recent studies have demonstrated the ability of LLMs to generate hardware description language (HDL) code for simple computation tasks, developing wireless prototypes/products via HDL poses far greater challenges because of the more complex computation tasks involved. In this paper, we aim to address this challenge by investigating the role of LLMs in FPGA-based hardware development for advanced wireless signal processing. We begin by exploring LLM-assisted code refactoring, reuse, and validation, using an open-source software-defined radio (SDR) project as a case study. Through the case study, we find that an LLM assistant can potentially yield substantial productivity gains for researchers and developers. We then examine the feasibility of using LLMs to generate HDL code for advanced wireless signal processing, using the Fast Fourier Transform (FFT) algorithm as an example. This task presents two unique challenges: the scheduling of subtasks within the overall task and the multi-step thinking required to solve certain arithmetic problem within the task. To address these challenges, we employ in-context learning (ICL) and Chain-of-Thought (CoT) prompting techniques, culminating in the successful generation of a 64-point Verilog FFT module. Our results demonstrate the potential of LLMs for generalization and imitation, affirming their usefulness in writing HDL code for wireless communication systems. Overall, this work contributes to understanding the role of LLMs in wireless communication and motivates further exploration of their capabilities. FPGA, Verilog, large language models, wireless communication, prototype § INTRODUCTION The emergence of large language models (LLMs) has garnered significant attention within the research community. Scholars from diverse scientific disciplines are intrigued by LLMs due to their potential to transcend the realm of natural language processing (NLP). Researchers in the wireless communication field have also taken notice of this trend, leading to a series of insightful discussions in the community <cit.>. Recent studies indicated that LLMs possess the ability to generate hardware description language (HDL), such as Verilog, thereby painting a promising picture of LLMs aiding researchers and hardware engineers in the foreseeable future. The advancement of LLM-written HDL is particularly encouraging for the wireless communication community, given that many wireless products and software-defined radio (SDR) prototypes rely on field programmable gate arrays (FPGA) platforms that are programmed using HDL. However, carrying out a complex FPGA project is challenging. 
Prior works on LLM-assisted Verilog programming only reported simple hardware logics, such as those for shift register and multi-function calculators with addition/subtraction/multiplication/division (see Section <ref>-C for a detailed analysis). These operations are only small atomic computational tasks, whereas the algorithms for signal processing blocks in wireless communication are far more complex. Knowing only the Verilog code for these atomic operations does not guarantee successful Verilog code for the higher-level signal processing blocks. This raises a critical question: Can LLMs assist in the development of FPGA projects involving intricate wireless signal-processing algorithms? This question remains open and calls for further investigation. This paper presents our endeavors in addressing this compelling issue, contributing to the growing discourse on the potential intersection between LLMs and wireless communication technologies. This paper contributes to two pertinent topics. In our first contribution, we investigate the potential of LLMs as a valuable tool for wireless researchers engaged in FPGA-based prototype/product development. To this end, we examine an open-source, FPGA-based SDR project <cit.> as a case study. We thoroughly analyze the project's code and conduct comprehensive experiments to explore the extent to which an LLM can assist in implementing a wireless system. Our examination identifies three pivotal uses of LLMs: code refactoring, code reuse, and code validation. These uses of LLM, while seemingly mundane, are indispensable in hardware development and point towards the capacity of LLMs to substantially amplify researchers' productivity and expedite their research and development process. In our second contribution, we delve into the possibility of employing LLMs to generate sophisticated HDL code for advanced signal processing algorithms required in wireless communication. To illustrate this, we focus on the Fast Fourier Transform (FFT) algorithm as an example. We emphasize that utilizing LLMs to implement FFT in HDL presents significantly more formidable challenges compared to generating code in commonly used languages such as C, Python, or MATLAB (see Section <ref>-A for a detailed discussion). The first challenge we encountered in using LLMs to generate HDL code for FFT is the scheduling of subtasks inside the FFT module. In general, a complex computation task like FFT can be broken down into a connection of smaller subtasks <cit.>. In high-level languages like C, compilers and operating systems (OSs) can schedule the processing of subtasks to hardware processors in a computer so that programmers do not need to worry about the scheduling issue. However, HDL programming requires designers to interface directly with the hardware without the guidance provided by compilers or OSs. This necessitates a meticulous consideration of parallelism and the precedence relationship between subtasks in HDL programming, a nuance that we have observed to be lacking in current LLMs like ChatGPT. Consequently, using similar prompts as in previous works <cit.> cannot yield a workable FFT for more than four points. To address the challenge, our prompt design applies in-context learning (ICL). This few-shot learning technique enables an LLM to rapidly learn from the additional examples we give it about the parallelism/precedence inside a small-scale FFT (say four-point or eight-point), on which it has not been previously trained. 
The second challenge is the limited multi-step thinking ability of LLM. Some recent works <cit.> have reported that LLMs do not perform well when given a complex multi-step task, as they cannot decompose the problem as a human engineer can. In an FFT module, however, there are some complex processings that need to be decomposed into multiple steps, say calculating the twiddle factors and expressing them in the form of signed binary numbers. To augment the multi-step problem-solving ability of LLMs with that of human engineers, we exploit Chain-of-Thought (CoT) prompting. This method teaches LLMs how to approach complex multi-step problems in a way that mimics human thinking, enabling them to handle more complex tasks with greater accuracy and efficiency. By incorporating the latest research outcomes in NLP (i.e., ICL and CoT prompting) into the FPGA implementation of complex wireless communication algorithms, we achieved a remarkable milestone in this paper: the successful generation of a 64-point Verilog FFT module using LLM. To the best of our knowledge, this is the first LLM-written complex HDL module ever reported in the field. More importantly, our explorations provide valuable insights into the understanding of LLMs: * LLMs demonstrate remarkable generalization abilities. They can realize sophisticated iterative wireless communication algorithms in HDL, provided that all ambiguities are effectively addressed during the instructional phase. * LLMs exhibit a strong ability to imitate. Once taught the problem-solving approach of a human, they can comprehend complex calculations. These insights highlight the potential of LLMs and pave the way for leveraging LLMs to write HDL code for wireless communication building blocks. § BACKGROUND AND RELATED WORKS §.§ Large Language Models (LLMs) LLMs leverage the transformer architecture <cit.>. Early research like BERT <cit.> and GPT-2 <cit.> paved the way for today's boom. But it was the advent of GPT-3 <cit.> and its successors that brought the public's attention to the potential of these models. Today, the landscape is diverse, featuring numerous LLMs, with an array of options for both general and task-specific applications. Despite their variations, all LLMs share core characteristics. First, they all serve as “scalable sequence prediction models" <cit.>, meaning that they generate the “most probable" continuation of an input prompt. Second, LLMs operate on “tokens", which are commonplace character sequences specified through byte pair encoding. This method allows efficient data handling within the constraint of LLMs' fixed context size. By operating over tokens instead of characters, LLMs can process more text. For instance, in OpenAI's models, each token corresponds to roughly 4 characters, and the context windows can accommodate up to 8,000 tokens. §.§ FPGA-based SDR Development in Wireless Communication Software-defined radios (SDRs) provide a flexible and cost-effective solution to adapt equipment to the fast-evolving wireless communication standards and serve various research purposes. Instead of hardware-centric traditional radios, where different hardware is required to process different signals, SDRs allow the functionalities of a radio system to be defined and altered through software, making it possible to support multiple standards and applications with a single platform. Thanks to its parallel processing capabilities, reprogrammability, and high performance, FPGA plays a pivotal role in SDR development. 
It provides an efficient platform for implementing complex, computation-intensive signal processing algorithms essential in SDRs. The application of FPGA-based SDRs applications span across various domains, from cellular networks, WiFi, and satellite communications to specialized underwater communication systems. FPGA programming involves using an HDL to describe digital circuitry, ranging from simple combinational circuits to intricate sequential circuits and more complex systems-on-chip (SoCs). There are two standard HDLs for FPGA: VHDL and Verilog, with this paper using Verilog as an example. One of the most widely recognized open-source FPGA-based SDR projects in recent years is OpenWiFi <cit.>, which aims to provide a fully software-defined, reprogrammable WiFi networking solution via Verilog programming, and it has attracted much research interest. OpenWiFi runs on a high-performance Xilinx FPGA board, which provides ample hardware resources to implement the IEEE 802.11 standards. §.§ LLM for Hardware Design Recent research has shown a growing interest in harnessing the capabilities of LLMs to assist researchers and engineers in hardware design. In <cit.>, the authors employed GitHub Copilot to scrutinize the incidence rates of six types of Verilog bugs. Following this, <cit.> and <cit.> investigated the potential for automated bug repairs with LLMs' assistance. This trend is not limited to academia, as industry players such as RapidSilicon are promoting their upcoming LLM-assisted tool for hardware design, called RapidGPT <cit.>. Beyond assisting researchers and engineers, recent studies also show interest in replacing human HDL programmers with LLMs. Initial efforts in this direction were documented in <cit.>, where a fine-tuned GPT-2 model was trained with synthetically generated Verilog snippets. However, the limited generalization ability to unfamiliar tasks was a notable shortcoming of <cit.>. Subsequent research <cit.> expanded on this concept by investigating various strategies for fine-tuning Verilog-writing models. More recently, two studies delved into LLMs' applications in chip design. The former, Chip-Chat <cit.>, employed the latest LLM to design an 8-bit shift register chip; while the latter, ChipGPT <cit.>, focused on the power-performance-area (PPA) optimization of an LLM-composed chip design. In contrast to the aforementioned works, this paper makes significant contributions from two key aspects. In comparison to the first category of works <cit.>, the first contribution of this paper involves using LLMs to assist FPGA development and offers a comprehensive study of the role of LLMs in facilitating the entire FPGA development process, beyond merely bug fixing. We investigate previously unexplored areas, including validation and maintenance issues, and our research includes the first LLM study that further refines a real-time communication system, OpenWiFi, that has already undergone rigorous validation and practical demonstrations previously. In comparison to the second category of works <cit.>, our second contribution focuses on utilizing LLMs to write HDL code specifically for wireless communication hardware, involving significantly more complex signal-processing algorithms than those addressed in prior research. Earlier LLM-written HDL codes, as presented in <cit.> and <cit.>, were confined to simple signal processing no harder than undergraduate-level assignments, as acknowledged by the authors in their subsequent work <cit.>. 
Although <cit.> and <cit.> advanced the complexity by tackling an 8-bit shift register and a multi-functional calculator, these tasks still fall short in complexity compared to communication algorithms such as FFT. As a result of tackling more demanding coding tasks, we encountered two challenges that had not been previously reported in prior works: the subtask scheduling challenge and the multi-step thinking challenge. Addressing these challenges necessitates the use of advanced methods, namely in-context learning (ICL) and chain-of-thought (CoT) prompting, which have never been considered in prior works <cit.>. § USING LLMS TO ASSIST FPGA DEVELOPMENT This section delves into the various tasks frequently encountered when prototyping wireless systems on FPGAs. Our contribution lies in the proposition of utilizing LLMs to amplify implementation efficiency and productivity in the realm of wireless communication research. By refining OpenWiFi <cit.>, a well-known open-source FPGA-based SDR project, we not only provide valuable insights and practical experiences but also pave the way for AI-assisted SDR development on FPGA. The advantages of harnessing LLMs in hardware development are prominently demonstrated across three pivotal dimensions: 1) Code Refactoring, 2) Code Reuse, and 3) Code Validation. §.§ Code Refactoring Improving code quality is crucial in FPGA design, as even functioning code may still benefit from further enhancements <cit.>. Code refactoring is a routine task for FPGA engineers, involving manual review and editing. Recent research works suggest that artificial intelligence (AI) can assist in scanning and revising code <cit.>. This subsection demonstrates the competence of LLMs in code refactoring and shows how LLMs can offer valuable assistance to engineers in this kind of work. To illustrate this, we choose a simple signal delay module from OpenWiFi and showcase how an LLM can improve the code in terms of readability, efficiency, and reliability. Fig. <ref> presents the original code, while Fig. <ref> presents our prompt for ChatGPT. The first objective of code refactoring is to improve its readability. A Verilog project that is easy to read and understand facilitates easier future maintenance. Key characteristics of well-written code include consistent programming style, meaningful module/variable names, and sufficient comments that clearly explain code functionality. However, in practice, code contributors collaborating on a project may have their own programming styles. Despite widely accepted programming standards and additional coding requirements within a development team, poorly written code can still present challenges in terms of readability and maintainability. LLMs offer a consistent programming style, thus aiding in unifying the code within the same FPGA project written by diverse coders. Additionally, LLMs possess the intelligence to address naming and commenting issues effectively. In our experiments, ChatGPT provides readability suggestions from three distinct perspectives. First, ChatGPT recommends using meaningful names for modules and variables. For instance, in Line 1, the original module name “DelayT" can be replaced with “DelayBuffer" for clarity. Similarly, in Line 15, variable “i" can be renamed to “index" for improved understanding. Second, ChatGPT identifies redundant code within the “always" block and suggests shortening it to enhance readability. The revised code, presented in Lines 17 to 21 of Fig. 
<ref>, demonstrates the reduction in code length while maintaining readability. Lastly, ChatGPT automatically adds detailed comments to assist readers in better comprehending the code. These comments are also displayed in Fig. <ref>. The second objective of code refactoring is to enhance efficiency. Efficiency in hardware language differs significantly from that in software languages like C or Python. In hardware projects, designers must consider the physical implementation of the hardware within the chip after synthesis. For example, as highlighted by ChatGPT, the code within the “always" block is realized as a shift register in the FPGA. However, this implementation would be wasteful of resources if the function of the IP core is configured to delay the signal for a relatively long time, say more than 10 clock periods. A more efficient approach to implement delay function is using a read/write counter and a block RAM (mature RAM IP cores are readily available). Fig. <ref> does not revise the code based on this suggestion, as it would involve a fundamental redesign of the module. However, we consider this comment to be of significant importance for achieving highly efficient hardware processing in OpenWiFi, especially considering that the delay module is frequently reused in their designs. The third objective of code refactoring is to enhance reliability. Some code may appear to work well in a design simply because the bugs within it are not triggered. For instance, a flawed design may cause problems when the operating voltage or clock frequency is high. Additionally, advanced HDL design tools may automatically correct bugs during the synthesis stage or the place-and-route stage. However, these design tools could not revise the code itself, leaving underlying problems unresolved (although they may not manifest in the final output). Identifying and addressing such problems can be challenging as they produce the correct output at the moment but may cause trouble if triggered in the future, particularly when reusing the code on a different hardware platform with a higher operating voltage or if the HDL design tool changes. Although no one can guarantee bug-free code, and such issues are common in practice (sometimes referred to as “features" rather than “bugs"), avoiding such mistakes during the coding stage is crucial for reliable hardware design. In light of the Verilog code and the prompt provided, ChatGPT highlights two severe problems that could lead to potential system instability. First, ChatGPT suggests adding “wire" data type specifications to the “input" and “output" ports. We consider this comment valuable. Although a Verilog synthesizer can infer the data type of these ports and apply default settings, it is good practice to explicitly state them to be sure. Second, ChatGPT recommends including the negative edge of active-low reset signals (i.e., reset_n) in the sensitivity list of the “always" block. This modification offers two advantages: 1) a digital circuitry utilizing an active-low reset signal is less likely to being erroneously triggered by noise compared to those employing an active-high reset signal <cit.>; 2) asynchronous reset is more reliable because the system can respond immediately upon detecting an error, without waiting for the rising edge of the clock signal <cit.>. Based on the authors' experience in the IC industry, “asynchronous active-low reset" is a widely adopted programming standard. 
And we believe this is a crucial issue overlooked by the developers of OpenWiFi. In conclusion, through this and similar exercises, we have gained substantial confidence in asserting that LLMs can serve as valuable assistants in improving and refactoring Verilog codes. §.§ Code Reuse Reusing mature designs is a common approach for efficient FPGA development. Highly configurable code allows for easy reuse by simple parameter adjustments. For example, if we apply the default settings for the Verilog module presented in Fig. <ref>, it can delay a 32-bit signal by one clock period. However, if we intend to use the same code for a 64-bit signal and require a delay of four clock periods, we simply need to set the parameters DATA_WIDTH and DELAY to 64 and 4, respectively. In realistic engineering scenarios, it is true that not all codes are written in a parameterized manner. This can lead to a large workload when attempting to customize the code for specific requirements. For instance, Fig. <ref> shows the complex multiplier module used in OpenWiFi, where the code is specifically designed for input signals with a 16-bit data width. If there is a need to enhance the calculation precision to 32 bits for better signal processing accuracy, it is necessary to invest time to rewrite the code. This process can be tedious and time-consuming, requiring careful modifications and adjustments. We found that LLMs can provide significant assistance in such tasks. In Fig. <ref> below, we present the prompt used for the parameterization job, and in Fig. <ref>, we show the new code generated by ChatGPT. As seen, ChatGPT successfully parameterizes the code while maintaining its correctness. The revised code is versatile and capable of accommodating diverse data widths and latency requirements by adjusting the module's parameter settings. By applying this method to modify the codes in OpenWiFi, we can enhance its user-friendliness for reuse and extensions. This approach facilitates easier customization and adaptation of the project to different specifications and requirements. §.§ Code Validation In FPGA development, code validation is a routine task that engineers undertake to ensure the correctness of the code. This typically involves writing a testbench and attempting to cover a wide range of input possibilities. This subsection points out a potential shift in the future: we may not need to invest extensive time in testbench development, as LLMs can assist in generating rigorous testbenches with comprehensive input coverage. Fig. <ref> presents the prompt we used to generate the testbench for the complex multiplier given in Fig. <ref>. The testbench code generated by ChatGPT is shown in Fig. <ref>. It is evident from the code that ChatGPT produces a well-structured testbench, incorporating all the necessary elements. We test the revised complex multiplier with the generated testbench, and the result indicates that both the revised module and the testbench are error-free. Additionally, ChatGPT can also provide the expected outputs for each input it generated, which can further assist in the code validation process. For example, with the prompt presented in Fig. <ref>, we obtain more potential inputs for the testbench, and ChatGPT also outputs the corresponding calculation result for each input. This greatly simplifies the validation process. 
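As a rough illustration of the testbench structure discussed above, the following sketch drives a complex multiplier with one directed stimulus and prints the result. The DUT interface (port names, data widths, and latency) is assumed for this example and does not correspond exactly to the OpenWiFi module; a generated testbench would normally also sweep corner cases such as maximum-magnitude and negative inputs.

`timescale 1ns/1ps
module tb_complex_mult_sketch;
    reg                clk = 1'b0;
    reg  signed [15:0] a_re, a_im, b_re, b_im;
    wire signed [31:0] p_re, p_im;

    // Device under test; the port names are assumptions for this sketch.
    complex_mult dut (
        .clk(clk),
        .a_re(a_re), .a_im(a_im),
        .b_re(b_re), .b_im(b_im),
        .p_re(p_re), .p_im(p_im)
    );

    always #5 clk = ~clk;  // 100 MHz clock

    initial begin
        // Directed stimulus: (1 + 2i) * (3 + 4i) = -5 + 10i
        a_re = 16'sd1; a_im = 16'sd2; b_re = 16'sd3; b_im = 16'sd4;
        // Wait long enough to cover the multiplier's pipeline latency.
        #50;
        $display("p_re=%0d p_im=%0d (expected -5 and 10)", p_re, p_im);
        $finish;
    end
endmodule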
§ CHALLENGES IN IMPLEMENTING COMPLEX SIGNAL-PROCESSING ALGORITHM IN VERILOG Although previous research <cit.> has demonstrated LLMs' ability in generating basic hardware modules, such as shift registers or dice rollers (as discussed in Section <ref>), employing LLMs to generate HDL code for advanced signal-processing algorithms remains unexplored. To bridge this gap, the following two sections present our efforts in pushing the knowledge boundary of LLMs. Specifically, we delve into utilizing LLMs to generate complex HDL code for advanced wireless communication algorithms, going beyond the simple code refinement task discussed in Section <ref>. As a case study, we focus on FFT, a complex yet vital component in wireless communication hardware. In this section, we highlight two challenges when employing LLMs to generate Verilog code for the FFT module, namely the “subtask scheduling" problem and the “multi-step thinking" problem. In the next section, we present our approaches to address these challenges through the utilization of in-context learning (ICL) and Chain-of-Thought (CoT) prompting techniques. The analysis in these two sections provides valuable insights to effectively leverage LLMs for generating complex HDL code specifically tailored for iterative signal-processing algorithms like FFT. §.§ Verilog Code Generated by ChatGPT Let us begin with a conversation with ChatGPT. In our experiments, we used the prompt given in Fig. <ref> for multiple tries and found that ChatGPT was unable to generate the code for a 64-point FFT module. Specifically, the AI either provided a general framework for the FFT module, requiring additional manual input to complete the specific code, or offered an implementation limited to a trivial two-point FFT. We invite readers to personally engage with this exercise, as it offers a firsthand understanding of the current capabilities of ChatGPT in this particular application. Upon the failure, we scaled down the complexity of the task to generating an eight-point FFT module instead of a 64-point FFT module. ChatGPT managed to create code that seemed correct and professional (as depicted in Fig. <ref>). However, despite its polished appearance, the generated code failed to function as expected and did not pass the FFT testbench. §.§ Challenges One: The Subtask Scheduling Problem The first issue we identify in the generated code is that ChatGPT lacks an essential understanding of task scheduling and sequential control. In general, complex computational tasks can be broken down into a series of simpler subtasks, among which parallel and precedence relationships often exist. For example, in the code provided in Fig. <ref>, we observe that the FFT computation can be decomposed into numerous butterfly computations and complex multiplications (as depicted from Line 38 to 59). Some subtasks, such as butterfly computations from the same FFT stage, can be executed concurrently. However, other tasks, like butterfly computations of successive stages, have a precedence relationship and cannot be parallelized. To better elucidate this precedence relationship, we present the flow graph of an eight-point FFT in Fig. <ref>. Here, it is clear that the output of the orange/blue butterfly computation (in the first FFT stage) serves as the input for the red butterfly computation (in the second FFT stage). We cannot execute the red task until the blue and orange tasks have been completed, indicating a precedence relationship between the red task and the blue/orange task. 
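As a purely software-side illustration of this dependency (separate from the Verilog implementation discussed in this paper), the short Python sketch below computes a radix-2 decimation-in-time FFT stage by stage: each pass of the outer loop is one FFT stage and cannot begin until every butterfly of the previous stage has written its outputs, while the butterflies inside a single stage are independent of one another.

import numpy as np

def fft_stages(x):
    """Iterative radix-2 DIT FFT; a software illustration of stage precedence."""
    n = len(x)                          # n must be a power of two
    bits = n.bit_length() - 1
    # Bit-reversal permutation of the input.
    a = np.array([x[int(format(i, f"0{bits}b")[::-1], 2)] for i in range(n)],
                 dtype=complex)
    m = 2
    while m <= n:                       # one iteration of this loop = one FFT stage
        w_m = np.exp(-2j * np.pi / m)
        for k in range(0, n, m):        # butterflies within a stage are independent
            w = 1.0
            for j in range(m // 2):
                t = w * a[k + j + m // 2]
                u = a[k + j]
                a[k + j] = u + t        # stage outputs overwrite stage inputs...
                a[k + j + m // 2] = u - t
                w *= w_m
        m *= 2                          # ...so the next stage must wait for this one
    return a

x = np.arange(8, dtype=complex)
assert np.allclose(fft_stages(x), np.fft.fft(x))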
For a more detailed explanation and formal definition of the precedence relationships among butterfly computations, we refer readers to Section II of a related paper <cit.>. Bearing the above discussion in mind, we note that the precedence relationships among butterfly computations necessitate careful subtask scheduling when implementing FFT in Verilog. Unlike high-level programming languages, which have the assistance of compilers and OSs to manage subtask's scheduling[The scheduling issue can be less complex for high-level programming languages like C or python, as the software compilers and the OS can help to handle the scheduling problem. These tools can distinguish the parallel/precedence relationship and assign the butterfly computations to hardware processers accordingly, and users can just describe their algorithm without too much worry about the scheduling issue. For more details about how to implement FFT in C and what a compiler/OS can help in the implementation, we refer the reader to the documentation of FFTW, a high-performance FFT library <cit.>.], Verilog faces hardware directly. Therefore, designers need to consider the scheduling issue themselves in the HDL code so that these subtasks can be executed in a sequential manner. Over the past few decades, FFT implementation has been extensively studied and several classic scheduling schemes have emerged <cit.>. The simplest approach for precise subtask execution control involves the use of enable signals and output-state-indicating signals. Specifically, if the execution of subtask A depends on the completion of subtask B, we can connect the output-state-indicating signal of B with the enable signal of A to manage their execution. Upon the completion of B, its output indicating signal becomes valid, which subsequently triggers the execution of A. The output-state-indicating signal is sometimes referred to as the “done" signal, as it becomes valid only when the associated subtask is fully executed. We now look back to the code generated by ChatGPT. It is apparent that there is no task execution control in the implementation. Neither the basic method of using enable/done signals nor more advanced methods like state machines <cit.> are observed in the code. To confirm our observation, we validated the code using our testbench and found that many subsequent subtasks were prematurely executed before the outputs of their preceding tasks became valid. This resulted in erroneous outputs at the final stage. The experiment results corroborate our initial assertion: ChatGPT, in its current state, lacks awareness of subtask scheduling and sequential execution control. Therefore, it is incapable of generating a viable FFT module autonomously. In Section <ref>-A, we detail our approach to enabling ChatGPT to comprehend the concept of precedence relationships among subtasks and subsequently implement execution control using enable/done signals. §.§ Challenges Two: The Multi-step Thinking Problem The second issue we identified with the LLM-generated code is the inability of ChatGPT to correctly generate the twiddle factors, a crucial component in FFT calculations (refer to lines 16 to 19 in Fig. <ref>). This issue persisted regardless of the number of attempts or variations in the prompts we used. Before diving into why ChatGPT is unable to generate these factors, it is necessary to provide a detailed understanding for the concept of twiddle factors. As observed in Fig. 
<ref>, the data in the course of the algorithm is multiplied by trigonometric constant coefficients, denoted as W_N^k=e^-j( 2π k/N), where N=8 is the FFT size and the index k∈{ 0,1,...,N/2-1}. These coefficients are referred to as the twiddle factors. In theory, the real and imaginary parts of W_N^k are numbers no larger than one. In hardware processing, however, things are different because digital circuitry is designed to handle integers expressed in binary form. Here, we illustrate how a human engineer would transform the complex number W_N^k into a 32-bit binary sequence, with a 16-bit imaginary part and a 16-bit real part, using W_8^1 as an example: * Step One (calculation): we have W_8^1=e^-j( π/4 )=0.7071-0.7071i from trigonometric calculations. * Step Two (scaling): we scale the real and imaginary parts of W_8^1 by multiplying them by a scaling factor, typically chosen as the maximum value that can be represented by the number of bits allocated for each part (in this case, 16 bits). Hence, we amplify Re( W_8^1) and Im( W_8^1) by 2^15-1. Now, we have Re( W_8^1)=23169.5457 and Im( W_8^1)=-23169.5457. * Step Three (rounding): we round Re( W_8^1) and Im( W_8^1), giving Re( W_8^1)≈23170 and Im( W_8^1)≈ -23170. * Step Four (Conversion to binary): we convert Re( W_8^1) and Im( W_8^1) to their binary representations, which are "0101,1010,1000,0010" and "1010,0101,0111,1110", respectively. * Step Five (Concatenation): we concatenate the binary representations of the scaled real and imaginary parts to form a 32-bit binary sequence, with the higher 16 bits being the imaginary part and the lower 16 bits being the real part. We can now represent W_8^1 by "1010,0101,0111,1110, 0101,1010,1000,0010". By following the above five steps, a human engineer can transform W_N^k into a 32-bit binary sequence suitable for hardware processing in digital circuitry. Note also that when employing the 32-bit sequence for complex multiplication, we need to scale the multiplication output back down appropriately to maintain accuracy, since we amplified W_N^k in Step Two. From the previous discussion, it becomes evident that generating the twiddle factors is not a straightforward process: it involves five distinct steps. Although the logical reasoning required for each individual step might not pose a significant challenge for ChatGPT, the overall problem becomes very difficult for the model because it lacks the ability to decompose the problem into intermediate steps as a human engineer would. This limitation, known as the lack of multi-step thinking ability, has also been observed in recent research within the NLP community <cit.>. A number of studies have been carried out to enhance the capabilities of large language models like ChatGPT by helping them emulate human-like multi-step reasoning processes <cit.>. This line of research aims to help AI solve complex problems that require intermediate steps. Given the analysis above, we have identified the underlying reason why ChatGPT could not generate the twiddle factors in our initial trials. In Section <ref>-B, we discuss our approach to addressing this "multi-step thinking" problem and enabling ChatGPT to perform our task. § SOLVING IMPLEMENTATION CHALLENGES VIA ICL AND COT PROMPTING §.§ In-context Learning (ICL) for Challenge One A brief introduction about ICL Let us first briefly introduce the concept of ICL.
The concept of ICL was popularized in <cit.>, which introduced how to enable GPT-3 to learn from a few examples. In ICL, we give an LLM a prompt containing several question-answer pairs as examples to demonstrate how to complete a task. Following these pairs, a new, unaddressed question is appended to the prompt. The aim is for the LLM to analyze the previously given examples, extrapolate the underlying task, and provide an answer to this new question based on that learning context. In Fig. <ref>, we give an example prompt for using LLMs in a news classification task. As input examples, we provide several news titles and their corresponding topic classifications, creating a series of question-answer pairs. We then present the LLM with a news title for which it must generate the relevant topic classification. To correctly answer this question, the model must analyze the provided examples to understand several aspects of the problem: the structure of the input (news titles), the range of possible outputs (possible news topics), the mapping from input to output (topic classification), and the formatting of the output (a single word with the first letter capitalized). With this understanding, ChatGPT generates the correct answer, i.e., “Technology". ICL distinguishes itself from conventional machine learning algorithms in several key ways <cit.>. Most notably, it does not require any parameter optimization or the addition of new parameters to the model. ICL works effectively with only a handful of training examples to get an LLM operational on a new topic, and its natural language interface is intuitive, even for beginners. There have been recent research efforts aiming to decipher why ICL performs so remarkably well. The prevailing theory is that an LLM can more effectively “locate" a previously learned concept with the assistance of ICL. Specifically, since an LLM is trained on a vast amount of text encompassing a wide range of topics and formats, it can model a diverse array of learned concepts with knowledge from various domains. An LLM can deliver better results if we assist it in selecting the most suitable domain knowledge with the hints provided in our ICL examples. For instance, in this paper, our task necessitates greater domain knowledge in HDL, as opposed to languages like C or Python. For a more comprehensive understanding of the underlying mechanisms that make ICL effective, we refer the reader to <cit.>. ICL for our Verilog-writing task We now demonstrate how we use ICL to build the 64-point FFT module in Verilog. We start by re-shaping the FFT flow graph in an iterative manner for ChatGPT's easier understanding and imitation. In a typical FFT flow graph, such as the one presented in Fig. <ref>, an N-point FFT has log_2N stages. In essence, the signal processing in the subsequent log_2N-1 stages can be perceived as two parallel N/2-point FFT processes. Therefore, as illustrated in Fig. <ref>, using an eight-point FFT example, we can simplify the flow graph into two stages: the first stage consists of N/2 butterfly computations and N/2 complex multiplications, we term these two substages as stage 1-A and stage 1-B, respectively. The second stage encompasses two parallel N/2-point FFT processes. With the new iterative flow graph, we simplify the understanding of FFT and aid in the better comprehension of LLMs. We then analyze the precedence/parallel relationship within the iterative FFT flow graph. 
It is important to note that, beyond the structure of the flow graph, the available hardware resources can also influence these relationships. For instance, if an FPGA has such limited hardware resources that it can execute only one butterfly computation at a time, the butterfly computations within the same stage (such as the four butterfly computations in Stage 1-A of Fig. <ref>) would have a precedence relationship, because they must be executed sequentially, one after another. On the other hand, if the FPGA holds abundant hardware resources, the butterfly computations within the same stage can be executed in a fully parallel manner. In this paper, we consider a scenario where the FPGA has ample hardware resources, so that the precedence/parallelism relationships are determined solely by the flow graph itself. With this assumption, we characterize the precedence/parallelism relationships of an N-point FFT as follows: * Stage 1-A: The N/2 butterfly computations within this stage can be executed in parallel. These computations can be processed simultaneously when triggered by the external enable signal. * Stage 1-B: The N/2 complex multiplications within this stage can be executed in parallel, but their execution is triggered by the completion of the butterfly computations in Stage 1-A. * Stage 2: The two N/2-point FFTs within this stage can be executed in parallel, but their execution is triggered by the completion of the complex multiplications in Stage 1-B. With the simplified precedence/parallel relationships and the iterative FFT flow graph discussed above, we now demonstrate how we generate our question-answer pairs and conduct ICL with the goal of creating a 64-point FFT module using ChatGPT. In the first step, we use ChatGPT to generate two simple IP cores that will be used frequently in the subsequent FFT implementation: the butterfly computation IP core and the complex multiplication IP core. Fig. <ref> elucidates the prompt specifically devised for this task, while Fig. <ref> showcases the code generated by ChatGPT. In the second step, we give the first example question-answer pair. The example question, as depicted in Fig. <ref>, asks the LLM to generate a four-point FFT IP core, building upon the provided two-point FFT (which is identical to the butterfly computation IP core). Our example answer, as illustrated in Fig. <ref>, employs two butterfly computations, two complex multiplications, and two two-point FFTs to construct a four-point FFT, adhering to the iterative structure delineated in Fig. <ref>. Furthermore, this example answer also demonstrates the methodology of connecting the "enable" and "done" signals of sub-modules to realize the precedence/parallel relationships outlined above. In the third step, we proceed with the second example question-answer pair. The example question, presented in Fig. <ref>, asks ChatGPT to develop an eight-point FFT module based on the provided four-point FFT, which was obtained in the first question-answer pair. Our example answer, showcased in Fig. <ref>, outlines how the numerous sub-modules (consisting of four butterfly computations, four complex multiplications, and two four-point FFTs) are interconnected in accordance with the iterative FFT flow graph. Furthermore, we present the method of connecting the "enable" and "done" signals once again, reinforcing this knowledge for ChatGPT. In the fourth step, we cease providing examples.
Instead, we pose a new question to ChatGPT akin to the previous example question: generate a 16-point FFT predicated on the eight-point FFT provided (i.e., the one we give as the example answer in step three). This time, ChatGPT produces an implementation code that is synthesizable and capable of generating outputs identical to those of a Xilinx 16-point FFT IP core, thereby ensuring functional correctness. The only persisting issue pertains to the absence of twiddle factors, a problem that we intend to address in the succeeding subsection (in the above benchmarking with the Xilinx IP core, we filled in the twiddle factors generated in Subsection B to the code). Finally, we repeat step four in an iterative manner. This is, we ask ChatGPT to generate an N-point FFT with the provided N/2-point FFT (which was generated by ChatGPT in the preceding iteration). We do not stop the iteration until we acquire the desired FFT module. In this paper, as a proof of concept, we terminate at the 64-point FFT and present the generated code as in Fig. <ref> below. We test the code with our 64-point FFT testbench and compare the output of the LLM-written module with the output of a 64-point FFT IP core provided by Xilinx. Experimental results reveal that the Verilog module, written by ChatGPT, is functionally accurate after the above iterative generation process. §.§ Chain-of-Thought (CoT) Prompting for Challenge Two A brief introduction about CoT prompting A typical class of tasks that present challenges to language models is solving mathematical problems, particularly those requiring multi-step reasoning <cit.>. This challenge persisted as a tough problem in the NLP community until the advent of LLMs. In <cit.>, the authors surprisingly discovered that their language model's arithmetic reasoning capability can be dramatically enhanced when the model size scales beyond 100 billion parameters. Furthermore, <cit.> found that guiding an LLM through a human's chain-of-thought in breaking down a multi-step problem into intermediary steps can enable the model in solving complex reasoning problems, which are unattainable with conventional prompting methods. These two inspiring discoveries have inspired a recent surge of research interest in chain-of-thought (CoT) prompting for LLMs. We now give an example to illustrate the concept of CoT prompting. In this example, as shown in Fig. <ref>, the baseline prompt comprises ICL with a single example question-answer pair. In contrast, the CoT prompt extends the example answer to incorporate a chain of thought detailing how the problem should be dissected and tackled. For more examples illustrating the efficacy of CoT prompting, we refer interested readers to <cit.>. From the above example, it is evident that ICL coupled with CoT prompting outperforms the baseline approach. However, it is important to note that we do not mean that a contemporary LLM cannot generate the correct answer using the baseline prompt. Our intention is to use this example to demonstrate the concept of CoT prompting and how it should be employed. In fact, LLMs nowadays have advanced beyond those reported in early studies and can produce correct results for the simple question depicted in Fig. <ref>, even without the assistance of ICL or CoT prompting. However, for more complex tasks, such as the twiddle factor generation tasks we describe in Section <ref>-C, we observe that the challenge of multi-step reasoning persists. 
That motivates us to integrate CoT prompting within the ICL framework in this study. CoT prompting for our twiddle factors generation task In Section <ref>-C, we elucidated the process of converting twiddle factors into 32-bit sequences for digital circuitry. Here we describe the multi-step transformation process in detail and design the CoT prompt. As an illustration, our prompt employs the generation process of the twiddle factors for an eight-point FFT (i.e., W_8^0, W_8^1, W_8^2, and W_8^3) as examples. And then we ask ChatGPT to generate the twiddle factors for a 16-point FFT. We depict our prompt and the resulting twiddle factor sequences in Fig. <ref> and Table I, respectively. Moreover, we verify that the same CoT prompt (employing W_8^0, W_8^1, W_8^2, and W_8^3 as examples) is applicable for generating twiddle factors for larger-scale FFTs, such as 32-point or 64-point. In other words, we can skip teaching ChatGPT about the factors generation of a 32-point FFT and directly jump to the factor generation of a 64-point FFT, which affirms that the LLM does internalize the crucial knowledge imparted through the CoT prompt (rather than simply parroting the input). Here we do not present the twiddle factor generation process for larger-scale FFTs due to page limitation. We encourage interested readers to give it a try themselves. § CONCLUSION This paper delves into the intersection of Large Language Models (LLMs) and wireless communication technologies, yielding inspiring results in utilizing LLMs to prototype wireless systems. Our research highlights the potential of LLMs in facilitating complex FPGA development within wireless systems. We begin by demonstrating how an LLM can serve as a crucial assistant for FPGA development, providing examples in code refactoring, code reuse, and system validation. Moreover, we showcase LLMs' ability to generate sophisticated Hardware Description Language (HDL) codes for advanced signal-processing algorithms in wireless communication, with a focus on the fundamental Fast Fourier Transform (FFT) processing. By addressing the subtask scheduling problem and multi-step thinking problem through In-context Learning (ICL) and the Chain of Thoughts (CoT) prompting techniques, we successfully generated a 64-point Verilog FFT module using LLMs for the first time. This exploration of LLMs' generalization and imitation capabilities expands their potential applications and underscores their value in the wireless communication domain. IEEEtran
http://arxiv.org/abs/2307.03948v1
20230708101129
Reading Between the Lanes: Text VideoQA on the Road
[ "George Tom", "Minesh Mathew", "Sergi Garcia", "Dimosthenis Karatzas", "C. V. Jawahar" ]
cs.CV
[ "cs.CV" ]
G. Tom et al. Center for Visual Information Technology (CVIT), IIIT Hyderabad, India {george.tom,minesh.mathew}@research.iiit.ac.in, [email protected] Computer Vision Center (CVC), UAB, Spain {sergi.garcia,dimos}@cvc.uab.cat AllRead Machine Learning Technologies Reading Between the Lanes: Text VideoQA on the Road George Tom1 0009-0002-7343-1680 Minesh Mathew1 0000-0002-0809-2590 Sergi Garcia-Bordils2,3 0000-0002-4222-8367 Dimosthenis Karatzas2 0000-0001-8762-4454 C.V. Jawahar10000-0001-6767-7057 August 12, 2023 ============================================================================================================================================================================================= Text and signs around roads provide crucial information for drivers, vital for safe navigation and situational awareness. Scene text recognition in motion is a challenging problem, while textual cues typically appear for a short time span, and early detection at a distance is necessary. Systems that exploit such information to assist the driver should not only extract and incorporate visual and textual cues from the video stream but also reason over time. To address this issue, we introduce RoadTextVQA, a new dataset for the task of video question answering (VideoQA) in the context of driver assistance. RoadTextVQA consists of 3,222 driving videos collected from multiple countries, annotated with 10,500 questions, all based on text or road signs present in the driving videos. We assess the performance of state-of-the-art video question answering models on our RoadTextVQA dataset, highlighting the significant potential for improvement in this domain and the usefulness of the dataset in advancing research on in-vehicle support systems and text-aware multimodal question answering. The dataset is available at http://cvit.iiit.ac.in/research/projects/cvit-projects/roadtextvqahttp://cvit.iiit.ac.in/research/projects/cvit-projects/roadtextvqa § INTRODUCTION In this work, we propose a new dataset for Visual Question Answering (VQA) on driving videos, with a focus on questions that require reading text seen on the roads and understanding road signs. Text and road signs provide important information to the driver or a driver assistance system and help to make informed decisions about their route, including how to reach their destination safely and efficiently. Text on roads can also provide directions, such as turn-by-turn directions or the distance to a destination. Road signs can indicate the location of exits, rest stops, and potential hazards, such as road construction or detours. Reading text and understanding road signs is also important for following traffic laws and regulations. Speed limit signs, yield signs, and stop signs provide important information that drivers must follow to ensure their own safety and the safety of others on the road. VQA is often dubbed as the Turing test for image/video understanding. The early datasets for VQA on images and videos <cit.> largely ignored the need for reading and comprehending text on images and videos, and questions were mostly focus on the visual aspects of the given image or video. For example, questions focused on the type, attributes and names of objects, things or people. However, the text is ubiquitous in outdoor scenes, and this is evident from the fact that nearly 50% of the images in the MS-COCO dataset have text in them <cit.>. 
Realizing the importance of reading text in understanding visual scenes, two datasets—Scene text VQA <cit.> and Text VQA <cit.> were introduced that focus exclusively on VQA involving scene text in natural images. Two recent works called NewsVideoQA<cit.>, and M4-ViteVQA<cit.> extend text-based VQA works to videos by proposing VQA tasks that exclusively focus on question-answers that require systems to read the text in the videos. Similar to these works that focus on text VQA on videos, our work proposes a new dataset where all the questions need to be answered by watching driving videos and reading the text in them. However, in contrast to NewsVideoQA which contains news videos where question-answer pairs are based on video text (born-digital embedded text) appearing on news tickers and headlines, the text in videos in our dataset are scene text. The text in the road or driving videos are subjected to blur, poor contrast, lighting conditions and distortions. Text while driving goes by fast and tends to be heavily occluded. Often, multiple frames needs to be combined to reconstruct the full text, or a good frame with readable text needs to be retrieved. These difficulties made researchers focus on road-text recognition exclusively, and there have been works that focus exclusively on the detection, recognition and tracking of road text videos <cit.>. On the other hand M4-ViteVQA contains varied type of videos such as sports videos, outdoor videos and movie clips. A subset of these videos are driving videos. In contrast, our dataset is exclusively for VQA on driving videos and contains at least three times more questions than in the driving subset of M4-ViteQA. Additionally, questions in our dataset require both reading road text and understanding road signs, while M4-ViteVQA's focus is purely on text-based VQA. Specifically our contributions are the following: * We introduce the first large scale dataset for road text and road sign VQA containing 10K+ questions and 3K+ videos. * We provide a thorough analysis of the dataset and present detailed statistics of videos, questions and answers. We also establish heuristic baselines and upper bounds that help to estimate the difficulty of the problem. * We evaluate an existing popular VQA model and two SoTA VideoQA models on our dataset and demonstrate that these models fail to perform well on the new dataset since they are not designed to read and reason about text and road signs. § RELATED WORK §.§ VideoQA In video question answering(VideoQA), the goal is to answer the question in the context of the video. Earlier approaches to VideoQA use LSTM to encode the question and videos<cit.>. Several datasets have been created in recent years to assist research in the field of video question answering (VideoQA). Large datasets such as MSRVTT-QA<cit.> contain synthetic generated questions and answers where the questions require only an understanding of the visual scenes. MOVIE-QA<cit.> and TVQA<cit.> are based on scenes in movies and TV shows. Castro et al.<cit.> introduced a dataset with videos from the outside world for video understanding through VideoQA and Video Evidence Selection for interpretability. MOVIE-QA<cit.>, TVQA<cit.>, HowtoVQA69M<cit.> provide explicit text in the form of subtitles. Multiple-Choice datasets<cit.> consist of a pre-defined set of options for answers. When compared to open-ended datasets, they can be considered limiting in the context of real-world applications. 
Synthetically generated datasets<cit.> contain questions that are generated through processing video descriptions, narration and template questions. MSRVTT-QA<cit.> exploits the video descriptions for QA creation. HowToVQA69M<cit.> uses cross-modal supervision and language models to generate question-answer pairs from narrated videos, whereas ActivityNetQA<cit.> uses template questions to generate the QA pairs. Xu et al. introduced the SUTD-TrafficQA<cit.> dataset and the Eclipse model for testing systems' ability to reason over complex traffic scenarios. The SUTD-TrafficQA<cit.> dataset contains multiple-choice questions that are based on different traffic events. RoadTextVQA is an open-ended dataset that deals with questions related to the text information found in road videos or the signs posted along roads. Recent studies<cit.> on pretraining transformers on other vision and language tasks have shown excellent results for the VideoQA task. Lei et al. <cit.>, in their study, uncovered the bias present in many video question-answering datasets, which only require information from a single frame to answer, and introduced new tasks aimed at training models to answer questions that necessitate the use of temporal information. §.§ VideoQA involving video text NewsVideoQA<cit.> and M4-ViteVQA<cit.> are two recently introduced datasets that include videos with embedded born-digital text and scene text, respectively. Both datasets require an understanding of the text in videos to answer the questions. Embedded text, sometimes called video text in news videos, is often displayed with good contrast and in an easy-to-read style. Scene text in the RoadTextVQA dataset can be challenging to read due to the factors such as occlusion, blur, and perspective distortion. M4-ViteVQA contains videos from different domains, a few of them being shopping, driving, sports, movie and vlogs. The size of RoadTextVQA is more than three times the size driving subset of M4-ViteVQA. Additionally, a subset of questions in RoadTextVQA also requires domain knowledge to answer questions related to road signs. Few recent works<cit.> on vision and language transformers have shown to work well with text-based VQA tasks. Kil et al.<cit.> introduced PreSTU, a pretraining method that improves text recognition and connects the recognized text with the rest of the image. GIT(GenerativeImage2Text)<cit.> is a transformer-based model for vision and language tasks with a simple architecture that does not depend on external OCR or object detectors. §.§ Scene Text VQA Our work, which focuses on VQA requiring text comprehension within videos, shares similarities with other studies dealing with text in natural images, commonly known as Scene Text VQA. The ST-VQA<cit.> and TextVQA<cit.> datasets were the first to incorporate questions requiring understanding textual information from natural images. LoRRa<cit.> and M4C<cit.> utilized pointer networks<cit.> that generate answers from a fixed vocabulary and OCR tokens. In addition, M4C used a multimodal transformer<cit.> to integrate different modalities. TAP<cit.> employed a similar architecture to M4C and incorporated a pretraining task based on scene text, improving the model's alignment among the three modalities. Another study, LaTr<cit.>, focused on pretraining on text and layout information from document images and found that incorporating layout information from scanned documents improves the model's understanding of scene text. 
§ ROADTEXTVQA DATASET This section looks at the data collection and annotation procedure, data analysis, and statistics. §.§ Data Collection The videos used in the dataset are taken from the RoadText-3K<cit.> dataset and YouTube. The RoadText-3K dataset includes 3,000 ten-second road videos that are well-suited for annotation because they have a considerable quantity of text. The RoadText-3K dataset includes videos recorded in the USA, Europe, and India and features text in various languages such as English, Spanish, Catalan, Telugu and Hindi. Each video contains an average of 31 tracks. However, the European subset is excluded from the annotation process for RoadTextVQA as it is dominated by texts in Spanish/Catalan, and the RoadTextVQA is designed specifically for English road-text. In addition to the videos from RoadText-3K, additional dashcam videos were sourced from the YouTube channel J Utah[ <https://www.youtube.com/@jutah>]. 252 videos from USA and UK were selected, and clips with a substantial amount of text were further selected by running a text detector over the video frames. Being a free and open-source text detector popular for scene text detection, we went with EasyOCR<cit.> as our choice of text detector. The RoadText-3K videos have a resolution of 1280x720 with a frame rate of 30 frames per second. To keep the data consistent, the YouTube clips were downsampled to the same resolution and frame rate of 1280x720 at 30fps. Individuals who are proficient in the English language were hired to create the question-answer pairs. To ensure the quality of the applicants, an initial training session was conducted, followed by a filtering mechanism in the form of a comprehensive quiz. The quiz was designed to ensure that the question-answer pairs were created by individuals who had a solid grasp of the English language and a good understanding of the task, thereby enabling us to maintain a high standard of quality in the annotations. The annotation process involved two stages, and a specifically designed web-based annotation tool was used. In the initial stage, annotators add the question, answers and timestamp triads for videos shown to them. All the questions have to be based on either some text present in the video or on any road sign. In cases where a question could have multiple answers in a non-ambiguous way, the annotators were given the option to enter several answers. The timestamp is an additional data point which is collected, and it is the aptest point in the video at which the question is answerable. The annotators were instructed to limit the number of questions to not more than ten per video and to avoid asking any questions related to the vehicle license plate numbers. If there were no possible questions that could be asked from the video, then the annotators were given the option to reject it. In the verification stage, the video and the questions are shown, and the annotators had to add the answers and the timestamps. We made sure that verification is done by an annotator different from the one who has annotated it in the first stage. If the question is incorrect or does not follow the annotation guidelines, it is flagged and rejected. If for a question, there are common answers in the annotation stage and verification stage, then that question is considered valid. All the common answers are considered valid answers to the question. In the verification stage, additional data regarding the question-answers are also collected. 
The questions are categorically tagged into two distinct classes. Firstly, based on the type of question— text-based or traffic sign-based. The second classification captures whether the answer for a question, i.e., the text that makes up the answer, is present in the video or not. §.§ Data Statistics and Analysis The RoadTextVQA dataset contains 3,222 videos and 10,500 question-answer pairs. Among the 3,222 videos, 1,532 videos are taken from the RoadText-3K dataset and the rest are from YouTube. The data is randomly split into 2,557 videos and 8,393 questions in the train set, 329 videos and 1,052 questions in the test, and 336 videos and 1,055 questions in the validation set. The videos for the test and validation sets were randomly chosen from the RoadText-3K split, as it has ground truth annotations for text tracking. Methods that use OCR data can take advantage of the accurate annotations provided by RoadText-3K. We present statistics related to the questions in RoadTextVQA through <ref>, and <ref>. <ref> shows the most frequent questions and their frequencies. “What is written on the road with white block letters?" is the most recurrent, followed by questions regarding the speed limits on the roads. <ref> provides a comprehensive overview of the question distribution in RoadTextVQA, with the majority of the questions being centred around details of shops located along the road. <ref> depicts the word count in the questions and answers, respectively. The average number of words in the questions in RoadTextVQA is 10.8, while the average number in the answers is 1.45. The average number of words in questions is much higher when compared to other text-based VideoQA datasets, as seen in <ref>. The percentage of unique questions stands at 86.6%, while the percentage of unique answers is 40.7%. <ref> shows the top 30 answers and the number of occurrences. <ref>, in the form of a word cloud, illustrates the most frequently occurring answers and OCR tokens. The most popular answers are “right", “left", “yes", and “no". The most prevalent OCR tokens in the videos are “stop", “only", and “one way". The distribution of the videos in the dataset based on the geographic location where it was captured is shown in <ref>. More than two-thirds of the videos in the dataset are captured from roads in the USA. The majority of questions are grounded on text seen in the video (61.8%), and the rest are based on road signs. Road signs can also contain text, such as speed limit signs or interchange exit signs. 68% of questions have answers that can be found within the text present in the video, while the remaining 32% of questions require an answer that is not a text present in the video. § BASELINES This section presents details of the baselines we evaluate on the proposed RoadTextVQA dataset. §.§ Heuristic Baselines and Upper Bounds We evaluate several heuristic baselines and upper bounds on the dataset. These heuristics and upper bounds are similar to those used in other VQA benchmarks, such as TextVQA<cit.> and DocVQA<cit.>. The following heuristic baselines are evaluated: (i) Random Answer: performance when answers to questions are randomly selected from the train split. (ii) Random OCR token: performance when a random OCR token from the video is picked as the answer. (iii) Majority Answer: performance when the most common answer in the train split is considered as the answer for all the questions. 
The following upper bounds are evaluated (i) Vocab UB: the upper bound on predicting the correct answer if it is present in the vocabulary of all the answers from the train split. (ii) OCR UB: the upper bound on performance if the answer corresponds to an OCR token present in the video. (iii) Vocab UB + OCR UB: this metric reflects the proportion of questions for which answers can be found in the vocabulary or the OCR transcriptions of the video. §.§ M4C The M4C<cit.> model uses a transformer-based architecture to integrate representations of the image, question and OCR tokens. The question is embedded using a pretrained BERT<cit.> model. Faster R-CNN<cit.> visual features are extracted for the objects detected and the OCR tokens in the image. The representation of an OCR token is formed from the FastText<cit.> vector, PHOC<cit.> vector, bounding box location feature, and Faster R-CNN feature of the token. A multi-head self-attention mechanism in transformers is employed, enabling all entities to interact with each other and model inter- and intra-modal relationships uniformly using the same set of transformer parameters. During answer prediction, the M4C model employs an iterative, auto-regressive decoder that predicts one word at a time. The decoder can use either a fixed vocabulary or the OCR tokens detected in the image to generate the answer. §.§ SINGULARITY The architecture of SINGULARITY<cit.> is made up of three major components: a vision encoder using ViT<cit.>, a language encoder utilizing BERT<cit.>, and a multi-modal encoder using a transformer encoder<cit.>. The multi-modal encoder uses cross-attention to collect information from visual representations using text as the key. Each video or image is paired with its corresponding caption during the pretraining phase, and the model is trained to align the vision and text representations using three losses (i) Vision-Text Contrastive: a contrastive loss which aligns the representations of vision and language encoders, (ii) Masked Language Modeling<cit.>: masked tokens are predicted (iii) Vision-Text Matching: using the multi-modal encoder, predict the matching score of a vision-text pair. We use the SINGULARITY-temporal model, which is pretrained on 17M vision caption pairs<cit.>. The SINGULARITY-temporal model contains a two-layer temporal encoder that feeds its outputs into the multi-modal encoder. SINGULARITY-temporal makes use of two new datasets named SSv2-Template Retrieval, and SSv2-Label Retrieval created from the action recognition dataset Something-Something v2 (SSv2)<cit.>. The pretraining is a video retrieval task using text queries. An additional multi-modal decoder is added for open-ended QA tasks and is initialised from the pretrained multi-modal encoder, which takes the multi-modal encoder's output as input and generates answer text with [CLS] as the start token. §.§ GenerativeImage2Text GIT(GenerativeImage2Text)<cit.> is a transformer-based architecture aimed at unifying all vision-language tasks using a simple architecture pretrained on 0.8 billion image text pairs. GIT consists of an image encoder and a text decoder and is pretrained on a large dataset of image text pairs. The image encoder is a Swin-like<cit.> transformer based on the contrastive pretrained model, which eliminates the need for other object detectors or OCR. As for the text decoder, the GIT uses a transformer with a self-attention and feed-forward layer to generate text output. 
The visual features and the text embeddings are concatenated and used as inputs to the decoder. GIT is able to gradually learn how to read the scene text with large-scale pretraining and hence achieves SoTA performance on scene-text-related VQA tasks such as ST-VQA. For video question answering, GIT employs a method of selecting multiple frames from the video and separately embeds each frame with a learnable temporal embedding which is initialized as zeros, and the image features are concatenated and used similarly to the image representation. The question and the correct answer are combined and used in the sense of a special caption, and the language model loss is computed solely on the answer and the [EOS] token. § EXPERIMENTS AND RESULTS This section covers the evaluation metrics, the experimental setup, and the experiment results. §.§ Experimental Setup Evaluation metrics. We use two evaluation metrics to evaluate the model's performance: Average Normalized Levenshtein Similarity (ANLS)<cit.> and Accuracy (Acc. (%)). The Accuracy metric calculates the percentage of questions where the predicted answer exactly matches any of the target answers. ANLS, on the other hand, does not award a zero score for all predictions that do not match the ground truth string exactly. The score was originally proposed to act softly on cases where the predicted answer differs slightly from the actual. ANLS measures a similarity(based on the Levenshtein distance) between the prediction and ground truth and normalizes it as a score in the range [0,1]. If the score is less than 0.5, the final ANLS score for the prediction is set to zero. OCR transcriptions. The ground truth annotations were utilized for the videos in the RoadText-3K set, while for the remaining videos, the OCR transcriptions were sourced using the Google Cloud Video Intelligence API. Both RoadText-3K ground truth annotations, and the Google API provide text transcriptions at the line level. We use the line-level text transcriptions as the OCR tokens for the calculation of OCR upper bounds and OCR-based heuristics as given in the <ref>. When a text track gets cut off from the frame or partially occluded by other objects in a video, the Google Cloud Video Intelligence API treats it as a new track, whereas RoadText-3K annotations ignore the partially occluded tracks. This is why in the <ref>, the number of videos vs the number of tracks is a bit inflated for the YouTube clips when compared to RoadText-3K clips. Experimental setup for M4C. The M4C<cit.> model is trained using the official implementation, and the training parameters and implementation details remain consistent with those used in the original paper. We used a fixed vocabulary of size 3926 generated from the train set. The training data consists of image question-answer pairs where the image selected for training is the one on which the questions are based, specifically the timestamp frame. After training, the model is evaluated using two approaches. Firstly, it is tested on the timestamp QA pairs of the test set, and secondly, it is evaluated on the video level by sampling ten frames from the respective video for each QA pair and obtaining the model prediction for every frame individually. The final answer is determined by taking the most common answer from the ten individual frame predictions. Experimental setup for SINGULARITY. We fine-tuned the pretrained SINGULARITY-temporal 17M model on four NVIDIA Geforce RTX 2080 Ti. 
The fine-tuning process was run for 20 epochs with a batch size of 16, starting with an initial learning rate of 1e-5 and increasing linearly in the first half epoch, followed by cosine decay<cit.> to 1e-6. The other parameters used for training are the same as the official implementation. The video frames were resized to 224x224, and a single frame with random resize, crop and flip augmentations was utilised during training, whereas 12 frames were used during testing. Additionally, we fine-tuned the SINGULARITY model, which has been pretrained on the MSRVTT-QA<cit.> dataset. Experimental setup for GIT. The training process for GIT was carried out using a single Tesla T4 GPU for 20 epochs with a batch size of 2. We use an Adam<cit.> optimizer with an initial learning rate starting at 1e-5 and gradually decreasing to 1e-6 through the use of cosine decay. The GIT model was trained using the official VideoQA configuration used for MSRVTT-QA training. We fine-tuned the pretrained GIT-large model on our dataset, using six frames that were evenly spaced as inputs during both training and testing. In addition, we further fine-tuned the GIT model that was pretrained on the MSRVTT-QA<cit.> dataset. §.§ Results Heuristic baselines and upper bound results are presented in the <ref>. The heuristic baselines yield very low accuracy, which indicates the absence of any bias due to the repetition of answers. Random OCR heuristic gives close to 2% accuracy, meaning that there is enough text present in the video that selecting a random OCR from the video will not yield high accuracy. The OCR upper bound is 36.6% which is low when compared to the percentage of questions which have the answers present in the video. The low OCR UB can be attributed to how the text detection and how ground truth annotation is done. The response to a question may be split into multiple lines within the video, leading to the representation of the answer as separate tokens in the OCR output. This happens because the annotations in the OCR process were carried out on a line level. From the upper bound result of Vocab + OCR UB, we can see that more than three-quarters of the answers are present in either the vocabulary or in the OCR tokens of the video. The results on M4C are shown on <ref>. The frame level results, where we evaluate on the timestamp frame, show an accuracy of 38.20% and the video level results, where we evaluate on ten frames, give an accuracy of 28.92%. The results show that answering the question is still a challenging task, even when we reduce the complexity of the problem by providing the aptest frame for answering the question and ground truth OCR tokens. We show the results after fine-tuning on SINGULARITY and GIT in <ref>. The accuracy of the questions requiring answers to be extracted from the video (AP) is comparatively lower, while the accuracy of the questions where the answer is not present in the video is comparatively higher. Compared to AP, ANP is less complex to answer because it involves a fixed set of answers. In contrast, AP requires dynamic extraction from OCR tokens, resulting in the ANP set having better accuracy than AP. Additionally, fine-tuning the model that has been pretrained on the MSRVTT-QA dataset shows improvement in accuracy across all categories(TB, RSB, AP, and ANP). Fine-tuning GIT results in better performance compared to SINGULARITY. GIT also shows a similar trend when fine-tuned on pretrained MSRVTT-QA dataset. 
The “answer is present in the videos(AP)" subset has an improvement of 3.9% in accuracy when compared with SINGULARITY, whereas the “answer is not present(ANP)" in the videos subset has a gain of 6.3%. M4C tested on a single frame shows better results compared to VideoQA models. This can be attributed to the fact that we explicitly provide the OCR tokens and the correct frame on which the question is framed to the model. M4C tested on ten frames gives comparable results to GIT. We show some of the qualitative results in <ref>. As the complexity of the scene and the obscurity of the scene text increase, it becomes more and more difficult for the model to predict the correct answer. VideoQA baselines achieve better results on questions that do not require the extraction of answers from the video. § CONCLUSIONS We introduce RoadTextVQA, a new Video Question Answering dataset where the questions are grounded on the text and road signs present in the road videos. Our findings from the baseline models' performance indicate a need for improvement in existing VideoQA approaches for text-aware multimodal question answering. Future work can involve augmenting the dataset by incorporating videos obtained from diverse global locales. Currently, there are recurrent questions and answers due to repeating elements in the videos. Including videos from various locations broadens the diversity of the dataset by providing a more comprehensive range of questions and answers and minimizes any biases within the dataset. To our best knowledge, currently, there are no Visual Question Answering models that explicitly incorporate road signs. Models can integrate road signs as an additional input or pretrain on road sign-description pairs to enhance their ability to respond to questions that require domain knowledge. We believe this work would encourage researchers to develop better models that incorporate scene text and road signs and are resilient to the challenges posed by driving videos. Additionally, drive further research in the area of scene text VideoQA and the development of advanced in-vehicle support systems. § ACKNOWLEDGEMENTS This work has been supported by IHub-Data at IIIT-Hyderabad, and grants PDC2021-121512-I00, and PID2020-116298GB-I00 funded by MCIN/AEI/ 10.13039/501100011033 and the European Union NextGenerationEU/PRTR. splncs04
http://arxiv.org/abs/2307.07610v1
20230714201412
Assessing and Exploiting Domain Name Misinformation
[ "Blake Anderson", "David McGrew" ]
cs.CR
[ "cs.CR" ]
Assessing and Exploiting Domain Name Misinformation Blake Anderson Cisco [email protected] David McGrew Cisco [email protected] August 12, 2023 ===================================================================================== Cloud providers' support for network evasion techniques that misrepresent the server's domain name is more prevalent than previously believed, which has serious implications for security and privacy due to the reliance on domain names in common security architectures. Domain fronting is one such evasive technique used by privacy enhancing technologies and malware to hide the domains they visit, and it uses shared hosting and HTTPS to present a benign domain to observers while signaling the target domain in the encrypted HTTP request. In this paper, we construct an ontology of domain name misinformation and detail a novel measurement methodology to identify support among cloud infrastructure providers. Despite several of the largest cloud providers having publicly stated that they no longer support domain fronting, our findings demonstrate a more complex environment with many exceptions. We also present a novel and straightforward attack that allows an adversary to man-in-the-middle all the victim's encrypted traffic bound to a content delivery network that supports domain fronting, breaking the authenticity, confidentiality, and integrity guarantees expected by the victim when using HTTPS. By using dynamic linker hijacking to rewrite the HTTP Host field, our attack does not generate any artifacts that are visible to the victim or passive network monitoring solutions, and the attacker does not need a separate channel to exfiltrate data or perform command-and-control, which can be achieved by rewriting HTTP headers. Domain Fronting, Censorship Circumvention, TLS Proxy, Command-and-Control § INTRODUCTION Domain name-based intelligence has long been used by the security research community to identify and remediate malware infections and other attacks. For example, Ma et al. extracted domain names from Mirai binaries and then used passive and active DNS datasets to perform DNS expansion in order to construct a graph highlighting the shared infrastructure used by Mirai variants <cit.>. Roberts et al. use domain names in TLS certificates to identify domain impersonation attacks <cit.>. The above investigations relied on the fact that the domain names in DNS responses and TLS certificates accurately represented the identity of the servers that the clients intended to communicate with. This assumption is not necessarily true. Evasive software can misrepresent its servers' domain names in DNS and TLS, a feature utilized by both privacy enhancing tools and malware. Domain fronting is a popular method that applications use to misrepresent their target server's domain name. It leverages shared hosting and HTTPS to present a benign domain in the DNS request, TLS , and TLS , while signaling the target domain in the encrypted HTTP request. Domain fronting is possible when shared hosting providers use a TLS termination proxy, providing themselves visibility into the HTTP . Section <ref> provides an in-depth description of domain fronting and related techniques. Many cloud providers have stated that they no longer support domain fronting <cit.>. 
To verify these statements and to better understand the domain name misinformation ecosystem, we developed a novel measurement system that identifies candidate sets of domain name/IP address tuples related to each other by increasingly specific measures. In this paper, we first analyze candidate sets based on their autonomous systems. We then use the insight that, when a provider supports domain fronting, the target and front domains are both from a specific set of domains associated with that provider. These sets of candidate domains can be constructed through passive DNS monitoring, and then scanned to characterize what domain misinformation techniques, if any, are supported. We construct DNS-related candidate sets using the domain name and fully qualified domain name (FQDN) returned in DNS CNAME records. As an example, the measurement system identifies candidate sets related by Fastly using the autonomous system, the canonical domain name, and the canonical FQDN. As we demonstrate in Section <ref>, the increasing level of specificity allows one to have a much clearer view into the conditions in which hosting providers support domain name misinformation. Once we have the candidate sets of domain name/IP address tuples, we then investigate pairs of tuples belonging to the same set to identify support for domain name misinformation. For each pair of destinations, several scans are initiated to retrieve baseline values as well as to exercise different techniques such as domain fronting. We present the scanning methodology in Section <ref>. Our results show that many cloud providers support domain fronting; sometimes intentionally, sometimes optionally, and sometimes unwittingly. For example, while you cannot use to front domains hosted by Google App Engine, the set of domains mapping to the canonical name creates an equivalence class of domains that can all be used to front between each other. Complicating the situation further, many popular services are hosted by multiple, unrelated providers. For example, is hosted by at least Azure, Akamai, StackPath, and Limelight. While you cannot perform domain fronting with this domain through Azure or Akamai, you can through StackPath and Limelight. A security model that trusts the domain name, without considering the provider and its support for domain evasion, is vulnerable to evasion. After measuring support for domain name misinformation, Section <ref> shifts the focus towards how an attacker can co-opt domain fronting, which was put forward as a privacy enhancing technology that allows political dissidents of repressive regimes access to an uncensored Internet <cit.>. While these privacy goals of domain fronting are unequivocally altruistic, it would be irresponsible to ignore how these same regimes can leverage domain fronting to stealthily maintain a surveillance state. A feature of modern content delivery networks (CDNs) is the decoupling of a domain's TLS certificate and origin server. When the CDN does not verify that the domain name appearing in the HTTP field is represented in the certificate, the core tenet of trust between the user and origin server is broken. In past examples of domain fronting, broken trust has not been an issue because the user is a willing participant in the deception. But, if an attacker were to modify the HTTP without the victim's knowledge, the end-to-end security guarantees of HTTPS no longer hold. 
We have developed a proof-of-concept attack that leverages dynamic linker hijacking <cit.> through the Linux trick <cit.> to rewrite the HTTP field immediately preceding the encryption of the HTTP request. While the proof-of-concept requires an attacker to have a presence on the endpoint, there are other ways to achieve the intended functionality, e.g., by using a supply chain compromise <cit.> of popular browsers or TLS libraries. In the attack, the victim first initiates a TLS handshake with the target domain, and the CDN establishes the TLS connection with the target's proper certificate. When the victim makes an HTTP request, the HTTP is overwritten to point to the attacker controlled domain. A CDN that supports domain fronting will then route the HTTP request to the origin server specified by the attacker. The attacker can now view all the decrypted traffic and modify or otherwise censor the decrypted traffic. This behavior clearly violates the authenticity, confidentiality, and integrity guarantees that HTTPS claims to provide. The attacker can optionally create a stealthy command-and-control channel by rewriting request headers and adding HTTP response headers. Unlike simply overriding DNS responses on the endpoint or directly exfiltrating the decrypted data to an attacker-owned server, our attack does not modify the IP address or generate additional network connections, making the attack significantly more stealthy. § BACKGROUND AND RELATED WORK The domain name misinformation techniques discussed in Section <ref> rely on several network protocol standards along with the general mechanics of CDNs, both of which are introduced in this section. We conclude this section by reviewing relevant related work. §.§ Network Protocols The Domain Name System (DNS) <cit.> provides various mechanisms to associate information with domain names, e.g., it can translate human-readable domain names into routable IP addresses. The extension mechanism around DNS, <cit.>, allows for larger message sizes and more advanced handling of DNS requests. In the context of this paper, is important because it facilitates DNS responses that optimize the returned IP addresses based on geography in order to reduce latency. A DNS CNAME record maps an alias domain to the true, canonical name. CNAME records are useful when a single server hosts multiple subdomains, e.g., both and will be aliases of . CNAME records are also useful in the context of CDNs as described below. After the client obtains an IP address via DNS, the client then begins to directly communicate with the server. For our purposes, we assume the client uses Transport Layer Security (TLS) 1.3 <cit.>. After the TCP handshake, the client sends a TLS handshake record specifying its supported cryptographic parameters and supplying some additional data such as the extension, which provides the domain name of the server the client wishes to communicate with. The is particularly useful in virtual hosting environments, like CDNs, to help route the connection to the backend server without the CDN's load balancer having to man-in-the-middle the connection. The server responds with a handshake record selecting a set of cryptographic parameters based on the client's preferences. Older versions of TLS negotiate the remainder of the handshake in the clear, while TLS 1.3 begins to encrypt the handshake records. A TLS 1.3-capable server then sends an encrypted handshake record. 
This record contains a chain of certificates that allows the client to verify the identity of the server. Wildcard certificates use a wildcard character (*) as a subdomain in either the or field and allows the certificate to secure multiple subdomains belonging to the same domain name. Finally, the client and server finish the key exchange and begin to exchange records encrypted with the negotiated keys. For the purpose of this paper, we assume the client and server exchange encrypted messages using HTTP/1.1 <cit.> or HTTP/2 <cit.>. The intended server's domain name is located in the (HTTP/1.1) or (HTTP/2) field, and the target URI is in the (HTTP/1.1) or (HTTP/2) field. HTTP/3 <cit.> is the latest incarnation of HTTP and it uses the QUIC transport <cit.>. QUIC uses TLS for key negotiation and many of the observations in this paper are directly applicable, but deeper analysis is out-of-scope for the current work. Because any on-path observer can view and potentially modify plaintext DNS traffic, encrypted DNS protocols were developed to take advantage of the above protocols to secure DNS. DNS-over-HTTPS (DoH) <cit.> is one such protocol that maps DNS requests and responses to HTTP and encrypts the connection using TLS. Other encrypted DNS protocols include DNS-over-TLS (DoT) <cit.> and DNS-over-QUIC (DoQ) <cit.>. In summary, from the point-of-view of a passive network observer and the above protocols, domain names would appear in the unencrypted DNS request and response, the TLS handshake record, and the TLS handshake record for non-1.3 versions of TLS. Domain names are opaque in DoH and TLS 1.3 handshake records. §.§ Content Delivery Networks Content delivery networks have the goals of reducing latency and improving redundancy for hosted artifacts, while also protecting against security threats, e.g., denial of service attacks. The CDN's servers are geographically dispersed and placed at strategic locations to reduce latency. If a client requests content from a domain that uses a CDN, the DNS response will typically contain a CNAME record where the IP address belongs to the CDN. The client will then initiate a TLS handshake with the CDN, which hosts the intended domain's certificate. After the TLS handshake, the CDN will proxy the HTTP traffic on behalf of the origin server, which maintains the content requested by the user. If the requested content is not stale, the CDN will return a cached version of the content, and otherwise will request and cache the latest version from the origin server. §.§ Related Work Fifield et al. <cit.> presented the first academic treatment of domain fronting as a censorship circumvention tool. They described 7 CDNs that supported domain fronting at the time of publication. They additionally implemented and studied the deployment of domain fronting in Tor <cit.>, Lantern <cit.>, and Psiphon <cit.>. The authors also examined some detection mechanisms based on network traffic analysis, e.g., packet lengths, and concluded that there were no reliable traffic characteristics that would allow one to detect domain fronting. Wang et al. <cit.> studied the ability to detect several network protocol obfuscators including Tor/meek leveraging Google and Amazon for domain fronting. One of their detection strategies included machine learning classifiers that used entropy-based, timing-based, and packet-header data features. Similarly, Li et al. 
<cit.> evaluated a convolutional neural network and features based on packet lengths to detect Tor/meek leveraging Azure and Fastly for domain fronting. Both papers found that a well-resourced censor could reliably detect meek using domain fronting with a low false positive rate. Importantly, Wang et al. note, “the detection techniques we explore can be, in turn, easily circumvented in almost all cases with simple updates to the obfuscator" <cit.>. Instead of viewing domain fronting through the lens of a privacy enhancing technology, Dunwoody <cit.> examined meek/Google domain fronting as it is used by a nation-state attacker, APT29 <cit.>, in order to evade detection. Similarly, McLellan et al. <cit.> investigated how the UNC2465 ransomware group used a legitimate Microsoft domain as a front for their hard-coded domain, . § MISINFORMATION ONTOLOGY While we have mainly highlighted domain fronting as a domain name misinformation technique up to this point, there are several related techniques worth noting. In this section, we review the techniques investigated in Section <ref>: domain fronting, domain faking, and domainless fronting. We additionally review two similar techniques that do not cleanly fit the misinformation characterization: wildcard certificate fronting and domain shadowing. For the purposes of this paper, an evasion technique is considered to be domain name misinformation if the encrypted HTTP value does not match the TLS value and is not covered by the TLS certificate's or extension. §.§ Domain Fronting Domain fronting is the misinformation technique that has garnered the most attention from the privacy research <cit.> and incident response <cit.> communities. Figure <ref> illustrates the key features of domain fronting, where the client wishes to communicate with by fronting . Domain fronting starts with a DNS request for some popular, allowed domain, in this example: . This hypothetical domain is hosted by a CDN that supports domain fronting and has an edge device that maps to the IP address, which the DNS server returns. The client then initiates a TLS handshake with the CDN and sets the 's to . Because the CDN controls the certificates for both of the unrelated domains, and , it returns the proper certificate for . After the TLS handshake, the client sends an encrypted record with an HTTP request where the HTTP field is set to . The edge device then decrypts the record and extracts the field. Depending on the cache configuration and state, the edge device will then reach out to the origin server of , ignoring the value presented in the TLS , and return the requested content; successfully evading DNS and TLS-layer enforcement of . Domain fronting is possible because the origin server allows the CDN to host their domain's TLS certificate and to decrypt all traffic destined to the origin server. Domain fronting may be an intended feature or an architectural flaw, i.e., the CDN may not keep state associating the of the original TLS with the value present in the HTTP . If that state is not present or is purposefully ignored, then the CDN can route the HTTP request at its discretion. §.§ Domain Faking Domain faking occurs when the server or edge device returns the same certificate and serves content for domains secured by that certificate irrespective of the value present in the TLS . From the perspective of a passive network observer, domain fronting appears to be legitimate because the edge device returns a valid certificate and serves content for the fronted domain. 
In contrast, a device supporting domain faking does not return a valid certificate for the fronted domain because it is not authorized to serve content on behalf of the fronted domain. Similarly, a DNS request for the fronted domain will not point to the domain faking device. For domain faking to appear reasonable to passive network observers, the operator needs to leverage recently developed standards to obfuscate the exchanges in Figure <ref> that cannot be modified, i.e., the DNS exchange and the TLS certificate. A client begins domain faking by initiating an encrypted DNS request containing the blocked domain, e.g., by using DNS-over-HTTPS <cit.>. The client then sends a TLS 1.3 with the set to the allowed domain. The server is configured to ignore the extension and returns its default certificate, which is encrypted with TLS 1.3 <cit.>. The client is configured to ignore the returned certificate. The client and server then complete the TLS handshake and begin exchanging encrypted records. Domain faking is successful for two reasons. First, the same entity has some control over the client and server, which allows it to ignore errors that would otherwise result in a TLS handshake failure. Second, domain faking requires the censor to perform significantly more work in the form of either blocking all encrypted/unsanctioned DNS, scanning the server to retrieve its certificate, or maintaining state that maps IP addresses to domains. Telegram <cit.> serves as a real-world example of an application that supported domain faking. It used encrypted DNS, primarily to , and a TLS 1.3 handshake. Telegram set the to , but the visited IP addresses are entirely unrelated to Google's infrastructure, e.g., belongs to the autonomous system. Telegram appears to have deprecated this behavior in November 2022. §.§ Domainless Fronting Fifield et al. <cit.> put forth the concept of domainless fronting, which has many of the same limitations as domain faking. Its main feature is purposefully omitting the extension or leaving it blank. We consider domainless fronting to misinform when it is used to evade censorship, as opposed to when the is omitted due to the client using an obsolete version of TLS. Similar to domain faking, it must either use hard-coded IP addresses or rely on encrypted DNS. TLS 1.3 is helpful to obfuscate the certificate but is not always necessary depending on the hosting infrastructure and configuration. For example, Alibaba's CDN will return a generic certificate with the subject set to , which provides little information, but many Akamai-hosted domains will return an informative certificate. Unsurprisingly, this difference is typically related to whether the hosting provider allows domains to be hosted on static IP addresses. Domainless fronting is appealing because it is relatively simple to configure and does not need to rely on a domain name owned by another entity, reducing the likelihood of collateral damage. Additionally, TLS sessions that naturally omit the are relatively common but may be trending lower. Fifield et al. reported that 16.5% of TLS connections lacked the extension in June 2014 <cit.>, Anderson et al. had that number at ∼10% in the first half of 2019 <cit.>, and we observed that number to be 7.7% in October 2022. While these numbers may not be methodologically comparable, the potential trend is interesting and deserves further investigation. 
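To make the domainless probe concrete, the sketch below retrieves the certificate a server presents when the ClientHello carries no SNI at all, using only the Python standard library. This is an illustrative probe in the spirit of the scanning methodology presented later in the paper, not the scanner actually used for our measurements; the IP address shown is a placeholder.

```python
import socket
import ssl

def default_certificate(ip: str, port: int = 443) -> str:
    """Fetch the PEM certificate a server presents when the TLS
    ClientHello carries no SNI extension (domainless probe)."""
    ctx = ssl.create_default_context()
    # We want to observe whatever certificate comes back, so do not
    # fail the handshake on name mismatches or unknown issuers.
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((ip, port), timeout=10) as sock:
        # server_hostname is omitted, so no SNI is sent on the wire.
        with ctx.wrap_socket(sock) as tls:
            der = tls.getpeercert(binary_form=True)
    return ssl.DER_cert_to_PEM_cert(der)

if __name__ == "__main__":
    # Placeholder address: substitute an IP resolved from passive DNS.
    print(default_certificate("192.0.2.10"))
```

Comparing the subject and subjectAltName of this default certificate against the certificate returned when a specific SNI is supplied is what distinguishes a generic fallback, as with Alibaba's CDN, from an informative one, as with many Akamai-hosted domains.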
§.§ Odds and Ends Wildcard certificate fronting takes advantage of the default domain names and certificates provided by CDNs and cloud infrastructure providers to allow fronting between subdomains secured by the default certificate, despite there being no relation between the owners of those subdomains. For example, if you create a public S3 bucket without a custom domain, AWS assigns a generic domain name of the form and provides a default TLS certificate which covers and . Google Cloud provides similar mechanics for their storage service, . While wildcard certificates are not inherently a security risk in the general case, wildcard certificates that secure many subdomains are worth investigating. This is especially true for the case of subdomains that are unrelated, i.e., the organizations that own and maintain the resources are distinct entities and are only related by the fact that they pay to use the same infrastructure. We further discuss the security risks of wildcard certificate fronting with domains in the context of our novel attack in Section <ref>. Another related technique is domain shadowing, which was recently introduced by Wei <cit.>. Domain shadowing relies on a popular CDN feature: rewriting the HTTP field. The user first registers a new domain that will be used to access blocked resources, , then binds that domain to the target domain, , and finally creates a rule to rewrite the field from to for incoming requests. Domain shadowing has the clear advantage of the HTTP field not indicating the target domain until it is rewritten, which could potentially evade decrypting firewalls. But, for the current topic, we do not consider it misinformation because the shadowed domain has a one-to-one relationship with the target domain and the shadowed domain would match the TLS certificate when the HTTP request arrived at the CDN. 1.02 § MEASURING DOMAIN NAME MISINFORMATION We developed a scanning methodology to detect the domain name misinformation techniques of Section <ref> and to better understand their support on the Internet. The results are presented in progressively more granular groupings of destinations, first assessing an autonomous system-wide grouping of destinations and finishing with a grouping based on the fully qualified domain name (FQDN) of the DNS canonical name. As we will show, much of the ambiguity around organizations' support for these techniques erodes with more specific groupings. The last subsection finishes by discussing the prevalence of this support among the most popular domains. 1.02 §.§ Methodology Given a pair of related (, )-tuples, one marked as the target and the other as the front, our scanning system is designed to identify if the techniques described in Section <ref> apply. The relationships examined in Section <ref> are based on autonomous systems, and the domain name and FQDN of the canonical name present in the DNS CNAME record. The system generates a candidate set of related destination tuples by grouping them based on these criteria. For each destination tuple, the scanner generates 5 pairs of tuples by randomly selecting other destinations in the candidate set. Table <ref> describes the 5 scans that are executed given a pair of destination tuples, where the target is (, ) and the front is (, ). For each scan, Table <ref> lists the IP address, TLS , and encrypted HTTP used for the connection. and are used to determine if the misinformation techniques were successful and simply scan the target and front IP/domain, respectively. 
The three remaining scans attempt to use the two destinations to perform domain fronting, domain faking, and domainless fronting, setting the protocol fields as given in Table <ref>. To determine if the misinformation techniques were successful, the scanner collects several response features for each scan: * a JSON representation of the TLS certificate * the HTTP * the full list of HTTP response headers and values * the length of the returned content If either of the TLS certificates associated with the baseline scans secures both domains, e.g., both domains appear in the extension, domain fronting is not applicable, and the results are ignored. If either of the baseline scans returns a non- HTTP , the results are also ignored. This pruning may be overly aggressive, but it helps to remove ambiguity, which made analyzing the results more straightforward. After the above pruning, the analysis considers the non-baseline scans to be successful if the scan: * returns a HTTP , and * the length of the returned content matches that of but not . To better handle dynamic content, the system makes two exceptions to the second criterion. First, if the length of the returned content for a misinformation scan is within 5% of 's length and not within 20% of 's length, it is considered successful. All manually inspected instances of dynamic content satisfying this exception were correct. Second, if the HTTP response header names and ordering for a misinformation scan exactly match those of and not , it is considered successful. Roughly 87% of the successful misinformation scans maintained ordering for HTTP response headers. In most failure cases, the server responds with either , , or . The scanning code was written in Python and PySpark and was deployed on an Amazon EMR cluster with 300 executors. The scanner uses the Python library to make connections and specifies the following HTTP headers for each scan: headers = {'Host': http_host, 'User-Agent': USER_AGENT, 'Connection': 'close'} where http_host is given in Table <ref> and USER_AGENT describes a Chrome 104 client running on Windows 10. To have more control over the destination IP address, the scanner uses a monkey patch for socket.getaddrinfo before each scan that hardcodes the returned IP address: socket.getaddrinfo = (lambda *args: [(socket.AddressFamily.AF_INET, socket.SocketKind.SOCK_STREAM, 6, '', (dst_ip, 443))]) where dst_ip is given in Table <ref>. To make the scanning more efficient, the system runs a prefiltering scan immediately before grouping destination tuples and running the scans in Table <ref>. The system scans each destination and filters out destinations that do not return a HTTP . Despite this filtering step, some baseline scans would return a non- code. Most of these cases were explained by distributed denial-of-service protections, unsurprising given the parallel nature of the PySpark scanning infrastructure. The system re-ran these failed scans sequentially with a pure Python scanner and applied the rules listed above to the results. Details of the specific datasets for each experiment are given in the subsections of Section <ref>. To find the initial set of labels, we analyzed passive DNS data collected from ∼80 geographically dispersed sites all belonging to a single multinational enterprise. The passive DNS data was filtered to only include DNS CNAME records. We then grouped alias domain names by the canonical name's domain name and sorted the canonical names by the number of unique alias domain names that map to them.
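A minimal sketch of this grouping step is shown below. It assumes the CNAME records have already been parsed into (alias FQDN, canonical FQDN, IP address) tuples and uses plain Python rather than the PySpark pipeline described above; the registered_domain helper is a naive stand-in for a proper public-suffix-aware eTLD+1 extraction, and the example records are fabricated.

```python
from collections import defaultdict

def registered_domain(fqdn: str) -> str:
    # Placeholder eTLD+1 extraction; a real pipeline should use a
    # public-suffix-list aware library instead of this naive split.
    return ".".join(fqdn.rstrip(".").split(".")[-2:])

def candidate_sets(cname_records):
    """Group alias domains by the registered domain of their CNAME target.

    cname_records: iterable of (alias_fqdn, canonical_fqdn, ip) tuples
    taken from passive DNS. Returns canonical domains sorted by the
    number of distinct alias domains that map to them.
    """
    groups = defaultdict(set)    # canonical domain -> distinct alias domains
    members = defaultdict(set)   # canonical domain -> (alias, ip) tuples to scan
    for alias, canonical, ip in cname_records:
        key = registered_domain(canonical)
        groups[key].add(registered_domain(alias))
        members[key].add((alias, ip))
    ranking = sorted(groups, key=lambda k: len(groups[k]), reverse=True)
    return [(k, len(groups[k]), members[k]) for k in ranking]

records = [
    ("www.example.org", "example.org.global.fastly.net", "203.0.113.10"),
    ("cdn.example.net", "example.net.global.fastly.net", "203.0.113.11"),
]
for canonical, n_aliases, tuples in candidate_sets(records):
    print(canonical, n_aliases, sorted(tuples))
```

The same grouping can be keyed on the autonomous system of the returned IP address for the coarser candidate sets, or on the full canonical FQDN for the finer ones.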
The most common domains found with this method are reported in Section <ref>. We omitted some domains that were related and behaved similarly, e.g., Edgecast has a series of canonical names that begin with a Greek letter and end in , but we only report results for . Given the list of domain names associated with canonical names, we generated both more generic and more specific labels. For the more generic autonomous system labels, we collected all IP addresses in the DNS CNAME records for a given canonical name and mapped those IP addresses to their autonomous systems. We report results for the two most prevalent autonomous systems for each canonical name, which covered almost all observed records. For the more specific canonical name FQDNs, we performed an analysis similar to that of the canonical name's domain name, i.e., we grouped all alias domains by canonical name FQDNs and sorted the FQDNs by their number of unique alias domain names. Some CDNs like Baidu and Fastly have a relatively well-defined, small set of canonical names that service a large number of distinct customers. Other CDNs, like Cloudflare and StackPath, often encode a customer-specific domain or a unique ID as a subdomain of the canonical name. The FQDN measurement also explains the ambiguity of misinformation support when looking at more generic relationships. 1.02 §.§ Results In this section, we present the results of scanning Internet infrastructure to determine support for domain name misinformation. All destination tuples used for scanning in these subsections were collected from the same ∼80 geographically dispersed sites belonging to the single multinational enterprise mentioned above, but the type of data varies as discussed below. We used our open-source tool, <cit.>, to collect the necessary network metadata, which was collected between January 22nd, 2023 and February 21st, 2023, and is referred to as . §.§.§ Autonomous Systems To generate the list of destinations to scan, we extract the TLS value and destination IP address from all passively observed packets containing a TLS from . We then map the IP addresses to their respective autonomous systems, group the destination tuples based on their autonomous system, and filter data that does not belong to a tracked autonomous system. For efficiency reasons, we keep the 100,000 most prevalent destination tuples per autonomous system. These scan results are presented in Table <ref>. The number of observed domains is after the filtering step described in Section <ref>. The cells are highlighted to indicate how much ambiguity there is in the results. Green/red is used when there is less than 5% or greater than 95% support for a given technique. Yellow is used for the remaining range and indicates competing architectures within the same autonomous system, where only some support the misinformation technique. For example, only 62.62% of the domain fronting scans for destinations that map to the autonomous system were successful, despite Fastly being known to support domain fronting. We investigate this discrepancy and similar issues in the following subsections. The relatively high support for domain faking and domainless fronting among most autonomous systems is partly explained by server autonomy, i.e., many of the domains are not associated with a CDN and are responsible for their own server configurations. In this case, most of the IP addresses only map to a single domain. 
Unfortunately, there is not always an obvious explanation for the support numbers in Table <ref> because grouping destinations by autonomous system conceals a large amount of diversity in the underlying infrastructure, which motivates the following sections. §.§.§ DNS CNAME Domain To address some of the limitations from the previous subsection, we now investigate domain name misinformation support when we group destination tuples based on their canonical name's domain name. We first collect all DNS CNAME records from the dataset, and then extract the alias domain, canonical domain, and IP address of the canonical name from each record. Again, we keep the 100,000 most prevalent destination tuples per canonical domain. The results in Table <ref> seem to be converging towards more clear answers, but there still exists caveats. Some of the support numbers slightly less than 100% for a given technique are explained by network failures in the scanning, but the support close to zero is more difficult to explain without intimate knowledge of the platforms. 's modest support for domainless fronting is also surprising given Cloudflare's dependence on the extension. After investigating specific examples, the primary pattern was related to destinations that were hosted on autonomous systems not related to Cloudflare but did use a canonical name in their DNS records. Microsoft's will sometimes support domain fronting and faking when the domains map to the same IP address. In these cases, they appear to share the same load balancer, but do have unrelated TLS certificates. 1.02 Viewing the scanning results based on canonical domains does make the hosting providers' support for different misinformation techniques very clear in some cases, e.g., alias domains mapping to and will support domain fronting and faking, but will not support domainless fronting. But some popular canonical domains remain unclear, e.g., , , and . §.§.§ DNS CNAME FQDN We are again addressing the previous subsection's limitations by using a more granular grouping of destinations, the canonical name's FQDN. The data preparation was the same as the previous subsection's except for using the canonical name's FQDN. Table <ref> presents the results, where some of the FQDNs have been shortened if it appeared that they were specific to a customer as opposed to specific to the provider. Support, or lack thereof, for the different misinformation techniques has become much clearer. For instance, 's uncertainty for domain fronting support can now be explained by prohibiting fronting between customer-owned properties () and Google-owned properties () but allowing fronting within those groups. Like the above observation, Fastly's mixed support for domain fronting when using to group destinations is explained by looking at the more specific canonical FQDNs. For example, we found that domain fronting, domain faking, and domainless fronting are all possible between alias domains that map to the canonical FQDN. On the other hand, there were some canonical FQDNs that contained customer-specific identifiers as subdomains that did not support any of the misinformation techniques. is interesting due to the large number of distinct services that are mapped to it, including third-party groups that take advantage of Amazon's Elastic Load Balancing (ELB) service. The two FQDNs in Table <ref> are companies that use ELB to host their customers' websites. 
Their backend environments must be configured differently because of the discrepancy in domainless fronting. Microsoft's was the one outlier whose domain name misinformation support isn't clear. In November 2022, Azure began to prohibit domain fronting for newly created Azure Front Door resources and created a support system for customers to prohibit domain fronting on older resources <cit.>. Azure will discontinue domain fronting for all resources in November 2023. Table <ref> provides a snapshot of this policy's effects, and Azure's fronting support should converge to 0 in November 2023. §.§ Discussion Using candidate sets based on the CNAME FQDN clearly provides the most information, but these sets are limited because they are not always applicable. As discussed above, many popular canonical FQDNs are specific to a single customer, in which case candidate sets constructed through the canonical domain name would be more informative. Candidate sets based on autonomous systems or subnets are useful when CNAME records do not exist for a given domain. While we do know the possibilities and constraints with respect to misinformation techniques for some popular canonical names, there are many caveats that deserve further attention. For example, a small set of Akamai-hosted domains will allow domain fronting, which may be related to custom software stacks like Drupal <cit.>, which we observed running US government sites that allow domain fronting. In any case, the complexity of this type of analysis is likely to continue to grow with the complexity of CDNs and deserves further research. Domain fronting is in part successful due to its selection of a popular domain to use as a front. To better understand popular domains' support for domain fronting, we analyze the freely available Umbrella Popularity List <cit.>, which lists the top-1 million queried domains based on their global DNS infrastructure. Using the popularity list from February 21st, 2023, we ran on each domain from AWS EC2 instances in the and regions. We used the default DNS resolver and <cit.> to collect all DNS responses. 331 thousand domains from the top-1 million list mapped to a canonical name. 133 thousand of the domains mapped to a canonical name that was analyzed in Table <ref>. Using the domain fronting support from Table <ref>, we estimate that at least 19.4% of these 133 thousand and 2.5% of the top-1 million domains can be used as a front. The DNS resolutions across regions were relatively stable, perhaps unsurprising given the global presence of most CDNs. But there were minor differences between the two regions that may be explained by the need for redundancy and load balancing configurations as opposed to geography. The most notable example being , which was ranked 14th in the Umbrella list and is used by Microsoft to update its list of trusted and untrusted root certificates. The canonical name returned in the region was related to Azure, but it was related to Limelight in the region. Within our datasets, we observed the alias domain mapped to canonical names belonging to Azure, Akamai, Limelight, and StackPath, where the latter two CDNs have broad support for domain fronting. § EXPLOITING DOMAIN NAME MISINFORMATION As we have shown in the previous section, domain fronting support among many of the popular CDNs is not particularly rare. 
We now shift the focus of this paper towards how a well-resourced adversary can exploit domain fronting to break the end-to-end security guarantees of HTTPS and stealthily man-in-the-middle network sessions that connect to a CDN supporting domain fronting. We implemented a proof-of-concept attack against Fastly and AWS domains, which will succeed against any of the providers that we report as supporting domain fronting. Our attack injects a small module into the victim's browser to rewrite selected fields, and then greatly amplifies its effect by implementing the proxying and monitoring functions externally. Our proof-of-concept uses dynamic linker hijacking <cit.>, but many other techniques could be used, including the more generic execution flow hijacking <cit.>, process injection <cit.>, or a supply chain compromise <cit.>. We note that in response to a preprint of this paper, AWS promptly implemented a fix to prevent this attack and domain fronting between CloudFront distribution points is no longer possible. §.§ Threat Model The main goals of the attacker are two-fold: 1) perform a man-in-the-middle attack to catalog or censor a victim's targeted traffic, where “targeted" is defined as traffic destined to interesting sites hosted by CDNs that support domain fronting, e.g., , and 2) establish a stealthy command-and-control channel through HTTP header manipulation. To execute the attack, the attacker only needs to have the ability to 1) modify HTTP headers either through hijacking the execution flow of a process or by successfully executing a supply chain attack, and 2) create an account on a CDN that supports domain fronting. While the first point is far more onerous than the second, we note that dynamic linker hijacking <cit.> is a standard attack technique not uncommon in malicious software and is facilitated by attack tools like metasploit <cit.>. The attacker does not have the ability to direct the victim towards domains that can be intercepted. If the attacker's goal was to man-in-the-middle traffic, the victim's traffic destined to would need to arise organically through the victim's actions. From the point-of-view of establishing a command-and-control channel, this may not be a restriction in practice because a small set of popular domains are visited hundreds of times per day by a given user. In the dataset, domains were visited ∼27 times per day per user. The primary benefits of this attack to the adversary include: 0em * There is no need for a separate channel to perform data exfiltration or command-and-control. As is common in remediation efforts, incident responders heavily rely on IP address and domain name-based indicators of compromise to identify infected hosts (e.g., see <cit.>), which are not present. * From the point-of-view of the victim and passive network observers, there are no abnormal artifacts associated with the network connection that could lead to detection. * Dynamic linker hijacking and supply chain attacks will result in a computationally efficient method to rewrite HTTP requests, lowering the risk of abnormal memory or CPU spikes typically associated with decrypting and processing network traffic. While overriding DNS responses on the endpoint or directly exfiltrating the decrypted data to an attacker-owned server may accomplish similar goals to our attack, both methods would introduce artifacts that make identifying the attacker's actions possible. 
§.§ Attack Before AWS addressed the proposed attack, we verified that the end-to-end attack worked on AWS CloudFront domains of the form using Firefox, and it should also work with any hosting provider that supports domain fronting or wildcard certificate fronting, and any client application susceptible to execution flow hijacking or supply chain attacks. This section focuses on CloudFront, but we have also verified that the attack works against most Fastly domains, e.g., we were able to successfully intercept Firefox connections to . In our proof-of-concept illustrated in Figure <ref>, the goal of the attacker is to create a stealthy command-and-control channel while also eavesdropping on all the victim's encrypted connections to a domain of the form in a way that is completely opaque to the user and network monitoring tools. The attack is straightforward: 0em * The attacker creates an EC2 instance to act as the origin server and installs a webserver capable of proxying traffic, e.g., . The origin server can run on any hosting provider, preferably closer to the CDN to reduce latency. * The attacker then creates a CloudFront distribution point, configures the above EC2 instance as the origin server, and configures CloudFront to forward all request headers to the origin server. * In the Fastly case, the attacker would then register a domain name of the same length as the domains to be intercepted and configure the CNAME record to point to Fastly. * On the victim's endpoint, the attacker uses dynamic linker hijacking to intercept popular function calls related to encryption, e.g., and . We modified 's <cit.> intercept functionality for this step and provide a code sample in Appendix <ref>. * During interception, if the HTTP contains a pattern consistent with default CloudFront distribution points, the attacker rewrites the to point to the attacker-owned distribution point. * In order to know the victim's original destination, the attacker identifies common HTTP headers such as the field and rewrites the data associated with those headers to include the original destination. The configuration code to extract this value is given in Appendix <ref>. * The attacker optionally rewrites more data in the HTTP request/response headers to facilitate stealthy command-and-control. * When the request reaches the attacker's origin server, the attacker records all relevant information and proxies the request on behalf of the victim. When testing this attack, Firefox would segfault if we attempted to create a larger HTTP request by changing the length of the HTTP or creating additional HTTP headers. The proof-of-concept worked around this constraint by not modifying the length of the initial HTTP request. This constraint is easily overcome if the attacker registers multiple domains of varying length and replaces the HTTP with a domain of appropriate length. From the point-of-view of the victim and non-CDN network monitoring tools, all data features will be legitimate: the TLS contains the victim's intended CloudFront domain, CloudFront returns a valid certificate, the IP address is the same, the response content exactly matches what the victim requested, and the browser/OS would not be able to log the malicious domain. The CDN may have enough information to detect this attack, but we are unaware of any CDN that currently makes this data available to end users. § DISCUSSION The scanning results presented in Section <ref> were meant to be representative, but not necessarily exhaustive. 
For example, there are many AWS services that map to the canonical name, but we only gave specific results for Elastic Load Balancer. The methodology of Section <ref> can be used for future studies that further characterize domain name misinformation and its support. We have shown that many popular CDNs support domain fronting in Section <ref>, and that malicious actors can abuse that support to man-in-the-middle a victim's traffic as described in Section <ref>. Again, the attack is possible because the unwitting victim is presented a valid certificate by the CDN and the attacker-proxied traffic is what the victim expected. New and developing standards may achieve the same goals as domain fronting without exposing users to the risks of our attack. For example, the Encrypted Client Hello (ECH) <cit.> Internet draft results in an encrypted . When used in combination with DNS-over-HTTPS <cit.> and TLS 1.3 <cit.>, passive network observers would only be able to extract the destination's IP address, which would only provide CDN-level information. Importantly, privacy enhancing technologies would not need to misrepresent the domain name in the extension, and CDNs could more comfortably discontinue support for domain fronting. The connections between an earlier version of ECH, encrypted SNI, and censorship circumvention were previously established <cit.>. §.§ Ethics The study of domain name misinformation is naturally divisive because these techniques obfuscate key data features used by incident response teams to identify malware infections, while at the same time furthering privacy enhancing technologies. It is our hope that these results motivate the security and privacy research community to develop privacy enhancing technologies that are less prone to abuse by malicious actors. While the attack in Section <ref> may not be considered a traditional vulnerability, we do believe that hosting providers should be made aware that their support for domain fronting can facilitate such an attack. To further that goal, we sent a preprint of this paper to each named company in December 2022. Most companies responded within two months. We were able to provide additional data around the attack that helped AWS confirm and fix the root cause in CloudFront. Companies such as Vercel and Fastly acknowledged the attack and said that they are in the process of implementing additional controls. § CONCLUSION Domain fronting, domain faking, and domainless fronting are domain name misinformation techniques, which all have serious implications for security and privacy due to the reliance on domain names in common security architectures and the importance of these techniques in privacy enhancing technologies. We have presented a novel measurement methodology to identify support among cloud infrastructure providers and shown that many content delivery networks and cloud infrastructure providers support domain name misinformation techniques; sometimes intentionally, sometimes optionally, and sometimes unwittingly. We have also presented a straightforward attack that leverages dynamic linker hijacking and domain fronting in a way that would allow malicious actors to stealthily maintain a surveillance state. With our attack, the attacker is able to man-in-the-middle all the victim's encrypted traffic bound to a content delivery network that supports domain fronting, breaking the authenticity, confidentiality, and integrity guarantees expected by the user when using HTTPS. 
We have successfully demonstrated a working attack on most Fastly and AWS CloudFront domains, and the attack should apply to all domains hosted by a provider that supports domain fronting.
http://arxiv.org/abs/2307.04996v1
20230711032454
Empowering recommender systems using automatically generated Knowledge Graphs and Reinforcement Learning
[ "Ghanshyam Verma", "Shovon Sengupta", "Simon Simanta", "Huan Chen", "Janos A. Perge", "Devishree Pillai", "John P. McCrae", "Paul Buitelaar" ]
cs.IR
[ "cs.IR", "cs.AI", "cs.LG", "14J60 (Primary) 14F05, 14J26 (Secondary)", "F.2.2; I.2.7" ]
[email protected] [1] University of Galway Ireland [email protected] University of Galway Ireland [email protected] University of Galway Ireland [email protected] University of Galway Ireland [email protected] University of Galway Ireland [email protected] Fidelity Investments USA [email protected] Fidelity Investments USA [email protected] University of Galway Ireland Personalized recommendations have a growing importance in direct marketing, which motivates research to enhance customer experiences by knowledge graph (KG) applications. For example, in financial services, companies may benefit from providing relevant financial articles to their customers to cultivate relationships, foster client engagement and promote informed financial decisions. While several approaches center on KG-based recommender systems for improved content, in this study we focus on interpretable KG-based recommender systems for decision-making. To this end, we present two knowledge graph-based approaches for personalized article recommendations for a set of customers of a large multinational financial services company. The first approach employs Reinforcement Learning (RL) and the second approach uses the XGBoost algorithm for recommending articles to the customers. Both approaches make use of a KG generated from both structured (tabular data) and unstructured data (a large body of text data). Using the RL-based recommender system we could leverage the graph traversal path leading to the recommendation as a way to generate interpretations (Path Directed Reasoning (PDR)). In the XGBoost-based approach, one can also provide explainable results using post-hoc methods such as SHAP (SHapley Additive exPlanations) and ELI5 (Explain Like I’m Five). We also compared the above approaches with published algorithms for building recommender systems. Our proposed RL-based recommender system achieved 43.76% MAP (MAP@K=10). Our RL-based recommender system outperformed both the XGBoost-based approach and baseline model (Bayesian personalized ranking) by 13.38 and by 32.55 percentage points, respectively, delivering more accurate and personalized article recommendations. Importantly, our approach offers explainable results, promoting better decision-making. This study underscores the potential of combining advanced machine learning techniques with KG-driven insights to bolster experience in customer relationship management. 
Empowering recommender systems using automatically generated Knowledge Graphs and Reinforcement Learning Paul Buitelaar August 12, 2023 ======================================================================================================== § INTRODUCTION The increasing demand for personalized content has led to the development of recommendation systems that can effectively utilize structured information. Knowledge graphs (KGs) have emerged as a promising solution for this challenge, offering improved recommendation performance and explainability due to the inherent comprehensibility of relationships between entities <cit.>. A growing body of research is dedicated to exploring the potential of knowledge graph reasoning in personalized recommendation <cit.>. One line of research focuses on knowledge graph embedding models, such as TransE <cit.> and node2vec <cit.>, which align the knowledge graph in a regularized vector space, identifying the similarity between entities by calculating the distance between their representations <cit.>. However, purely KG embedding-based approaches struggle to uncover multi-hop relational paths, limiting the ability to capture complex relationships between entities. Another line of research investigates path-based recommendation techniques. Gao et al. <cit.> proposed the concept of meta-paths for reasoning over KGs. Although promising, this approach faces challenges when dealing with the numerous types of relations and entities present in large, real-world KGs, making it difficult to explore relationships between unconnected entities. Wang et al. <cit.> developed a path embedding approach for recommendation over KGs that enumerates all qualified paths between every user-item pair, followed by training a sequential RNN model to predict ranking scores for the pairs. While this method improves recommendation performance, it is not feasible to explore all paths for every user-item pair in large-scale KGs due to computational limitations. Recent advances have focused on combining collaborative filtering (CF) with KG embedding techniques to enhance recommendation performance <cit.>. For example, Ai et al. <cit.> proposed a method that incorporated a soft matching algorithm to identify explanation paths between users and items. However, this strategy generates explanations post-hoc through empirical similarity matching between user and item embeddings, providing retrospective rationales for the chosen recommendations rather than deriving explanations from the reasoning process.
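For concreteness, the snippet below sketches the translational scoring at the heart of the TransE-style embedding models discussed above: a relation is represented as a vector offset between head and tail entities, and candidate items are ranked by distance in the embedding space. This is a generic illustration of the published TransE scoring function with toy, randomly initialized embeddings; the entity names and dimensionality are invented for the example and are not taken from the system presented in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 64

# Toy lookup tables; in practice these embeddings are learned from KG triples.
entities = {name: rng.normal(size=dim) for name in ["user_42", "article_7", "article_9"]}
relations = {name: rng.normal(size=dim) for name in ["clicked"]}

def transe_score(head: str, relation: str, tail: str) -> float:
    """TransE plausibility: smaller ||h + r - t|| means a more likely triple."""
    h, r, t = entities[head], relations[relation], entities[tail]
    return float(np.linalg.norm(h + r - t, ord=1))

# Rank candidate articles for a user under the "clicked" relation.
candidates = ["article_7", "article_9"]
ranked = sorted(candidates, key=lambda a: transe_score("user_42", "clicked", a))
print(ranked)
```

Such a score captures one-hop plausibility between a user and an item, but it yields no reasoning path that could be shown to the user, which is precisely the limitation that the path-based and reinforcement learning formulations below aim to address.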
We argue that an intelligent recommendation agent should explicitly reason over knowledge graphs for decision-making rather than simply embedding the graph as latent vectors for similarity matching. In this paper, we treat knowledge graphs as a flexible structure to maintain the agent's knowledge about users, items, other entities, and their relationships. The agent initiates the process with a user and conducts explicit multi-step path reasoning over the graph, discovering suitable items for recommendation. This approach allows for the reasoning process to be easily interpreted, providing causal evidence for the recommended items. Our goal is not only to select a set of candidate items for recommendation but also to provide the corresponding reasoning paths as interpretable evidence for each recommendation. To address the limitations of previous work, we propose an approach that casts the recommendation problem as a deterministic Markov Decision Process (MDP) over the knowledge graph. We employ a Reinforcement Learning (RL) method, wherein an agent begins with a given user and learns to navigate to potential items of interest. The path in the KG then serves as an explanation for why the item should be recommended to the user. This approach presents three main challenges: measuring the correctness of an item for a user, efficiently exploring promising reasoning paths in the graph, and preserving the diversity of both items and paths during exploration. To tackle these challenges, we propose a KG-driven RL-based approach. The benefit of our approach is that it can also work when reviews or ratings of the items are not available and only click information is available to learn the user preferences. Our experimental results demonstrate that our proposed method consistently outperforms state-of-the-art recommendation techniques, we present qualitative case studies to demonstrate the explainability of our approach, providing insights into the reasoning paths and decision-making processes of the recommendation agent. These case studies showcase the interpretability of our method, allowing users to better understand the rationale behind the recommendations. In summary, our research contributes to the growing body of literature on knowledge graph-based recommendation systems, specifically in the financial domain. By proposing a novel reinforcement learning approach and conducting a comparative study with the XGBoost algorithm, we offer valuable insights into the potential of knowledge graphs for improving the performance and explainability of personalized recommendation systems. Our development of a KG-driven XGBoost recommendation system further demonstrates the versatility and applicability of knowledge graph techniques in the field of recommendation. By developing a KG-driven XGBoost recommendation system alongside our reinforcement learning approach, we aim to showcase the flexibility and potential of knowledge graph-based techniques in addressing a wide range of recommendation scenarios. Our comparative study between the two approaches not only provides insights into their respective strengths and limitations but also highlights the importance of tailoring recommendation algorithms to specific application contexts and requirements. We have made public the source code of both the proposed approaches via a GitHub link [<https://github.com/GhanshyamVerma/Explainable-Recommender-System>.]. Our main contributions are as follows: (1) Automatic KG creation using structured and unstructured data. 
(2) Use of KG for building an XGBoost-based recommender system that can exploit click information. (3) Use of KG for building an RL-based recommender system that can exploit click information. (4) Explainability module that can explain the results. The rest of the paper is structured as follows. In Section <ref>, we describe the existing methods for building recommender systems. Section <ref> describes the methodology. Section <ref> describes the experimental setup. In Section <ref>, we discuss and compare results in detail. Finally, we conclude in Section <ref>. § RELATED WORK §.§ Collaborative Filtering Collaborative Filtering (CF) has been a cornerstone in the development of recommender systems. Early approaches to CF focused on the user-item rating matrix and predicted ratings using user-based <cit.> or item-based <cit.> collaborative filtering methods. These approaches calculated similarities between users or items to generate recommendations. As dimension reduction methods advanced, latent factor models, such as matrix factorization, gained widespread adoption in recommender systems. Prominent techniques include singular value decomposition <cit.>, non-negative matrix factorization <cit.>, and probabilistic matrix factorization <cit.>. These methods essentially learn a latent factor representation for each user and item to calculate the matching score of user-item pairs. In recent years, deep learning and neural models have further extended collaborative filtering, leading to two main sub-categories: similarity learning and representation learning. The similarity learning approach adopts relatively simple user/item embeddings (e.g., one-hot vectors) and learns a complex prediction network as a similarity function to compute user-item matching scores <cit.>. In contrast, the representation learning approach focuses on learning richer user/item representations, while using a simple similarity function (e.g., inner product) for score matching <cit.>. However, the recommendation results generated by latent factor or latent representation models can be difficult to explain, which has led to a growing interest in explainable recommendation [19, 20]. The challenge of making recommendations more interpretable has driven researchers to explore various techniques and approaches that offer both high-quality recommendations and meaningful explanations for the user-item associations. In response to the challenges posed by the lack of interpretability in traditional collaborative filtering approaches, researchers have started to explore hybrid recommender systems that combine the benefits of CF methods with other techniques, such as knowledge graph-based methods <cit.>. These hybrid systems aim to improve the quality of recommendations while also providing more interpretable and explainable results. Knowledge graphs provide a structured representation of information, making it easier to reason about the relationships between entities and draw meaningful connections. By incorporating knowledge graphs into the recommendation process, researchers can develop systems that offer both high-quality recommendations and interpretable explanations for user-item associations. The field of collaborative filtering-based recommender systems has seen significant advancements over the years, with a growing emphasis on integrating additional sources of information and enhancing interpretability. 
The exploration of hybrid systems, such as those that combine collaborative filtering with content-based filtering or knowledge graph-based methods, holds promise for the development of more accurate, personalized, and explainable recommendations. §.§ Knowledge Graph-driven Recommender Systems Knowledge Graph-driven Recommender Systems (KGRS) have recently gained attention due to their ability to provide explainable and high-quality recommendations. Researchers have explored different ways to incorporate knowledge graph embeddings into recommender systems to improve recommendation performance and interpretability. One research direction focuses on leveraging knowledge graph embeddings as rich content information to enhance recommendation performance. For example, Zhang et al. <cit.> utilized knowledge base embeddings to generate user and item representations for recommendation purposes. Huang et al. <cit.> employed memory networks over knowledge graph entity embeddings for recommendation. Wang et al. <cit.> proposed a ripple network approach for embedding-guided multi-hop KG-based recommendation, which allows for the exploration of connections between entities in the knowledge graph. Another research direction aims to leverage the entity and path information in the knowledge graph to make explainable decisions. Ai et al. <cit.> incorporated the learning of knowledge graph embeddings for explainable recommendation, but their explanation paths are essentially post-hoc explanations, as they are generated by soft matching after the corresponding items have been chosen. Wang et al. <cit.> proposed an RNN-based model to reason over KGs for recommendation. However, this approach requires enumerating all possible paths between each user-item pair for model training and prediction, which can be impractical for large-scale knowledge graphs. The field of Knowledge Graph-driven Recommender Systems has witnessed significant progress in recent years. Researchers are exploring different approaches to incorporate knowledge graph embeddings and entity relationships to enhance recommendation performance while providing interpretable and explainable results. Future work in this area will likely focus on developing more efficient and scalable methods for reasoning over large-scale knowledge graphs and further improving the quality and explainability of recommendations. Some researchers have focused on leveraging the structural properties of knowledge graphs to improve recommendation performance. For instance, Wang et al. <cit.> developed a graph attention network that incorporates both the relational information and entity features in a knowledge graph for recommendation. This approach allows for more accurate and context-aware recommendations by attending to the most relevant relations and entities for a given user-item pair. In addition to using knowledge graph embeddings, researchers have also explored incorporating external knowledge sources and incorporating user-item interactions into the knowledge graph. Cao et al. <cit.> proposed a unified framework for incorporating user-item interactions and external knowledge sources into the knowledge graph, which improved the quality of recommendations by capturing the complex interplay between these elements. Schlichtkrull et al. <cit.> introduced a relational graph convolutional network (R-GCN) that learns embeddings for both entities and relations in a knowledge graph. 
This method can be used in a wide range of applications, including recommender systems, by exploiting the rich information present in the knowledge graph structure. The research area of Knowledge Graph-driven Recommender Systems has experienced significant advancements, with researchers exploring various methods to utilize knowledge graph embeddings, external knowledge sources, and user-item interactions to improve the quality and explainability of recommendations. As more efficient and scalable techniques are developed, KGRS will continue to evolve and provide increasingly accurate, personalized, and explainable recommendations. §.§ Reinforcement Learning based Recommender Systems Reinforcement Learning (RL) has garnered considerable interest in the research community, with numerous successful applications in various domains, including recommender systems. Researchers have explored RL-based recommender systems in both non-KG settings and KG settings for a range of tasks. In non-KG settings, RL has been applied to various types of recommender systems, such as ads recommendation <cit.>, news recommendation <cit.>, and post-hoc explainable recommendation <cit.>. These applications have demonstrated the potential of RL to adapt to changing user preferences and generate personalized recommendations based on user interactions. In the context of knowledge graphs, researchers have primarily focused on utilizing RL for tasks such as question-answering (QA). For instance, Xiong et al. <cit.> leveraged reinforcement learning for path-finding in knowledge graphs, while Das et al. <cit.> proposed MINERVA which makes use of a KG and trains a model for question answering. Lin et al. <cit.> introduced RL-based models for KG question answering with reward shaping. These approaches formulate multi-hop reasoning as a sequential decision-making problem, taking advantage of the structure and information present in knowledge graphs. However, to the best of our knowledge, there has been limited research on utilizing RL in knowledge graphs specifically for the task of recommendation, especially when considering the challenge of navigating an extremely large action space as the number of path hops grows. This opens up a promising research direction for developing RL-based recommender systems that can exploit the rich information present in knowledge graphs while efficiently navigating large action spaces to provide personalized and explainable recommendations. Reinforcement learning presents a promising avenue for recommender systems, particularly when combined with the rich information present in knowledge graphs. By exploring novel techniques for managing large action spaces, incorporating graph neural networks, and leveraging transfer learning, researchers can continue to push the boundaries of RL-based recommender systems, providing increasingly accurate, personalized, and explainable recommendations. § METHODOLOGY The problem addressed in this research is to provide a new type of recommendation, called Knowledge Graph Driven Explainable Recommendation (KGDExR), that simultaneously performs item recommendation and path finding based on rich and heterogeneous information in the knowledge graph. The goal is to find a recommendation set of N items for a given user u from a subset of Item entities 𝐈 connected to User entities 𝐔 through relations r_ui in The knowledge graph 𝐆. 
The recommendation set should be associated with one reasoning path p_j(u, i_n) (2 ≤ j ≤ J) for each pair (u, i_n) of user and recommended item, where j is the number of hops in the path and J is a given integer. The number of recommendations, N, is also given as an input. The knowledge graph 𝐆 is defined as a set of triples (e^h, r, e^t), where e^h is the head entity and e^t is the tail entity, with e^h, e^t ∈ 𝐄 and r ∈ 𝐑; here 𝐄 is the entity set and 𝐑 is the relation set. A j-hop path from entity e_0 to entity e_j is defined as a sequence of j+1 entities connected by j relations, denoted by p_j(e_0, e_j) = { e_0 ↔_{r_1} e_1 ↔_{r_2} ⋯ ↔_{r_j} e_j }. The KGDExR problem can be formalized as finding a set of N items {i_n}_{n ∈ [N]} ⊆ 𝐈 for a given user u and integers J and N, such that each pair (u, i_n) is associated with a reasoning path p_j(u, i_n) (2 ≤ j ≤ J). §.§ KG-Driven Reinforcement Learning based Recommender System We use the Markov Decision Process (MDP) framework to address the KGDExR problem. To ensure path connectivity, we supplement the graph 𝐆 with two distinct types of edges. Primarily, reverse edges are included, such that if (e^h, r, e^t) ∈ 𝐆, then (e^t, r, e^h) ∈ 𝐆, aiding in the path definition. The state at a given step t, denoted as s_t, is represented as a triplet (e_u, e_{s_t}, h_t), where e_u ∈ 𝐔 denotes the initial user entity, e_{s_t} indicates the entity the agent has reached at step t, and h_t refers to the history before step t. We define the k-step history as the combination of all entities and relations in the previous k steps, i.e., h_t^k = { e_{t-k}, r_{t-k+1}, e_{t-k+1}, …, e_{t-1}, r_t }. Given some user u, the initial state is represented as s_0 = (e_u, e_u, ∅) and the terminal state is represented as s_T = (e_u, e_T, h_T). The action space A_t at state s_t is defined as the set of all outgoing edges (r, e) of the entity e_t. Some nodes in the KG can have a very large out-degree, which makes it inefficient to maintain the full action space. Therefore, we perform an action-pruning step based on a scoring function f((r, e) | u), which maps an outgoing edge (r, e) to a real-valued score conditioned on the given user <cit.>. A user-defined integer α upper-bounds the size of the pruned action space; for our experiments, we set α = 3. For a given user, a simple binary reward function is not appropriate, as we do not know in advance whether the agent has reached a target item. Therefore, the agent needs to find as many reasoning paths as possible. We assign a reward only to the last state s_T of a path. The reward R_T is defined as R_T = max( 0, f(u, e_T) / max_{i ∈ 𝐈} f(u, i) ) if e_T ∈ 𝐈, and R_T = 0 otherwise. In accordance with the underlying properties of the graph, the state in our recommendation system is determined by the entity's position. Given a state s_t = (e_u, e_t, h_t) and an action a_t = (r_{t+1}, e_{t+1}), the transition to the next state s_{t+1} is deterministic: P[ s_{t+1} = (e_u, e_{t+1}, h_{t+1}) | s_t = (e_u, e_t, h_t), a_t = (r_{t+1}, e_{t+1}) ] = 1. The only exception is the initial state s_0 = (e_u, e_u, ∅), which introduces stochasticity, since it depends on the starting user entity. To simplify the model, we assume a uniform distribution over users, so that each user is equally likely to be sampled at the beginning of an episode. Building upon this MDP formulation, our primary objective is to learn a stochastic policy π that maximizes the expected cumulative reward.
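To make these components concrete, the following minimal Python sketch illustrates the pruned action space and the terminal reward R_T defined above. The helper names (outgoing_edges, edge_score, match_score) are illustrative assumptions for exposition and are not part of our released implementation:

def pruned_actions(user, entity, outgoing_edges, edge_score, alpha=3):
    """Keep the alpha highest-scoring outgoing edges (r, e) of `entity`.

    outgoing_edges(entity) -> list of (relation, tail_entity) pairs
    edge_score(user, relation, tail_entity) -> real-valued score f((r, e) | u)
    """
    edges = sorted(outgoing_edges(entity),
                   key=lambda re: edge_score(user, re[0], re[1]),
                   reverse=True)
    return edges[:alpha]

def terminal_reward(user, e_T, items, match_score):
    """R_T = max(0, f(u, e_T) / max_i f(u, i)) if e_T is an item entity, else 0."""
    if e_T not in items:
        return 0.0
    best = max(match_score(user, i) for i in items)
    return max(0.0, match_score(user, e_T) / best)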
We define the expected cumulative reward over all the paths traversed by a user as J(θ) = 𝔼_{e_0 ∈ 𝐔} [ 𝔼_{a_1, a_2, …, a_T ∼ π_θ(a_t | s_t)} [ R_T ] ]. To maximize the expected cumulative reward, we use gradient ascent, with gradients derived via REINFORCE <cit.>, i.e., ∇_θ J(θ) ≈ ∇_θ ∑_t R_T log π_θ(a_t | s_t). The final step of our recommendation solution uses the trained policy network to guide the exploration of the knowledge graph. Our objective is to find a set of candidate items and their corresponding reasoning paths for a given user. One approach is to sample paths for each user based on the policy network's guidance. However, this method may lack path diversity, because the agent tends to repeatedly search the path with the highest cumulative reward. To address this, we propose the Path Directed Reasoning (PDR) algorithm, which considers both action probability and reward to explore candidate paths and recommended items for each user. The process is outlined in Algorithm 1. The algorithm takes as inputs the KG, the user, and the policy network. The output is a set of T-hop paths for the user, along with their generative probabilities and rewards; each path ends with an item entity and its associated generative probability and reward. Among the candidate paths, there may be multiple paths between the user and an item. To interpret the reasoning behind why an item is recommended to the user, we select the path with the highest generative probability from the candidate set. Finally, we rank the selected interpretable paths by their path rewards and recommend the corresponding items to the user. §.§ KG-Driven XGBoost based Recommender System XGBoost (eXtreme Gradient Boosting) <cit.> is an ensemble learning algorithm that has become a popular and effective method for a wide range of machine learning tasks, including classification, regression, and ranking. XGBoost builds a set of decision trees iteratively, using a gradient boosting approach to minimize a user-specified loss function. For a dataset D = { (x_i, y_i) | x_i ∈ ℝ^m, y_i ∈ ℝ } with n observations and m features, the XGBoost model uses Z additive functions for prediction <cit.>: ŷ_i = ∑_{z=1}^{Z} f_z(x_i), where f_z ∈ F and F is the space of regression trees, defined as F = { f(x) = w_{q(x)} } with q: ℝ^m → {1, …, T} and w ∈ ℝ^T, where q is the structure of each tree, mapping an observation to the corresponding leaf node of the tree, T represents the number of leaf nodes in the tree, and w represents the leaf weights. For a given observation, the final prediction is computed by summing the weights of the corresponding leaf nodes across all trees. The key idea behind XGBoost is to iteratively add decision trees to the ensemble, with each new tree trained to correct the residual errors of the previous trees. In other words, XGBoost fits the model by adding new trees to the ensemble that improve the overall prediction accuracy, while penalizing trees that are too complex or overfit the data. One of the important features of XGBoost is its support for a wide range of objective functions and evaluation metrics, including common loss functions such as squared error and logistic loss, as well as custom loss functions. XGBoost also includes a variety of regularization techniques to prevent overfitting and improve generalization performance, including L1 and L2 regularization terms, tree depth constraints, and early stopping.
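As a brief, self-contained illustration of these options (using synthetic placeholder data rather than our article-customer features; exact argument names may vary slightly between xgboost versions), an XGBoost model with explicit regularization and early stopping can be configured roughly as follows:

import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X, y = rng.normal(size=(500, 20)), rng.normal(size=500)   # placeholder features and targets
X_train, y_train = X[:400], y[:400]
X_valid, y_valid = X[400:], y[400:]

model = xgb.XGBRegressor(
    n_estimators=500,            # up to Z additive trees
    max_depth=6,                 # tree depth constraint
    learning_rate=0.05,
    reg_alpha=0.1,               # L1 penalty on leaf weights
    reg_lambda=1.0,              # L2 penalty on leaf weights
    objective="reg:squarederror",
    early_stopping_rounds=20,    # stop adding trees when validation error stalls
)
model.fit(X_train, y_train, eval_set=[(X_valid, y_valid)], verbose=False)
print(model.best_iteration)      # number of trees actually kept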
For our initial experiments, we evaluated three gradient-boosted rankers to predict the ranking of articles for users: the XGBoost ranker <cit.>, the CatBoost ranker <cit.>, and the LightGBM ranker <cit.>. CatBoost <cit.> is a recent library known for its efficacy in handling categorical features, and it employs YetiRank <cit.> as the loss function. LightGBM <cit.> handles categorical features and optimizes the LambdaRank loss. We trained the XGBoost ranker <cit.> with the Rank Pairwise loss, using one-hot encoding. During our initial experiments, the XGBoost ranker outperformed the other two rankers; we therefore selected it for our KG-driven XGBoost-based recommender system. We used the XGBoost ranker in combination with KGs generated from the article text and the other article features to build the XGBoost-based recommender system. The generated KGs are then used as input to TuckER and TransE to produce 300-dimensional KG embeddings. These embeddings, along with customer demographic data and article features, are used to train the KG-driven XGBoost-based recommender system. § EXPERIMENTAL SETUP In this section, we provide information on KG creation, KG embedding generation, and the data sets used in this work. §.§ Automatic KG Generation To automatically generate KGs from the targeted unstructured data sets, we used two approaches. The first approach makes use of external lexical resources, such as ConceptNet <cit.>, to connect terms and enrich the taxonomy. The second approach differs in that it requires neither training nor any external resource, instead using the domain knowledge available within the input data to extract relations. §.§.§ ConceptNet-based approach ConceptNet <cit.> is a knowledge graph that encompasses entities from various domains along with their corresponding relationships. For this study, we specifically focus on three relationship types: IsA, PartOf, and Synonym. The "IsA" relationship signifies hypernymy relations, while "PartOf" represents meronymy relations, and "Synonym" indicates synonymy relations. To generate a dataset for hyponymy relations, we inverted the direction of relations labeled as hypernyms. All other relations in ConceptNet were grouped together as "other." The training dataset was created by including all extracted relationships. The system architecture is based on BERT <cit.>, employing 12 transformer blocks. The embeddings utilized are extracted from the transformer in the 12th layer. Pretrained embeddings from the BERT model "uncased_L-24_H-1024_A-16" are employed, which are readily available in TensorFlow. We refer to the KG generated using the ConceptNet-based approach as "uKG_CN". §.§.§ Dependency Parsing-based approach The creation of a domain-specific KG with this approach combines the Saffron tool[<https://saffron.insight-centre.org/>] for taxonomy generation with a new algorithm for relation extraction. It uses the syntactic knowledge of sentences in a textual dataset to extract new relations between Saffron terms. After extracting the new relations from the text, we integrate them into the Saffron taxonomy and return a fully formed KG. This approach does not require any training and is domain independent. The dependency parsing-based relation extraction approach extracts relations from the text and exports them as triples (left_relation, relation_type, right_relation).
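The following is a deliberately simplified sketch of this idea, using spaCy as a stand-in dependency parser; it is illustrative only and is not the Saffron-based implementation used in our pipeline (the term list and the root-verb-as-relation heuristic are assumptions):

import spacy

nlp = spacy.load("en_core_web_sm")   # requires: python -m spacy download en_core_web_sm

def extract_triples(text, terms):
    """Emit (left_term, relation, right_term) triples for pairs of known terms
    in the same sentence, using the lemma of the dependency-tree root verb as
    a crude relation label."""
    triples = []
    for sent in nlp(text).sents:
        found = [t for t in terms if t in sent.text.lower()]
        root = sent.root                     # head of the sentence's dependency tree
        if len(found) >= 2 and root.pos_ == "VERB":
            for left, right in zip(found, found[1:]):
                triples.append((left, root.lemma_, right))
    return triples

print(extract_triples("The fund invests in government bonds.",
                      ["fund", "government bonds"]))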
The approach applies dependency parsing (syntactic analysis of the sentences) to the text to find how terms are syntactically (and, by extension, semantically) connected within sentences. It takes as input the terms extracted by Saffron <cit.>, as well as the dataset originally used to extract the Saffron terms and the taxonomy, and returns a list of triples: (term1, relation, term2). The whole implementation is done in Python. We refer to the KG generated using the dependency parsing-based approach as "uKG_DP". We have also created a KG, referred to as "uKG", from unstructured data alone. This KG contains only each article and its relations to the most frequent terms found within the article; to compute the term frequencies, we utilized TF-IDF. §.§.§ KG creation using both structured and unstructured data (cKG) We have already defined the KGDExR problem and provided the definition of a KG in Section <ref>. Here, we illustrate how we constructed a KG using both structured and unstructured data (the combined KG, cKG). The features of the structured data, such as 'user', 'article', 'topic', 'product', 'topic_tag', 'product_tag', 'response', etc., serve as the types of nodes or entities in the KG. These entities are connected to other entities through relations such as 'has_topic', 'has_product', 'has_topic_tag', 'has_product_tag', and 'has_response'. Additionally, we utilized the full text of the article, which represents the unstructured data, to create this KG. Therefore, this KG leverages both structured and unstructured data. The recommendation process begins with a user, traverses through specific entities and their associated relations, and ultimately leads to an item, which in our case is the recommended article for that user. We name the KG generated using the combined structured and unstructured data "cKG". §.§ Knowledge Graph Embeddings In a given KG, each head or tail entity can be represented as a point in a continuous vector space. In this work, we use the TuckER <cit.> and TransE <cit.> methods to generate KG embeddings. TuckER employs a three-way Tucker tensor decomposition, which computes a core tensor T and three matrices built from the embeddings of entities (E_head and E_tail) and relations (R) between them, such that G ≈ T ⊗ E_head ⊗ R ⊗ E_tail. The underlying idea of TransE is to interpret relations as translations between entities in the knowledge graph. In TransE, each entity and relation is assigned a unique vector representation in the embedding space, and the objective of the model is to learn these embeddings in such a way that the translation of a head entity embedding by a relation embedding lies close to the embedding of the corresponding tail entity. These methods allow us to create the KG embeddings that are used to train our recommender systems. §.§ Data sets The dataset used in this study contains the data of the customers of a large multinational financial services company and the viewpoint articles sent to these customers by the company. The dataset spans from January 30th, 2019 to October 30th, 2019, and contains information on 463 customers who received approximately 80 articles each during this period. The dataset consists of 37,423 rows, detailing individual customer-article interactions. It includes a total of 71 articles, with 66 unique articles, providing details related to the products and services that the financial services company provides.
This dataset serves as a valuable resource for researchers and marketers interested in understanding customer behavior and preferences, as well as identifying opportunities for targeted content and marketing strategies. We used this dataset for the evaluation of our KG-driven RL-based approach and KG-driven XGBoost approach for recommending articles to customers. The dataset is divided into training and test sets with a 70:30 ratio. We have also made this data set publicly available on a GitHub repository [<https://github.com/GhanshyamVerma/Explainable-Recommender-System>.]. § RESULTS We have produced results using both the KG-driven XGBoost approach and the KG-driven reinforcement learning approach. Table <ref> presents the results obtained using the proposed approaches, together with the KG embeddings used for model building. From Table <ref>, we can see that the baseline XGBoost model with sentence transformer embedding [all-MiniLM-L6-v2] achieved a 30.38% MAP score. We observed improvements in performance when we used KG embeddings compared to when KG embeddings were not used (see Table <ref>). We constructed two KGs using unstructured data (article text) through Saffron <cit.>, as mentioned in Section <ref>. These KGs are "uKG_DP" and "uKG_CN", where u denotes unstructured data, DP denotes dependency parsing, and CN denotes ConceptNet. Additionally, we created a KG referred to as "cKG" from both structured and unstructured data, as explained in Section <ref>. The rationale behind using the cKG with the RL-based approach is that it helps in generating explainable recommendations using paths in the cKG. For the RL-based approach, we used KG embeddings generated using TransE, as shown in Table <ref>. We also compared the performance of our proposed approaches with state-of-the-art existing recommender systems. The existing recommender systems we used are: BPR (Bayesian personalized ranking), the Neighborhood-based Recommender System, NCF (Neural Collaborative Filtering), and XGBoost with sentence embedding. We observed that BPR achieved a MAP score of 11.21%, whereas the KG-driven XGBoost approach (cKG) and the KG-driven RL-based approach using the same cKG achieved 34.47% and 43.76% MAP scores, respectively. The KG-driven XGBoost approach with the KG generated using ConceptNet achieved a MAP score of 38.98% with a recall of 74.38%. The results suggest that if recall is important for an application, then KG-driven XGBoost with uKG_CN can be considered as an option, as it provides the highest recall. Based on the results, it can be observed that the KG-driven RL-based approach outperformed the BPR, Neighborhood-based Recommender System, NCF, and KG-driven XGBoost approaches when considering the MAP score. Additionally, among all the experiments conducted with KG embeddings, the KG embeddings generated from TransE have proven to capture useful information, resulting in better performance compared to TuckER embeddings. Our KG-driven RL-based approach is explainable. To gain a better understanding of our model's interpretation of the recommendation, we present a case study based on the results obtained from our experiments. In this study, we analyze the path patterns uncovered by our model during the reasoning process, as well as examine different recommendation scenarios. As shown in Figure <ref>, the article highlighted with a blue dashed boundary is the article recommended by our RL-based model to a user.
We can see that the recommended article shares similarities with another article that was previously recommended to and clicked by that user; the model therefore infers that this article is likely to be relevant, since the user has shown interest in similar articles before. Furthermore, our RL-based approach enables us to offer the top 10 articles for each user. Additionally, it can provide all the associated articles in the path that leads to the outcome, along with shared products, topics, and the most frequent common terms found in the text of the articles present in the path. Our RL-based approach can provide such a path for each item recommended to a user; these paths explain the results and play an important role in decision-making. To generate post-hoc explanations for the KG-driven XGBoost-based approach, we used SHAP <cit.> and ELI5[<https://github.com/TeamHG-Memex/eli5>]. SHAP (SHapley Additive exPlanations) is a model-agnostic method used for explaining the output of machine learning models. It is based on game theoretic concepts and provides an explanation for each feature's contribution to the model's prediction. SHAP values quantify the impact of each feature by assigning a value to it, indicating how much it contributes to the prediction compared to the average prediction. SHAP relies on the concept of Shapley values from cooperative game theory and considers additive feature importance. Figure <ref> represents the KG-XGBoost [uKG_CN] model's features with their average impact on the model output generated by SHAP. ELI5 (Explain Like I'm 5) is a Python library for explaining machine learning models. ELI5 focuses on understanding the overall behavior and importance of features in making predictions, and it reports feature importance using the "permutation importance" algorithm. Figure <ref> shows the feature importances of the KG-XGBoost [uKG_CN] model as reported by ELI5, which assigns weights to features based on their impact on the model output. Both SHAP and ELI5 show that click_frequency, kg_26, article_length, kg_32, Kg_3, and Kg_45 are the most important features that contributed most to the model results. Overall, the proposed approaches provide insights that help users understand the recommendations while also performing better than the existing baseline recommender systems. § CONCLUSION This research paper explores the use of knowledge graphs (KGs) to enhance personalized recommendations in the financial sector. We developed two KG-driven recommender systems for a large multinational financial services company: the first employs Reinforcement Learning (RL), while the second utilizes the XGBoost algorithm. The XGBoost-based approach uses KG embeddings generated from both TuckER and TransE, and the RL-based approach uses TransE-generated embeddings. We also performed experiments in which the KG and the embedding method were kept the same across approaches. The findings suggest that the KG-driven RL-based approach outperforms both the KG-driven XGBoost system and baseline models, delivering more accurate and personalized article recommendations. Additionally, the study emphasizes the importance of reasoning with knowledge for decision-making. Overall, this study highlights the potential of combining advanced machine learning techniques with KG-driven insights to improve customer experience and drive business growth in the investment sector.
This publication has emanated from research supported in part by a grant from Science Foundation Ireland under Grant number SFI/12/RC/2289_P2. For the purpose of Open Access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission.
http://arxiv.org/abs/2307.04357v1
20230710055929
Survey-scale discovery-based research processes: Evaluating a bespoke visualisation environment for astronomical survey data
[ "C. J. Fluke", "D. Vohl", "V. A. Kilborn", "C. Murugeshan" ]
astro-ph.IM
[ "astro-ph.IM", "astro-ph.GA" ]
Next generation astronomical surveys naturally pose challenges for human-centred visualisation and analysis workflows that currently rely on the use of standard desktop display environments. While a significant fraction of the data preparation and analysis will be taken care of by automated pipelines, crucial steps of knowledge discovery can still only be achieved through various levels of human interpretation. As the number of sources in a survey grows, there is a need to both modify and simplify repetitive visualisation processes that need to be completed for each source. As tasks such as per-source quality control, candidate rejection, and morphological classification all share a single instruction, multiple data (SIMD) work pattern, they are amenable to a parallel solution. Selecting extragalactic neutral hydrogen (Hi) surveys as a representative example, we use system performance benchmarking and the visual data analysis and reasoning (VDAR) methodology from the field of information visualisation to evaluate a bespoke comparative visualisation environment: the encube visual analytics framework deployed on the 83 Megapixel Swinburne Discovery Wall. Through benchmarking using spectral cube data from existing Hi surveys, we are able to perform interactive comparative visualisation via texture-based volume rendering of 180 three-dimensional (3D) data cubes at a time. The time to load a configuration of spectral cubes scales linearly with the number of voxels, with independent samples of 180 cubes (8.4 Gigavoxels or 34 Gigabytes) each loading in under 5 minutes. We show that parallel comparative inspection is a productive and time-saving technique that can reduce the time taken to complete SIMD-style visual tasks currently performed at the desktop by at least two orders of magnitude, potentially rendering some labour-intensive desktop-based workflows obsolete. § INTRODUCTION Next generation astronomical surveys will pose challenges for a range of human-centred visualisation and analysis workflows that currently rely on the use of standard desktop display environments. Knowledge discovery activities that were, or perhaps still are, feasible for a human to perform when the quantity (i.e. volume) or rate (i.e. velocity) of data available was low are becoming more reliant on automated or autonomous solutions. While desktop computing has already been augmented through the adoption of supercomputing and cloud-style remote services, the visualisation and display of astronomical data is still strongly dependent on the utilisation of laptop screens or monitors located in the astronomer's office. To address the specific needs of individual astronomers, and astronomical research teams, a collection of data analysis and visualisation tools is required. This includes continuing to take full advantage of existing, well-established options that are able to be scaled-up effectively, along with developing and assessing the potential of novel solutions or systems that either provide extra functionalities, or that can be connected into extensible workflows (e.g. virtual observatory model). §.§ Comparative visualisation Seeing many sources together – comparative visualisation – is an approach that naturally supports pattern-finding (“those galaxies all show similar kinematic properties”) and anomaly detection (“why is that one source so different to everything else?”). Such multi-object comparisons might include quality control activities (e.g.
assessing whether a source finder or automated calibration pipeline is functioning as expected by selecting a sample of sources for assessment, which might include fine-tuning to check or verify a machine learning algorithm), investigating outcomes of model-fitting (e.g. examining the residual signal once different types of kinematic models are applied), or any of a range of standard analysis tasks that can be performed based on morphological or environmental selection criteria (e.g. field compared with cluster galaxies, dwarf galaxies versus grand design spirals, or the discovery of novel classes of objects when a new discovery space is opened). We will refer to all such activities as survey-scale discovery-based research processes, as the purpose is to explore data in order to make sense of it [see the model of “sensemaking” presented by <cit.>, and applied in Section <ref>]. Limited scope for comparative visualisation can occur by either loading data into several independent instances of a visualisation tool (usually on the same computing platform) or by switching between individual views of multiple objects, requiring loading and unloading of data. When working with large-scale survey data, desktop-based visualisation strategies may lead to a reduction in the ability for an individual to see patterns across a sizeable portion of the survey. In practice, effective comparative visualisation cannot be achieved by moving between visualisations of one or two objects at a time. At each stage, there is a loss of time to input/output, and a strong reliance on the visual recall abilities of the astronomer [see <cit.> for a related discussion]. Individual instances are unlikely to have linked camera actions (e.g. panning, rotation, zoom, scaling), requiring the use of repetitive interaction processes. Moreover, if performed at the desktop, the small physical display space of a standard monitor is not always conducive to real-time, collaborative inspection for those researchers who prefer, or find it more productive, to work this way. §.§ Single instruction, multiple data work patterns Survey-scale discovery-based research processes, such as those described above, are all highly repetitive, and may need to be completed for each individual source. Many repetitive research processes share a single instruction, multiple data (SIMD) work pattern, and so are amenable to a parallel solution. One approach to the parallelisation of human-centred visualisation and analysis tasks is to share the work out amongst multiple team members [e.g. as occurred while preparing catalogues for the Hi Parkes All Sky Survey – see <cit.> and <cit.>], or further afield via crowd-sourcing of citizen scientists <cit.>. A limitation to these distributed processes is one of consistency in decision-making between team members with diverse skill levels [see, for example, <cit.>]. An investment in training may be required, or a complex task must be abstracted to one of group-consensus classification. Furthermore, while serendipitous discoveries do occur in citizen science activities, that is not the norm. An alternative is to change the viewing paradigm, so that a more suitable mode of parallel inspection by a single researcher, or co-located team, can be achieved. 
This is the approach we investigate in this work using encube[Long term access to open source software described by <cit.>.]: a visual analytics framework for collaborative and comparative visualisation, designed to work on a multi-monitor tiled display wall and dedicated compute nodes <cit.>. Figure <ref> shows encube operating on the Swinburne Discovery Wall (see Section <ref>), providing simultaneous display of 80 spectral cubes sampled from three extragalactic neutral hydrogen (Hi) surveys (described in more detail in Section <ref>). §.§ The visual data analysis and reasoning methodology In order to best utilise non-standard or novel visualisation systems, it is important to understand their strengths and weaknesses. The suitability of any visualisation approach or environment – software or hardware, standard or bespoke – should be examined or evaluated using appropriate methodologies. Looking to the broader field of information visualisation, such evaluations can include investigation of either the process of visualisation or the nature of visualisation systems and algorithms <cit.>. For our investigation of survey-scale discovery-based research processes, we select the empirical visual data analysis and reasoning (VDAR) methodology. A VDAR evaluation is usually approached via a case study: a cohort of experts assess their ability to derive knowledge about a relevant dataset while using a new visualisation system, software or strategy to perform domain-specific tasks <cit.>. As our relevant dataset, we utilise existing extragalactic Hi survey data (see Section <ref>), available as an ensemble of spectral cubes (two spatial dimensions and one spectral dimension). We consider three representative survey-scale discovery-based research processes that can occur in the preparation and analysis of large-scale extragalactic Hi surveys: * Quality control of individual sources, ensuring that calibrations have been applied correctly and bad channels (e.g. impacted by interference or instrumental features) have been flagged or removed; * Candidate rejection, whereby false-positive detections from automated source finders are identified and removed from the catalogue. This can also help to improve training sets of “non-source” examples for use with machine learning and related automated methods; and * Morphological classification, identifying and sorting sources into categories based on observed structural, kinematic or environmental properties. The classification process may also include anomaly detection, wherein unexpected discoveries are made based on the observed structural properties. Through a mix of visual analytic functionalities, including interactive three-dimensional (3D) volume rendering methods, encube provides ways to explore both spatial and spectral features, which can be matched to other observed or derived parameters. A 3D approach can help to reveal complex kinematic structures or system artefacts that might otherwise appear only in projection using moment maps or position-velocity diagrams. We choose to perform our evaluation with 3D methods as they: (1) are the current defaults within the public encube code; (2) present an upper bound in terms of the computation required for benchmarking purposes; and (3) provide the VDAR user cohort with access to novel comparative sensemaking strategies via the Swinburne Discovery Wall. 
For other applications, alternative data visualisation modes such as moment maps[A camera projection parallel to any axis of a spectral cube can be used to generate a two-dimensional (2D) projection of the data <cit.>, and hence can be used to generate 2D solution space representations while still retaining access to the full representation of the data in memory for fast calculations using graphics shaders.] or scatter plots could be utilised as they are supported by the underlying visualisation framework. §.§ Overview In this paper, we consider a specific visualisation problem that is not feasible to address using a desktop-based visualisation solution: interactive, comparative visualisation of ≥100 data instances. We evaluate the practicality of using a bespoke visualisation environment (viz. encube and the Swinburne Discovery Wall) for survey-scale discovery-based research processes through: (1) system benchmarking, which provides quantitative information on system performance and scalability; and (2) a visual data analysis and reasoning study. For five different display configurations, supporting simultaneous visualisation of 20, 40, 80, 120 or 180 spectral cubes, selected from representative extragalactic Hi survey datasets, we report benchmarking in terms of the two most critical factors: (1) the time taken to load an ensemble of spectral cubes; and (2) the typical minimum interactive frame rate. Together, these values allow us to estimate the visualisation throughput, V_ tp (sources/hour), that might be achieved by a single user when undertaking SIMD tasks such as quality control, candidate rejection or morphological classification. Compared to the serial case of viewing one data instance at a time on a standard desktop monitor, encube and the Swinburne Discovery Wall could decrease the time taken to complete survey-scale comparative visualisation workflows by a factor of 100 or more. In Section <ref>, we explain the main technical elements of the bespoke visualisation environment. In Section <ref>, we provide background on the extragalactic Hi case study. We evaluate the visualisation environment through system benchmarking (Section <ref>) and via the VDAR evaluation (Section <ref>), which considers three typical discovery-based SIMD activities: quality control, candidate rejection and morphological classification. We present a discussion of our finding in Section <ref>, and present our conclusions in Section <ref>. Further technical and implementation notes can be found in <ref>. Our approach can be generalised to any survey datasets comprising more individual observations or instances than can be comfortably analysed or scrutinised by one investigator on a standard desktop display. This might include two-dimensional images or moment-map projections, optical/infrared spectral cubes (e.g. from integral field spectroscopy), or simulation data products. The comparative visualisation strategies demonstrated here are applicable to any similar SIMD-style activity, and are not restricted to the specific use of encube with the Swinburne Discovery Wall. As an open source solution, users are encouraged to modify the functionality of encube (e.g. in order to provide alternative 2D or 3D visualisation modes or to handle domain-specific data formats) or reconfigure the arrangement of the display environment to suit their own survey-scale discovery-based research needs. 
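As a rough indication of how load time and inspection time combine into such a throughput figure, the sketch below uses a deliberately simple model (a fixed per-configuration inspection time); this model is an assumption made for exposition only and is not the definition adopted in the later sections:

def visualisation_throughput(n_cube, t_load_s, t_inspect_s):
    """Approximate V_tp in sources/hour for one configuration of n_cube cubes,
    assuming the user spends t_inspect_s seconds inspecting the loaded ensemble."""
    return 3600.0 * n_cube / (t_load_s + t_inspect_s)

# e.g. 180 cubes loading in under 5 minutes and inspected for 10 minutes:
print(visualisation_throughput(180, 300.0, 600.0))   # ~720 sources/hour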
§ A BESPOKE COMPARATIVE VISUALISATION ENVIRONMENT In this section, we provide a technical overview of the two main components of the bespoke comparative visualisation environment used in this work: (1) the encube framework, which enables visualisation of multiple data instances (in the form of spectral cubes for our case study); and (2) the Swinburne Discovery Wall, a specific instance of a large-area tiled display wall. Encube was conceptualised and developed specifically to support SIMD visualisation and analysis tasks, with an aim to accelerate data-intensive comparative visualisation and discovery workflows. Encube displays multiple individual data visualisations across single or multiple display devices, with interaction coordinated through a user interface on the master node. For related approaches, see the virtual reality implementation of BentoBox <cit.> and the “shelves” metaphor for small-multiples that considers utilisation of immersive space <cit.>. §.§ The encube framework The encube framework <cit.> supports comparative visualisation and analysis of survey data (also referred to as an ensemble in other domains). The primary development emphasis was for structured 3D data: spectral cube data from astronomy and magnetic resonance imaging data from medical imaging. Encube provides an interactive data exploration and analysis experience, employing a strategic mixture of software (data processing, management, visualisation, analysis) and hardware (graphics processing units, computer cluster, displays). Encube is a modular and open-source code base <cit.>, where each module targets a specific set of tasks within a visual analytics workflow: (1) processing and visualisation of data; (2) workflow and communication management; and (3) user interactions. Similar to a microservices-style architecture, the modular design allows individual components to be connected, enhanced or replaced as required, so that encube can be kept compatible with, and scalable to, the requirements of future science operations. For instance, customisable code for 3D visualisation is currently created using the C/C++ languages for good performance with the S2PLOT interactive programming library <cit.>, which builds on the OpenGL[<http://www.opengl.org>] graphics library. From a system architecture standpoint, encube comprises a process layer and an input/output (I/O) layer. The process layer performs data processing tasks (load data, compute statistics, render visualisation), and the I/O layer responds to user inputs and generates visual outputs. Each layer contains units where specified tasks are performed. Depending on the task, a unit can be instantiated once, or multiple times for parallel operation (generally on different compute hardware). In its current form, the encube process layer comprises a single manager unit and one or more process and render units, while the I/O layer contains an interaction unit and one or more display units. Units can communicate with each other in order to pass workflow information across the architecture. The communication pathway between units can be represented as a directed graph [see Figures 2 and 4 of <cit.>]: interaction unit ↕ manager unit ↕ process and render unit ↓ display unit, where the arrows indicate the information flow direction between two unit vertices on the graph. Based on the number of instances of a unit, communication can include serial or parallel messages. We note that peer-to-peer communication within a unit type is not currently implemented (e.g. direct message passing between two interaction units).
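To give a flavour of this manager-mediated message flow, the following purely illustrative Python sketch mimics the unit graph; it is not encube's actual C/C++ implementation, and all class and method names are invented for exposition:

class ProcessRenderUnit:
    def handle(self, command):
        # load data / compute statistics / render, then return a visual result
        return f"rendered({command})"

class DisplayUnit:
    def show(self, result):
        print(result)

class ManagerUnit:
    """Relays commands from the interaction unit to process and render units;
    there is no peer-to-peer messaging between units of the same type."""
    def __init__(self, workers, displays):
        self.workers, self.displays = workers, displays
        self.history = []                      # recorded workflow history / system state

    def dispatch(self, command):
        self.history.append(command)
        for worker, display in zip(self.workers, self.displays):
            display.show(worker.handle(command))

manager = ManagerUnit([ProcessRenderUnit() for _ in range(5)],
                      [DisplayUnit() for _ in range(5)])
manager.dispatch("load WHISP cube; volume render")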
The manager unit orchestrates the overall software workflow. It first reads a configuration file containing network information about the available compute nodes, characteristics of the tiled visualisation output, along with system metadata and the location of the dataset. This unit also schedules and synchronises the workflow, sharing metadata as well as commands with other neighbouring units. Here, the manager unit acts as a messenger between an interaction unit and a process and render unit. Moreover, given that all commands pass through the manager unit, the workflow history and system state can be recorded (if requested) so that actions can be revised, replicated, or continued later. The interaction unit is where a user interacts with the dataset. In particular, the user can specify which data files to load and visualise, change visualisation parameters (e.g. ray-tracing method), select and organise individual visualisations, and request diagnostic plots. The interaction unit provides a “world in miniature” view of the display setup, mapping regions within the user interface to the physical display. Metadata is presented in a table, which can be sorted by categories. Visualisations are generated after selecting rows of the table, either individually or by ordered batch (e.g. sorted by parameters such as distance, size, etc.). Once data is loaded into memory on a process and render unit, visualisation parameters (e.g. histogram thresholds, spatial cropping, colourmap selection) can be updated in real time to modify one or more visualisations. Global or partial statistical values can also be computed on request for selected data files and gathered to summarise properties of a subset. The process and render unit provides functionalities such as loading data files to GPU memory, computing statistics (e.g. mean, standard deviation, histogram), creating visualisation callbacks (e.g. including responses to input via keyboard, mouse, or the remote user interface), and generating the visualisations through texture-based volume rendering. Finally, a visualisation rendered by a process and render unit is displayed on screen via the display unit. A display unit provides a mapping to one or more physical screens via the configuration file read by the manager unit. §.§ The Swinburne Discovery Wall From its inception, encube was designed for use in high-end visualisation environments comprising multiple off-the-shelf displays, i.e. a tiled display wall (TDW). See <cit.> and <cit.> for detailed investigations of the role of TDWs in astronomy. A TDW provides several advantages over a standalone workstation monitor: many more pixels, a greater display area, and, in some cases, access to additional co-located computing power. Initial deployment and testing of encube was undertaken with the CAVE2^ TM hybrid high-performance computing and visualisation space at Monash University [as reported in <cit.>]. The Monash CAVE2^ TM <cit.> comprised 80 stereoscopic-capable displays, with a cylindrical configuration (330 degrees to allow entry and exit from the physical space) of four rows and 20 columns. Collectively, the environment provided 84 million pixels for two-dimensional display and 42 million pixels in stereoscopic mode. The Monash CAVE2^ TM was linked to a real-time compute cluster with a peak of 100 Tflop/s and 240 GB of GPU memory. Additional development, and the activities presented in this work, utilised the Discovery Wall (Figure <ref>) operated at Swinburne University of Technology. 
The Swinburne Discovery Wall is a TDW comprising ten Philips BDM4350UC 4K ultra high-definition (4K-UHD) monitors arranged in a matrix of two rows and five columns. The total pixel count is approximately 83 Megapixels and the accessible screen area is just under 5.0 m^2 (see Table <ref>). Each column of the Discovery Wall is connected to a Lenovo ThinkStation P410 Mini Tower (2.8 GHz, 16 GB RAM) with an NVIDIA GTX1080 graphics card (8 GB). The workstations operate with the CentOS[<http://www.centos.org>] Linux operating system (Version 7.4.1708), noting that we use the version of CentOS that was installed on the Discovery Wall when it was commissioned in 2018. The original iteration of the Swinburne Discovery Wall, which operated until November 2021, had one additional column of two 4K-UHD monitors, such that the total screen area was 6.0 m^2 and the pixel count was closer to 100 million pixels. In December 2021, the Discovery Wall hardware was transferred to a new location, but with insufficient wall-space to accommodate all six columns. Reconfiguration of encube to work on the relocated and reduced-scale Discovery Wall in February 2022 required approximately two minutes to remove references to the sixth Lenovo MiniTower workstation from the encube source and scripts. § CASE STUDY: EXTRAGALACTIC HI ASTRONOMY Consider the specific case of extragalactic Hi astronomy, which is based on observations of the 21 cm (1420.40576 MHz) hyperfine spin flip transition of the hydrogen atom. Theoretically predicted by <cit.>, and first detected by <cit.>, <cit.> and <cit.>, the 21 cm line provides a valuable signature of the neutral gas content of galaxies. Apart from being the primary component from which stars are eventually formed, the Hi gas in galaxies is also typically much more extended than their stellar discs [see <cit.>], making it an important tracer of the effects of both internal properties of galaxies, such as feedback and angular momentum <cit.>, as well as environmental processes such as ram pressure and tidal stripping, to name a few [see <cit.> and <cit.>]. For these reasons, high spatial and spectral resolution studies of the Hi gas distribution in galaxies are paramount for our understanding of galaxy evolution. Historically, extragalactic Hi surveys fall into three broad categories: (1) spectral line observations, using single-dish radio telescopes; (2) spatial mapping with multi-beam receivers <cit.>, whereby it became feasible to undertake spectral-line surveys at a large scale <cit.>; and (3) high-resolution spectral cube observations, utilising aperture synthesis. §.§ Extragalactic neutral hydrogen surveys The number of sources available from Hi surveys is undergoing a step-change. New wide-field and deep surveys have been enabled through instruments and facilities including: * The APERture Tile In Focus (APERTIF) upgrade to the Westerbork Synthesis Radio Telescope (WSRT) – see <cit.>, with Hi survey descriptions in <cit.>, <cit.> and <cit.>; * The Australian Square Kilometre Array Pathfinder (ASKAP) – see <cit.> and Hi survey descriptions for the Widefield ASKAP L-band Legacy All-sky Blind SurveY (WALLABY) in <cit.> and <cit.>; and * MeerKAT <cit.>, with local <cit.> and ultra-deep <cit.> Hi surveys planned. The scale and rate of data collection from these programs provide a first opportunity to prepare for the future of Hi astronomy that will occur with the Square Kilometre Array (SKA).
Using WALLABY as an example, these surveys will produce three main categories of data: * Large-scale survey cubes. Over a period of five years, WALLABY is expected to cover up to 1.4π sr of the sky with ∼ 550 full-resolution spectral cubes. Each cube is anticipated to have 4200 × 4200 spatial pixels and 7776 spectral channels, requiring ∼ 600 Gigtabytes (GB) per cube. The total data storage required for WALLABY will exceed 1 Petabyte. * Small-scale source cubelets. By running the Source Finding Application <cit.> on the survey cubes, candidate source cubelets can be extracted and stored separately, or simply have the coordinates of their bounding boxes within the survey cubes stored [see <cit.> for an overview, and <cit.> for a comparison of Hi source finders]. As source cubelets take up only a small fraction of the survey cubes, this is a much more manageable data volume to work with. Estimates of the number of Hi detections from WALLABY exceed 200,000 sources. Approximately 15–20 % of these sources are expected to be spatially resolved (i.e. where the spatial distribution of Hi is visible, which is anticipated to require at least 3-4 resolution elements or synthesised beams across the source). * Catalogues of derived data products. Along with the key parameters (e.g. position, velocity dispersion, Hi flux) generated by source finders such as SoFiA and Selavy <cit.>, further automated processing and analysis tasks can provide additional data. This includes activities such as disk-based model fitting [e.g. TiRiFiC <cit.>, ^ 3DBAROLO <cit.>, or 2DBAT, <cit.>, and see also the description of the WALLABY Kinematic Analysis Proto-Pipeline (WKAPP) in <cit.>], computation of integral properties (e.g. total Hi mass, star formation rates), or cross-matching with optical/infrared catalogues. Each of these data products will aid the development of insight and improved understanding of Hi's role in galaxy formation and evolution. §.§ Visualisation-dominated workflows The data-intensive demands of new Hi surveys has motivated the development of a number of customised tools for interactive qualitative and quantitative spectral cube visualisation <cit.>. Moving beyond the well-established and widely-utilised solutions such as Karma[https://www.atnf.csiro.au/computing/software/karma] <cit.> and CASA[https://casa.nrao.edu] [the Common Astronomy Software Applications package; <cit.>], alternatives for desktop-based visualisation and analysis include AstroVis <cit.>, SlicerAstro <cit.>, FRELLED [<cit.> using the free, open-source Blender animation software], FITS3D <cit.>, Shwirl <cit.>, and CARTA[https://cartavis.org/] <cit.>. <cit.> prototyped a solution using the Unity[https://unity.com] real-time 3D engine, which can be deployed on a desktop or operate with a variety of advanced display technologies. With their iDAVIE solution, <cit.> have successfully moved spectral cube visualisation and analysis into interactive and immersive virtual reality environments. Finally, targeting data products that greatly exceed the processing capabilities of standard desktop computers, <cit.> achieved real-time interactive visualisation of Terabyte-scale spectral cubes using a high-performance solution with graphics processing units (GPUs) and the GraphTIVA framework. For most of these examples, the workflow for visualisation and analysis of the gas in galaxies emphasises the study of one galaxy at a time. 
When the data volume is low and the data rate is slow, a great deal of human time can be dedicated to examining individual data cubes or source cubelets. While highly appropriate in an era of small surveys, this serial processing presents a bottleneck for knowledge discovery once the ASKAP and MeerKAT surveys scale up to include many thousands of spatially resolved sources. The transformation of a survey cube to a subset of source cubelets, and ultimately, a reliable, science-ready catalogue of data products can be encapsulated as a workflow. Parts of the workflow are expected to be fully automated [e.g. the Apercal calibration pipeline for Apertif surveys <cit.> or ASKAPSoft for ASKAP <cit.>]. Other stages will rely on some level of human intervention, either through computational steering (selecting parameters for the workflow, setting thresholds on source finders, etc.) or data visualisation for analysis and discovery. §.§ Survey data While future applications of the comparative visualisation strategies examined here may include the Hi surveys to be conducted with ASKAP and MeerKAT, we perform the benchmarking and VDAR evaluations using data from three extant Hi surveys that targetted nearby spiral and irregular galaxies: * WHISP: Westerbork Observations of Neutral Hydrogen in Irregular and Spiral Galaxies[http://wow.astron.nl], undertaken with the Westerbork Synthesis Radio Telescope <cit.>; * THINGS: The Hi Nearby Galaxy Survey[https://www2.mpia-hd.mpg.de/THINGS/Data.html] comprising high-spectral and high-spatial resolution data from the National Radio Astronomy Observatory Very Large Array <cit.>; and * LVHIS: The Local Volume Hi Survey[https://www.atnf.csiro.au/research/LVHIS/LVHIS-database.html], which obtained deep Hi line and 20-cm radio continuum observations with the Australia Telescope Compact Array <cit.>. We categorise the survey data products in terms of: (1) the number of sources (N_ s) in each survey catalogue; (2) the typical dimensionality of the data cubes (measured as spatial or spectral pixels); (3) the number of voxels (in Megavoxels or Mvox); and (4) the storage size (in Megabytes or MB) for an individual cube. For all three datasets, the spectral cubes were stored (and loaded into encube) using the Flexible Image Transport System (FITS) format <cit.>. See Table <ref> for further details, where we present the minimum, maximum and median values for the dimensions, voxel counts and storage sizes for the WHISP, THINGS and LVHIS catalogues. To simplify both the benchmarking investigation and VDAR evaluation, we make several minor modifications to the datasets in their published forms: * WHISP: Initial inspection of a sub-set of WHISP galaxies revealed that many of the spectral cubes have high levels of flux (relative to the peak source flux) at either end of the spectral band. Rapid identification of such systematic effects is an example of the type of SIMD quality control activity that comparative visualisation can address (see Section <ref>). For all of the WHISP cubes, we created new FITS files where we set the data values in the first eight and last eight spectral channels to zero. This does not change the load times for the mock surveys but does improve the default visualisation via texture-based volume rendering. * THINGS: We did not use the spectral cube for NGC 3031 (M81) in our benchmarking. 
As NGC 3031 is a nearby grand design spiral in Ursa Major, the spectral cube is much larger than other galaxies in the sample with 2201 × 2201 spatial and 178 spectral channel pixels. The file size of 3.45 GB is approximately half of the available memory on a GTX1080 GPU. Such a large source would not be typical of new extragalactic sources discovered with blind surveys. * LVHIS: A spectral cube data for NGC 5128 (LVHIS 048) was not available from the survey web-site, and we note a replication of data between sources LVHIS 014 and LVHIS 016, which are both identified as the dwarf irregular galaxy AM 0319-662. Removing LVHIS 016 and LVHIS 048 from the samples leaves us with N_ s = 80. § BENCHMARKING COMPARATIVE WORKFLOWS In this section, we report on benchmarking activities undertaken with the implementation of encube on the Swinburne Discovery Wall. §.§ Benchmarks Previous system benchmarks reported in <cit.> were performed with the Monash CAVE2^ TM. For deployment on the Swinburne Discovery Wall, we report: (1) the total (i.e. parallel) load time, T_ Load, for a configuration displaying N_ cube spectral cubes; and (2) the steady-state minimum frame rate, F_ rate, in frames/second. We consider both the frame rate per column, looking for variations in performance, along with the overall mean, standard deviation, and median of F_ rate. Frame rate quantities are calculated from the S2PLOT displays on columns 2 to 5 (see Figure <ref>). Column 1 is used for additional management and coordination tasks, and in order to access the user interface in the web browser, the S2PLOT display is not resized over both 4K-UHD monitors. The higher F_ rate values reported for column 1 show the overall reduced graphics workload when data is visualised on one 4K-UHD monitor instead of two. We obtained a total of 54 independent benchmarks for five different configurations (Sets A–E), displaying N_ cube = 20, 40, 80, 120 or 180 spectral cubes in total using the per-column configurations summarised in Table <ref>. The main limiting factors on N_ cube are the available GPU memory (8 GB/GPU for each of the five NVIDIA GTX1080 GPUs of the Swinburne Discovery Wall) and the number of columns of monitors. A simple upgrade path to improve performance is to replace these five older-generation GPUs with higher-memory alternatives. The benchmark configurations were generated comprising either spectral cubes from a single survey (denoted as [W]HISP, [T]HINGS or [L]VHIS) or from the combination of the three input surveys (denoted as [C]ombination). For scenarios where N_ cube exceeds the survey size, N_ s (see Table <ref>), random sampling with replacement is used to generate an appropriately-sized data set. For the combination survey, random sampling with replacement is used to generate a mock survey that is roughly equally split between the three input catalogues. Figure <ref> demonstrates the use of the two different colour-mapping methods for a mock LVHIS survey with 180 spectral cubes. The top panel uses a heat-style colour map, while the bottom map colours based on the relative velocity with respect to the middle spectral channel, which is assumed to be the kinematic centre. To mitigate the impact of memory caching on measurements of T_ Load, we generated three independent combinations of spectral cubes for each of the W, T, L and C configurations. A single benchmark value of T_ Load was obtained for each of the three alternatives, along with the measurements of F_ rate. 
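The construction of the mock survey configurations can be sketched in a few lines of Python. The snippet below is illustrative only and is not part of encube: the catalogue sizes, file names and single-column CSV layout are placeholders, and only the sampling logic (with or without replacement, and the roughly equal three-way split for the combination survey) follows the description above.

```python
import csv
import random

def sample_survey(catalogue, n_cube, rng):
    """Draw n_cube entries from a single catalogue.

    Sampling is without replacement when the catalogue is large enough,
    and with replacement otherwise, mirroring the benchmark procedure."""
    if n_cube <= len(catalogue):
        return rng.sample(catalogue, n_cube)
    return [rng.choice(catalogue) for _ in range(n_cube)]

def combination_survey(catalogues, n_cube, rng):
    """Build a mock survey split roughly equally between the input catalogues."""
    per_survey = n_cube // len(catalogues)
    remainder = n_cube - per_survey * len(catalogues)
    mock = []
    for i, catalogue in enumerate(catalogues.values()):
        n = per_survey + (1 if i < remainder else 0)
        mock.extend(rng.choice(catalogue) for _ in range(n))
    rng.shuffle(mock)
    return mock

if __name__ == "__main__":
    rng = random.Random(42)  # fixed seed so a benchmark configuration can be reproduced
    catalogues = {
        "WHISP": [f"whisp_{i:03d}.fits" for i in range(256)],    # placeholder file names and counts
        "THINGS": [f"things_{i:03d}.fits" for i in range(33)],
        "LVHIS": [f"lvhis_{i:03d}.fits" for i in range(80)],
    }
    mock = combination_survey(catalogues, n_cube=180, rng=rng)
    with open("mock_survey.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["filename"])            # placeholder column layout, not encube's exact format
        for name in mock:
            writer.writerow([name])
```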
For the 80-cube instance, we note that all LVHIS cubes are used, but they are randomly assigned between the five columns of the Discovery Wall for each benchmark instance. We did not generate configurations with N_ T > 80 as these data volumes exceed the memory capacity of the GPUs. The THINGS galaxies are the highest-resolution spectral cubes considered in this study, and are not as representative of the typical resolved or partially-resolved new detections that will arise from ASKAP or MeerKAT Hi surveys. Due to the presence of differing numbers of key-value pairs in the FITS headers, there is slight variation (see Table <ref>) in the ratio between V_ Store (the total data volume in GB) and N_ vox (the total number of voxels in Gigavoxels) for the 54 independent survey configurations. The result of a least-squares fit to the these two quantities was: V_ Store = 4.07 N_ vox- 0.084 , with the mean and sample standard deviation between measured and modelled values for V_ Store calculated to be -9.4 × 10^-6 GB and 0.13 GB respectively. For simplicity, we can approximate V_ Store∼ 4 N_ vox as expected for a data format using four bytes per voxel. §.§ Procedure All of the spectral cubes are stored on the workstation associated with column 1 of the Swinburne Discovery Wall (the Master Node - see Figure <ref>), and the other workstations access this data through a network file sytem (NFS) mount (see <ref>). Consequently, we expect that the limiting factors on T_ Load are: (1) the network bandwidth between each Process and Render workstation and the Master; (2) the read time from the NFS-mounted drive; and (3) the processing overheads due to pre-computation of statistical parameters, as noted at the end of <ref>. The following procedure was used to conduct each of the benchmark trials: * The set of spectral cubes is randomly selected either without replacement (when N_ cube≤ N_ s) or with replacement, and a database file is generated in the comma-separated variable (CSV) format required by encube. * Symbolic links are generated to each of the N_ cube spectral cubes, to minimise the duplication of data on the Master workstation. * Modifications to the encube configuration file (keyword-value pairs using JavaScript Object Notation[JSON: https://www.json.org/json-en.html]) are made, specifically the number of rows and columns of S2PLOT panels per column of the Discovery Wall, the total number of panels per workstation, and the names of the workstations. * Encube is launched from the Master workstation using the JSON configuration file, with calls to start the software on the Process and Render nodes. Socket connections are established between the Master and the Process and Render nodes, and a port is opened for connection to the user interface (UI). * The encube UI is activated as a web-page in the Firefox browser on the Master machine. The UI displays the database of spectral cube files. The required files are selected and timing for T_ Load commences on mouse-clicking the Load button. * Timing ends when all spectral cubes are displayed. As timing is performed by hand, all times are rounded up to the nearest whole second to account for the timekeeper's reaction time. * For the subset of configurations where frame rates are also recorded on a per-column basis, an autospin signal is triggered from the UI which causes all of the spectral cubes to rotate around the vertical axis. 
At each of the five keyboards attached to the columns (see Figure <ref>), the d key is pressed, activating the S2PLOT graphics debug mode, which reports the instantaneous frame rate (measured over a moving window of 5 seconds duration). After each spectral cube has completed several full rotations, the lowest measured frame rate is recorded. This presents the worst-case scenario, as the frame rate is a strong function of both the viewing angle of a spectral cube and the fraction of the screen that is mapped to data voxels. * Once benchmark quantities have been recorded, a signal to stop the encube instances is initiated from the UI, and all of the processes are stopped from the Master workstation. It takes approximately 60 seconds for all nodes to release their socket connections ready for the next full iteration of the procedure. The outcomes of the benchmarks are reported as follows: * A statistical summary (mean, sample standard deviation, and median) of T_ Load for the three independent instances of each survey configuration is presented in the final two columns of Table <ref>. * The survey load time is plotted as a function of the storage volume in the left-hand panel of Figure <ref>. All 54 independent benchmarks for T_ Load are presented, with symbols for WHISP (squares), THINGS (circles), LVHIS (triangles) and the Combination survey (diamonds). * Individual values and a statistical characterisation of F_ rate are presented in Table <ref>. A subset of 21 configurations was considered here: Set A, with N_ cube = 20, and Set E, with N_ cube = 180. * The minimum frame rates for each of columns 2-5 for Set A (circles) and Set E (triangles) are plotted in the right-hand panel of Figure <ref> as a function of the mean memory per GPU on the Discovery Wall. A linear relationship exists between T_ Load (s) and V_ Store (GB), with a least squares fit result: T_ Load = 8.07 V_ Store + 4.58 . The mean and sample standard deviation between measured and modelled values for T_ Load were calculated to be 5.6 × 10^-4 seconds and 13.9 seconds, respectively. The Pearson correlation coefficient between T_ Load and V_ Store was r = 0.98. For completeness, we find: T_ Load = 32.83 N_ vox + 4.063 with N_ vox in Gigavoxels. We discuss the implications of our benchmarking activities in Sections <ref> to <ref>. In the next section, we provide details of our VDAR evaluation. § VISUAL DATA ANALYSIS AND REASONING STUDY <cit.> <cit.> proposed a taxonomy for understanding and evaluating visualisation methods. We select the VDAR approach to examine typical survey-scale discovery-based research processes, relevant for current and future extragalactic Hi surveys. VDAR includes methodologies for evaluating the effectiveness or efficacy by which a visualisation tool helps to generate domain-specific actionable knowledge or understanding. VDAR methods, which are often based on case studies, investigate “the tool used in its intended environment with realistic tasks undertaken by domain experts” <cit.>, with an emphasis on the process rather than measurements of outcomes. Our user group for the VDAR study comprises only the authors of this work. This cohort includes domain experts (i.e. Hi astronomers with relevant experience in the observation, analysis and visualisation of spectral cubes), as required with the VDAR methodology. We assert that these experiences are representative of the broader Hi research community. 
Alternative evaluation methodologies for visualisations and visualisation systems <cit.> that we did not pursue include Evaluating Collaborative Data Analysis (CDA), which focuses on the process of collaboration and how it is supported by a visualisation solution, and User Performance (UP), which uses controlled experiments to measure, for example, the time taken for different users to complete tasks. As a point of comparison, <cit.> used the UP methodology to measure task performance when novice and expert participants completed an object identification activity using either a standard desktop monitor or a TDW. To provide relevant scenarios for the VDAR study, we consider three important SIMD processes that may be required when analysing extragalactic Hi survey data: (1) quality control of individual candidate spectral cubes; (2) candidate rejection, whereby false-positive detections from automated source finders are rejected; and (3) morphological classification, identifying and sorting sources into categories based on observed structural or kinematic properties. These three processes currently require some level of visual inspection [which may include the use of either projected moment maps or 3D visualisation methods, depending on the workflow preferences of the researcher(s) involved] in order to produce reliable, science-ready catalogues from large-scale, next-generation surveys. It is important to note that our VDAR study does not intend to demonstrate new knowledge about any of the three input Hi surveys – WHISP, THINGS, and LVHIS – as all have been well-studied in many other contexts. They stand in as proxies for future Hi survey data products that are, potentially, being viewed for the very first time by members of the research team. As such, there may be unexpected, or unexplained, features that are present in the data products, necessitating appropriate follow-up actions once they have been identified. Alternatively, the comparative visualisation stage may reveal that all is well with automated calibration or processing steps (e.g. model-fitting) at an early stage of science operations, thus serving its purpose. For a related example where the use of an alternative display technology evolves throughout the lifetime of an astronomical research project, see Section <ref>. §.§ Quality control When an Hi source finding pipeline is applied to a large-scale survey cube, the output is a set of individual source cubelets. Prior to their use in further analysis, there is value in performing by-eye quality control, to ensure that there are no significant issues with the data quality. This step would be expected to include looking for: (1) bad channels; (2) calibration errors such as poor continuum subtraction; (3) objects that have not been correctly extracted, such as extended sources that exceed the boundaries of the extracted cubelet; and (4) radio frequency interference. The VDAR study we performed to understand the quality control process relates to our observation when first visualising a sub-set of WHISP galaxies with encube. As noted in Section <ref>, spectral channels at both ends of the band-pass contain excess flux. We illustrate this issue in the top panel of Figure <ref>, using an 80-cube configuration. The excess flux is visible in 77 of the cubes displayed. This is seen as the strong blue and red features in each cube, making it difficult to see the WHISP galaxies themselves. 
With encube, it is immediately clear that a quality control issue is present and is impacting a sizeable portion of the survey. From Table <ref>, it takes less than 90 seconds to load the 80 WHISP cubes, and then less than 60 seconds to identify the 3 cases that do not appear to be affected. Performing this task in a serial fashion would require individual loading and inspection of spectral cubes: it would take much longer than 150 seconds to determine the extent of the quality control issue in order to take an appropriate action. Our solution was to replace data values in the first eight and last eight channels of each WHISP spectral cube. This has the desired effect, revealing the kinematic structures of the sources (see the lower panel of Figure <ref>). There will be an additional quantity of time required to resolve any quality control issue. In this case, we needed to write and execute a C-language program using the CFITSIO[https://heasarc.gsfc.nasa.gov/fitsio/] <cit.> library to create modified FITS-format data cubes for the WHISP galaxies. For a future Hi survey, it may require modification or re-tuning of an automated calibration pipeline. However, this time is independent of whether the quality control visualisation is approached in a serial or parallel fashion. Indeed, comparative visualisation provides a more rapid demonstration that the intervention had the desired effect. Our approach to comparative quality control with encube is consistent with the model of sensemaking presented by <cit.>. Here, our use of the Discovery Wall has two dimensions: (1) a foraging loop, organising data, searching for relations, and gathering evidence; and (2) a sensemaking loop, where alternative hypotheses are posed and examined, leading to a presentation of the outcomes. In the foraging loop, we determine that a quality control issue exists, as the initial volume renderings are not consistent with the expected profiles of Hi-detected sources. This issue impacts a significant number of spectral cubes in the sample (77 out of 80). Through physical navigation (i.e. moving to different locations near the Discovery Wall), the viewer can change their attention from a single object to an ensemble in order to gather evidence regarding the possible cause of the failed visualisations. In the sensemaking phase, we decide that a first course of action is to remove the impact of the excess flux in all spectral cubes, and visualise the outcomes. Further investigation could include selecting the subset of those spectral cubes most strongly impacted, in order to determine the cause(s) of the excess flux. §.§ Candidate rejection An unwanted outcome of automated source finders is the generation of false-positive detections. This is particularly true in their early phase of operation of new survey programs, when source finders may not have been tuned optimally to the specific characteristics of the data. But false-positives may persist throughout the lifetime of a survey. One way to improve the accuracy of source-finders is to raise the acceptance threshold, so that fewer candidates make it through the processing pipeline for further inspection and analysis. This approach reduces the discovery space, with many interesting objects remaining undetected. By lowering the acceptance criteria, more false candidates will need to be reviewed and ultimately rejected. This can be a particularly labour intensive phase. 
Visual inspection is the simplest way to distinguish between true sources and false detections, but may require an appropriate level of expertise. Here, again, quality control processes will be crucial, as individual cubelets may suffer from anomalies from processing, calibration, or interference. Our bespoke visualisation environment permits rapid inspection and comparison of many sources at the same time, improving the way that decisions are made regarding the nature of candidates. The VDAR study we performed to understand the candidate rejection process was to: * Load one of the 80-cube combination surveys (Set C), with T_ Load∼ 150 seconds. The combination survey includes a high proportion of spatially resolved galaxies from the THINGS and LVHIS catalogues. * Visually inspect every source, looking for the spatially resolved galaxies, and then identifying which of these did not immediately match the expected template of a grand design spiral galaxy. It took less than three minutes to visually inspect all 80 cubes. While some resolved, non-spiral galaxies were very easy to identify, others require additional time in order to reach a decision. Here, the use of the volume rendering technique allows for individual sources, or sets of sources, to be rotated such that either the spatial or kinematic structure can be used to reach a decision. Figure <ref> shows columns 2–5 of the Swinburne Discovery Wall, with labels under the image used to identify five sources of interest (A-E): * Source A (THINGS, NGC 3077) is spatially resolved, but shows a disrupted Hi structure. NGC 3077 is connected to a larger neighbouring spiral galaxy, M81, by an Hi bridge <cit.>; * Source B (LVHIS, ESO 245-G007) shows a “tube-like” feature (readily apparent when rotating the spectral cube) surrounding a central, somewhat spatially unresolved object; * For source C (WHISP, UGC01178), there is no visible flux, which is likely due to a poor choice of the default visualisation parameters; * Source D (LVHIS, AM 0319-662) comprises two Hi detections, with the more prominent source offset from the centre of the cube. The central LVHIS source is a dwarf irregular galaxy, a companion to NGC 1313 at the lower right of the cube <cit.>; and * Source E (THINGS, NGC5236) is a spiral galaxy, but the overall blue feature extending across the source indicates some additional processing may be required. In particular, this can be explained as this source, Messier 83, is known to have an HI diameter much larger than the VLA primary beam with which it was observed in the THINGS project. The overview provided by many small-multiples rapidly highlight this source's distinctive feature, which was not present in any of the other 79 sources in this sample. Identification of these five “anomalous” cases occurs rapidly, when the viewer is able to both see a large sample (i.e. comparative visualisation, by stepping back from the Discovery Wall) and investigate an individual object in more detail (by moving closer to view, or interact with, an object of interest). To close the loop on candidate rejection, a minor modification to encube would allow each spectral cube to be tagged in real time as a true or false detection, which would then be fed back to the source finder to improve the true detection rate. §.§ Morphological classification Once a catalogue of robust detections has been gathered, the nature of the sources must be considered. For previously known objects, a morphological classification has likely already occurred. 
For new discoveries, an initial classification can be provided. For future Hi surveys conducted with wide-field interferometric imaging, the extended structure of many sources will be visible. This includes detecting the presence of low column density features such as bridges, tails, etc. Consequently, visual morphological classification of complete, unbiased, sub-populations of sources will be possible. Indeed, with a statistically significant population of Hi galaxies, selected in an unbiased (i.e. blind survey) fashion, it becomes possible to develop new morphological categories – beyond the standard Hubble classification – that may correlate with the local or global environment or integral properties, such as the Hi mass. The morphological classification process shares many similarities with the candidate rejection phase, and we appeal to the same VDAR study as in Section <ref>. The two features of our bespoke visualisation environment that provide an alternative approach to morphological classification, at scale, are: (1) the use of volume rendering, which allows each spectral cube to be rotated around any axis, providing immediate access to both spatial and kinematic information; and (2) the comparative nature of the display configuration, which makes it easy to go back-and-forth between specific objects in order to reach a decision regarding the classification. This might mean a change in the outcome of an initial or even pre-existing classification, or the recognition that a new sub-class of objects had been identified. § DISCUSSION In this Section, we interpret the benchmarking results obtained with encube on the Swinburne Discovery Wall. By considering survey sizes, data load times, visualisation configurations and interaction frame rates, we estimate the visualisation throughput, which we present in terms of the number of sources that could be examined in a given period of time. As a reflection on the role for bespoke visualisation environments in astronomy, we also discuss the evolution of advanced visualisation systems when used in astronomical research projects. §.§ Load times In order to be a useful adjunct to desktop-based visualisation methods, an alternative display solution needs to provide an appropriate level of computational performance. Regardless of whether a single spectral cube or multiple cubes are to be visualised, there is an unavoidable overhead while the data is transferred from its storage location into the computer memory. While this latency may not be as noticeable when working with a single cube, there is a cumulative loss of time when working with large surveys. This effect increases if individual cubes are loaded multiple times for comparative tasks. The most important factors in the load time are the network and internal transfer bandwidths and the volume of data. Our benchmarking results revealed a strong positive correlation between T_ Load and V_ Store across a range of storage volumes from 1.17 GB to 34.73 GB. This is consistent with our expectation that each of: (1) the data access and load phase, where each Process and Render node must transfer data via the NFS mount to the Master node; (2) the pre-computation performed for each spectral cube; and (3) the initial transfer of data to the GPU for texture-based volume rendering have O(N) algorithmic behaviour. If any one of these processes imposed a bottleneck for the increasing total data volume, we would expect to see deviations away from the linear scaling. 
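As an illustration of this consistency check (not the analysis script actually used), the linear fit, its residuals, and the Pearson correlation coefficient can be obtained with a few lines of numpy. The arrays shown are placeholders standing in for the 54 measured (V_Store, T_Load) pairs.

```python
import numpy as np

# Illustrative arrays only: in practice these would hold the 54 measured
# (storage volume, load time) pairs from the benchmark configurations.
v_store = np.array([1.17, 4.8, 9.6, 17.3, 34.73])     # GB (placeholder values)
t_load = np.array([14.0, 44.0, 82.0, 144.0, 285.0])   # seconds (placeholder values)

# Least-squares fit T_Load = a * V_Store + b
a, b = np.polyfit(v_store, t_load, deg=1)

# Pearson correlation coefficient between the two quantities
r = np.corrcoef(v_store, t_load)[0, 1]

# Residuals between measured and modelled load times; large structured
# residuals would indicate a departure from the expected linear scaling.
residuals = t_load - (a * v_store + b)
print(f"T_Load = {a:.2f} * V_Store + {b:.2f}  (r = {r:.3f})")
print(f"residual mean = {residuals.mean():.2e} s, std = {residuals.std(ddof=1):.1f} s")
```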
With the Swinburne Discovery Wall hardware, we can load 180 spectral cubes drawn from: (1) the LVHIS survey in under 2 minutes; (2) the WHISP survey in under 3 minutes; and (3) combinations of WHISP, THINGS and LVHIS cubes in under 5 minutes. Using the median T_ Load for WHISP-only surveys in Table <ref>, we can consider alternative configurations that reach the same total number of data cubes, but through multiple loads of smaller quantities at a time. An additional overhead here is that we need to wait T_ Socket = 60 seconds for the Process and Render nodes to release their socket connections before the next configuration can be loaded. Expected total load times (rounded up to the nearest half minute) are as follows: * Nine sets of 20 WHISP cubes will load in 11.5 minutes (9 × 21 + 8 * T_ Socket = 669 s); * Four sets of 40 WHISP cubes plus one set of 20 WHISP cubes will load in 7.0 minutes (4 × 38 + 1 × 21 + 4 * T_ Socket = 413 s); and * Two sets of 80 WHISP cubes plus one set of 20 WHISP cubes will load in 5.0 minutes (2 × 73 + 1 × 21 + 2 * T_ Socket = 287 s). By increasing the total number of cubes displayed on the Discovery Wall, we benefit from parallelisation across the Process and Render nodes during the pre-computation phase and we do not experience the system latency imposed by T_ socket. The advantage of using the 4K UHD monitors is that we retain a reasonable image resolution per source even when there are 18 spectral cubes per individual monitor (36 cubes per column) of the Discovery Wall. §.§ Frame rates Once a configuration of spectral cubes has been loaded and displayed on the Discovery Wall, the most important metric is the frame rate. The higher the frame rate, the smoother the interaction experience when modifying the location of the camera (e.g. when controlling the visualisation of all the spectral cubes simultaneously via the user interface). For encube, there are several key observations that we make: * The frame rate depends on the size of the S2PLOT window, such that expanding over both 4K-UHD monitors per Process and Render node decreases the frame rate. This is seen in the per-column frame rates in Table <ref>, where F_1 values (the Master node) are generally higher than those of the other four columns (F_2 to F_5). In order to display the user interface in the web browser on the Master node, we do not extend the S2PLOT window across both monitors. * There are variations in the frame rate as a function of viewing angle, which depends on the relative number of voxels along each axis of a cube [see, for comparison, Figure 5 of <cit.>]. By reporting the lowest measured frame rates after each cube has undergone several complete rotations, we are presenting worst-case outcomes on interactivity. * Frame rates can decrease when zooming in on details. The amount of processing work performed by the GPU depends on the fraction of screen pixels that contain visible data. When zoomed out, a larger percentage of each panel comprises non-data (i.e. background) pixels. We did not record the effect on frame rates as the default configurations for 180 cubes presents a comparable ratio of data to total pixels as occurs when zooming in on with one of the lower N_ cube configurations. Setting a target of 10 frames/s as an indicator of reasonable interactivity with the data cubes, we exceed this for all of the 20-cube mock surveys (mean and median frame rates in Table <ref>), and for configurations of 180 sources selected entirely from the WHISP and LVHIS surveys. 
For the 180-cube combination configuration, which includes a randomly-selected sample of 60 THINGS cubes, the mean and median frame rates fall below 5 frames/s. Here, the higher frame rates measured for spectral cubes assigned to the fifth column of the Discovery Wall (column F_5 in Table <ref>) occur as only 5-6 out of 36 spectral cubes were randomly selected from the THINGS survey. If we had “perfect” randomness in the construction of the mock survey samples, we would expect 12 THINGS galaxies assigned to each column. Instead, columns two to four are required to perform much more processing than column five per screen refresh (more memory or total voxels per GPU), resulting in the lower frame rates for (F_2 – F_4) when a single GPU is driving two 4K UHD monitors. §.§ Throughput One of the key metrics we wish to ascertain is the visualisation throughput, V_ tp, which is the number of source cubelets that can be inspected in a given period of time, measured in units of sources/hour. For a single user, it is not expected that a peak V_ tp could be sustained throughout an entire day, but it is reasonable to assume that rates of 25-50% of V_ tp might be achievable for extended periods of time. This is compatible with a work pattern for quality control or source-finding candidate rejection where the candidates from the latest large-scale survey cube(s) are assessed daily. §.§.§ Multi-object workflows To estimate the throughput for a multi-object workflow, we consider two scenarios using the combination mock survey: * An 80-cube configuration. The full dataset loads in around T_ Load = 160 seconds (mean load time plus one standard deviation). An initial inspection can occur in T_ Inspect = 180 seconds (see Section <ref>). If we assume 25% of sources require additional action, and the recording of that action takes 60 seconds, then T_ Action = 1200 seconds. * A 180-cube configuration. The full dataset loads in T_ Load = 300 seconds. The time required for the initial inspection is assumed to scale linearly with the number of sources, such that T_ Inspect∼ 405 seconds. With 25% of sources requiring a 60-second action to be recorded, then T_ Action = 2700 seconds. The total time required for the completion of a SIMD process with encube is then: T_ SIMD = T_ Load + T_ Inspect + T_ Action + T_ Socket where T_ Socket, introduced in Section <ref>, is a system latency. Using the values proposed for these four quantities, we suggest that T_ SIMD(80 ) = 1600 seconds (26.7 minutes) and T_ SIMD(180 ) = 3465 seconds (58 minutes). Taken together, we estimate that V_ tp = 160-180 sources/hour seems reasonable for the completion of one of the three SIMD tasks we have considered in our VDAR study. Moreover, we have assumed only a single astronomer completing the task, whereas the large-format workspace of the Discovery Wall comfortably accommodates a small group working together. §.§.§ Comparison with single-object workflows As a point of comparison, we consider a single-object workflow, i.e. one source is loaded and visualised at a time with encube and using the Swinburne Discovery Wall hardware. A relationship between the single object load time and the FITS filesize was determined using a minimal sample of representative spectral cubes from each of the WHISP, THINGS and LVHIS datasets. We select the cubes with the smallest and largest filesizes, along with a cube that had the median file size (see Table <ref>). 
We measure load times for visualisation with encube running only on the head node, where the data is stored, and on a remote machine over the network via the NFS mount. We used a manual timing method with a reaction time error of 0.5 seconds. As shown in Figure <ref>, we find minimal differences in load times from the local disk (filled circles) or via the remote NFS mount (open circles). Performing a least squares fit to the combined data, we obtain: T_ Load = 37.71 V_ Store - 1.04 seconds with a Pearson correlation coefficient between T_ Load and V_ Store calculated to be r = 0.997. Using the average and median sample survey file sizes from Table <ref>, we compare the single-object and multi-object load times for the 80-cube WHISP, THINGS, LVHIS and combination configurations – see Table <ref>. The ratio of the single-to-multi-object load times was calculated for each configuration, showing a 4-5 times speed-up in load times using the five compute nodes of the Swinburne Discovery Wall. This is not surprising for the nearly-perfect parallelism expected in this stage of the workflow, but with a slight input/output bottleneck at the head node where all of the data is stored. §.§.§ Estimates for future extragalactic Hi surveys In Figure <ref>, we estimate and compare the throughput for multi-object and single-object SIMD workflows. In addition to the LVHIS and WHISP extragalactic Hi surveys, we obtain preliminary results for the APERTIF and WALLABY surveys; these values are indicative only of future analysis that is yet to be completed. We base our throughput predictions on 10,000 APERTIF sources (in the velocity range 1,000 to 10,000 km/s) with a mean storage volume of 0.62 MB/source cubelet[K. Hess, private communication] and 210,000 sources in WALLABY with a mean storage volume of 3 MB/source cubelet.[Analysis by author CM] The time to inspect each source is highly dependent on the SIMD task. For the candidate rejection VDAR activity (Section <ref>), we performed an initial visual scan across 80 spectral data cubes displayed on the Swinburne Discovery Wall in three minutes, or 2.25 seconds/cube. This is achievable once all cubes have been loaded, using physical navigation to rapidly move around the display space. With the continual cognitive set-shifting required for a lone astronomer to load and inspect one cube at a time, regardless of the display and visualisation software used, it may take 10-30 seconds per cube even at peak performance. Moreover, the single-object workflow removes the opportunity to perform comparisons, or rapid revisits to double-check that a previously-viewed source had been inspected adequately. For each survey, we consider three scenarios with different follow-up action times: (1) T_ Action = 0, such that inspection occurs but no additional actions are required for all sources; (2) T_ Action = 30 s/source for 10% of sources; and (3) T_ Action = 60 s/source for 25% of sources. Symbols are used in Figure <ref> to differentiate between the inspection times, with T_ Inspect = 3 s/source for a multi-object workflow (filled circle), and T_ Inspect = 10 s/source (open triangle) or T_ Inspect = 30 s/source (plus symbol) for single-object workflows. For large survey sizes, N_ s, these components of T_ SIMD dominate over T_ Load regardless of whether a single-object or multi-object workflow is used. The minor contribution from T_ Socket has been omitted. 
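The T_SIMD budget above can also be expressed as a small throughput calculator. The sketch below reuses the scenario values quoted in this section (per-cube inspection times, a 60 s follow-up action for 25% of sources, and the single-object load-time relation); the mean cubelet size used for the single-object case is a placeholder rather than a survey measurement.

```python
def t_simd(n_sources, t_load, t_inspect_per_source,
           action_fraction, t_action_per_source, t_socket=60.0):
    """Total time (s) to complete one SIMD pass over n_sources cubelets."""
    t_inspect = n_sources * t_inspect_per_source
    t_action = n_sources * action_fraction * t_action_per_source
    return t_load + t_inspect + t_action + t_socket

def throughput(n_sources, total_seconds):
    """Visualisation throughput in sources/hour."""
    return 3600.0 * n_sources / total_seconds

# Multi-object workflow (values from the 80- and 180-cube scenarios above)
for n, t_load, t_per in [(80, 160.0, 180.0 / 80), (180, 300.0, 405.0 / 180)]:
    total = t_simd(n, t_load, t_per, action_fraction=0.25, t_action_per_source=60.0)
    print(f"{n:3d} cubes: T_SIMD = {total:.0f} s, V_tp = {throughput(n, total):.0f} sources/hour")

# Single-object workflow, assuming T_Load = 37.71 * V_Store - 1.04 per cube
# and 30 s inspection per cube (illustrative only).
mean_cube_gb = 0.14                               # placeholder mean cubelet size in GB
t_load_single = 37.71 * mean_cube_gb - 1.04       # seconds per cube
total_single = t_simd(80, 80 * t_load_single, 30.0,
                      action_fraction=0.25, t_action_per_source=60.0, t_socket=0.0)
print(f"single-object: V_tp = {throughput(80, total_single):.0f} sources/hour")
```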
In all of the scenarios we considered, the estimated throughput with a multi-object workflow exceeds that of a single-object workflow. §.§ Evolution of visualisation solutions Astronomers have developed their craft over centuries by using a combination of singular, bespoke facilities for data gathering (e.g. dedicated observatories and supercomputers) supported by widely-available, general purpose resources for data analysis and visualisation (e.g. desktop and laptop computers in the digital era). We assert that a complementary role exists for dedicated advanced visualisation facilities that can provide a very different experience to that of the everyday. In the same way that astronomers do not expect to operate their own personal 64-metre radio telescope or 8-metre class optical/infrared telescope, there should not be an expectation, or need, for all astronomical institutions to operate a local advanced visualisation facility. What is more important is that when such facilities are available, there is a community of interested and potential users who are able to take advantage of them. As astronomical teams prepare themselves for the next phase of petascale and exascale data collection, new visualisation strategies that enable and enhance survey-scale discovery-based research processes will be required. Our VDAR evaluation demonstrates how comparative visualisation (implemented using encube and the Swinburne Discovery Wall) could be applied to SIMD visual analysis tasks that would not otherwise be feasible using a standard desktop configuration. Until a survey project is underway, the exact configuration of software and hardware that provides the most productive approach to advancing scientific knowledge may not be known. As the projects develop, familiarity with the strengths and weaknesses of the instrumentation and software-pipelines will also grow. The strategies for analysis and visualisation adopted during the first year of data collection may not be the same as those deemed essential in the years that follow. Some approaches to analysis and visualisation become essential throughout the lifetime of the individual research project where they were first adopted, perhaps spreading further into the discipline to become ubiquitous. Other alternatives may be relevant for a short period of time, or may only need to be accessed by a few members of a research team, but provide a much-needed distinctive perspective that serves to accelerate discovery. By presenting alternatives to current ways of working, astronomers can consider for themselves whether a combination of options will assist them at various stages of their research workflow. As an illustrative example of the evolution in the use of display environments, we look to the real-time, multi-wavelength Deeper Wider Faster (DWF) fast transient detection program <cit.>, where the Swinburne Discovery Wall – used as a TDW without encube – has also played an important role. As an international collaboration, DWF operations rely on a core team of co-located human inspectors with access to suitable visualisation software and hardware to support their decision-making processes during high-intensity, real-time observing campaigns. Through identification of potential fast or short-lived transient events, the DWF team determines whether there is a need to trigger immediate follow-up observations (e.g. target of opportunity spectroscopic observations with one of the Keck Observatory telescopes). 
Informed by a user performance study that investigated potential roles for TDWs in supporting inspection of very high pixel-count images by individuals or small teams <cit.>, a TDW became a necessary component of the display ecology used in the DWF project. The TDW replaced an initial inefficient visualisation workflow (used during pilot observations in 2015), where the research team used laptop screens and desktop monitors to inspect each of the 60 CCD frames (4096 × 2048 pixels) per field imaged with the Dark Energy Camera [DECam; <cit.>]. Over successive observing campaigns, as reported by <cit.>, the role and configuration of the TDW changed in response to user requirements and feedback. The visual inspection tasks performed by DWF team members were modified due to improvements in scientific understanding of the categories of fast transients that were being identified in real-time (and by extension those categories that could be analysed after the short-duration observing campaigns had concluded), along with enhancements to the automated pipelines <cit.>. In turn, improvements of the automated pipeline were directly informed by the knowledge the team acquired through using the TDW. At the time of writing, while no longer essential in the DWF context, the Swinburne Discovery Wall continues to play a role during real-time DWF campaigns. At critical stages of the development of DWF, however, the TDW was a solution that was “fit for purpose” and supported team-based visual discovery tasks that were not feasible to conduct with a standard desktop-bound approach. § CONCLUSIONS The expected growth in both the volume and velocity of data from future astronomical surveys necessitates a move away from serial workflows. The comparative visualisation approach we have investigated here via benchmarking and a VDAR evaluation is not intended to replace existing alternatives, but provides a demonstration of a complementary workflow that addresses some existing – and emerging – challenges in the size and scale of astronomical surveys. Within our case study context of extragalactic Hi surveys, we anticipate that both the short and longer term use of automated pipelines will retain a stage of visual inspection and classification. We suggest that this can be achieved more successfully, and more rapidly, using a method that is not about inspecting one object at a time. As we have shown here, the encube framework operating on a tiled display wall presents a compelling alternative mode for SIMD activities. We have considered tasks that are highly repetitive, yet may need to be performed on all sources detected within a survey. Examples here include quality control, candidate rejection, and morphological classification. In all cases, as identified through our VDAR studies, encube encouraged a sensemaking process <cit.> with a foraging phase and a sensemaking loop. The comparative nature of the display – comfortably visualising 180 spectral cubes at a time, using the Swinburne Discovery Wall configuration of ten 4K-UHD monitors – supports the rapid identification of features affecting multiple source cubelets while also presenting immediate access to both the spatial and spectral data for individual objects (through our use of volume rendering). A few hours interacting with data with encube on the Discovery Wall could replace weeks to months of work at the desktop – without diminishing the importance of the follow-up detailed analysis that the desktop supports. 
We estimate a throughput of 160-180 sources/hour could be inspected using the configuration that we assessed. Both encube and the Swinburne Discovery Wall are easily modifiable and scalable, in the sense that additional columns of monitors plus computers can be added to increase the number of sources displayed at a time. Implementation of our solution at another institution requires access to: the open-source software<cit.>; one or more Linux-based computers; (ideally) multiple monitors; and an appropriate network connection between the process and render nodes and the master node where the data set is stored. Customised visualisation and analysis approaches will evolve over time as surveys progress. They should be employed during those periods that are particularly labour-intensive, while assisting in the identification of additional processes that can be fully or partly automated. Finding the appropriate balance between human inspection and automated detection may help to maximise the overall discovery potential of a workflow <cit.>. § ACKNOWLEDGEMENTS We acknowledge the Wurundjeri People of the Kulin Nation, who are the Traditional Owners of the land on which the research activities were undertaken. Christopher Fluke is the SmartSat Cooperative Research Centre (CRC) Professorial Chair of space system real-time data fusion, integration and cognition. SmartSat CRC's activities are funded by the Australian Government's CRC Program. We acknowledge the generous support of the Eric Ormond Baker Charitable fund, which helped to establish the Discovery Wall and the remote observing facility at Swinburne University of Technology. We are extremely grateful to David Barnes and Amr Hassan for their technical advice and encouragement during early phases of this work, and to Kelley Hess for assisting with understanding the preliminary APERTIF Hi survey results. This paper made use of data from: WHISP, Westerbork Observations of Neutral Hydrogen in Irregular and Spiral Galaxies <cit.>; THINGS, The Hi Nearby Galaxy Survey <cit.>; and LVHIS, The Local Volume Hi Survey <cit.>. § IMPLEMENTATION NOTES §.§ Technical matters In this section, we highlight some additional features of the implementation of encube on the Swinburne Discovery Wall. One workstation is assigned the role of the Master Node, where the manager unit and interaction unit are deployed. All five workstations act as Process and Render nodes. Figure <ref> illustrates the connections and communication pathways between the Master node and each of the Process and Render nodes. Encube is launched from a Linux terminal on the Master node, which activates the program instance on each of the Process and Render nodes. Each program instance: (1) creates and opens a socket for communication with the Master node; (2) and makes application programming interface (API) calls in C code to the S2PLOT library for interactive graphical elements. Relevant content from the configuration file hosted on the Master node is passed to the Process and Render nodes. Once the socket connections have been established, the user interface is accessed through a Web browser accessing localhost on the Master node (see Figure <ref>). S2PLOT allows for the creation of independent regions of the graphics display window, referred to as panels. For simplicity, panels are presented in encube as a uniformly tiled matrix of rows and columns. 
The 3D geometry within an S2PLOT panel can be controlled by selecting the panel and using the attached mouse to rotate the data cube or the keyboard to zoom in or out. As each display column of the Discovery Wall is independent, it is possible to use the keyboard and mouse associated with a column in order to work with a local subset of data (see Figure <ref>). Alternatively, the location, orientation and view direction of the virtual camera can be set for each panel using an API call. This method is used when interacting with the user interface on the Master node, so that the virtual camera is updated simultaneously for all of the panels. Each Process and Render node requests and loads relevant data files from the Master node, using a drive that is accessible using the network file system (NFS). Once each Process and Render node has loaded the required data, the spectral cube is visualised using 3D texture-based volume rendering. Here, an S2PLOT callback function is associated with each panel, and once per refresh cycle, the volume rendering is generated based on the current virtual camera position. 3D texture-based rendering provides a compromise between lower-fidelity two-dimensional texture image stacks (also implemented in S2PLOT) and computationally-demanding ray-shooting. For simplicity of operation, two different colour-mapping options are provided: intensity-based, whereby a heat-style colour map is assigned from the minimum to the maximum voxel value for each spectral cube, and velocity-based mapping <cit.>. Here, the velocity data is utilised along with the voxel values, in order to provide cues as to whether neutral Hi gas is blue-shifted or red-shifted along the spectral axis with respect to the centre of the cube (assumed to be equivalent to the centre-of-mass for most systems). While completing the benchmarking and VDAR evaluation activities (described in Sections <ref> and <ref>), we chose not to invest development time in cosmetic changes to the encube user interface. In particular, the world in miniature component of the interface (see Figure <ref>) was not ideal when the number of spectral cubes visualised exceeded 40. This temporarily limits the ability to use some of the features of encube, such as the ability to select and swap cubes between any of the displays in real-time. However, the overall functionality and performance of the encube process and render components is not impeded. In the implementation of encube that we benchmarked, there were some additional processing steps performed that add to the time taken to load each spectral cube. These comprise several independent complete passes through the spectral cube to calculate statistical parameters, compare actual data values with those recorded in the spectral cube metadata, and generate a histogram of data values for each spectral cube. Each of these processes has algorithmic linear scaling depending only on the number of voxels in the spectral cube. Consequently, they introduce a multiplicative factor on the time to load all of the spectral cubes. Such pre-computation is a design choice that allows the CPU memory to be freed once data is loaded onto a GPU. Accessing these values has O(1) complexity later during interactive analysis. 
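The per-cube quantities cached by this pre-computation step might look as follows in a Python sketch using numpy and astropy; these libraries are assumptions on our part, as the encube implementation is written in C. The point of the design is that each pass is linear in the number of voxels, while later look-ups of the cached values are O(1).

```python
import numpy as np
from astropy.io import fits   # assumption: astropy stands in for the C/FITS routines used by encube

def precompute_statistics(path, n_bins=256):
    """Pre-compute simple per-cube statistics and a histogram of data values.

    Each operation scales linearly with the number of voxels; the returned
    dictionary is cached so that later accesses cost O(1)."""
    with fits.open(path, memmap=True) as hdul:
        data = np.asarray(hdul[0].data, dtype=np.float32)   # assumed: cube in the primary HDU
    finite = data[np.isfinite(data)]
    stats = {
        "n_vox": data.size,
        "min": float(finite.min()),
        "max": float(finite.max()),
        "mean": float(finite.mean()),
        "std": float(finite.std()),
    }
    counts, edges = np.histogram(finite, bins=n_bins,
                                 range=(stats["min"], stats["max"]))
    stats["histogram"] = (counts, edges)
    return stats
```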
§.§ Future enhancements While working with encube during the VDAR evaluation, we identified several additional features or enhancements that could extend the framework's suitability for comparative visual analysis of large-scale extragalactic Hi surveys: * Add an on-screen scale indicator. As all spectral cubes are scaled to a unit cube for convenience, the physical size of individual objects is lost. * Within the user interface, allow selection or sorting of the source list by any metadata attribute, such as size, total Hi mass, or distance. * Access and display detailed metadata of a selected object or set of objects. During the present work, a trivial modification was made to toggle visibility of the name of each object within its S2PLOT display panel. * Improve the creation of the on-screen configuration, allowing more flexibility in how data is assigned to the available display space. For example, a non-uniform arrangement of panels per column could allow individual spectral cubes to be visualised at increased levels of detail, or cubes with different sizes (e.g. spatial pixel coverage or rest-frame physical dimensions) to be presented at the same scale, as demonstrated in Figure <ref>. * Include support for additional data types to be loaded and displayed, including spectral cubes from different wavelength regimes or observing modes (e.g. optical integral field units), overlay of two-dimensional images, or visualisation of one-dimensional spectra. * Provide a mechanism by which annotations could be recorded regarding individual sources, preferably through the use of speech-to-text capture and conversion. * Support interactive masking of channels via the user interface for selected subsets of cubelets, so that the issues identified with the WHISP sample could have been resolved in real-time. Such modifications could then be embedded into the dataset, by exporting the modified spectral cubes for future automated, or human, analysis (a minimal sketch of such a channel-masking step is given after this list).
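Relating to the final enhancement above, and to the fix applied to the WHISP cubes in Section <ref>, the channel-masking operation itself is straightforward. We used a C-language program built on CFITSIO for the WHISP cubes; the sketch below shows an equivalent approach using astropy (an assumption for illustration, not the code actually used), assuming the spectral axis is the first axis of the array as loaded.

```python
from astropy.io import fits   # assumption: astropy used here in place of the C/CFITSIO program

def mask_edge_channels(path_in, path_out, n_edge=8):
    """Write a copy of a spectral cube with the first and last n_edge
    spectral channels set to zero (the fix applied to the WHISP cubes)."""
    with fits.open(path_in) as hdul:
        cube = hdul[0].data          # assumed axis order: (channel, y, x)
        cube[:n_edge, :, :] = 0.0
        cube[-n_edge:, :, :] = 0.0
        hdul.writeto(path_out, overwrite=True)

# Example usage: mask_edge_channels("whisp_raw.fits", "whisp_masked.fits")
```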
http://arxiv.org/abs/2307.03969v2
20230708125936
Impact of noise on inverse design: The case of NMR spectra matching
[ "Dominik Lemm", "Guido Falk von Rudorff", "O. Anatole von Lilienfeld" ]
physics.chem-ph
[ "physics.chem-ph" ]
University of Vienna, Faculty of Physics, Kolingasse 14-16, AT-1090 Vienna, Austria University of Vienna, Vienna Doctoral School in Physics, Boltzmanngasse 5, AT-1090 Vienna, Austria University Kassel, Department of Chemistry, Heinrich-Plett-Str.40, 34132 Kassel, Germany [email protected] Departments of Chemistry, Materials Science and Engineering, and Physics, University of Toronto, St. George Campus, Toronto, ON, Canada Vector Institute for Artificial Intelligence, Toronto, ON, M5S 1M1, Canada Machine Learning Group, Technische Universität Berlin and Institute for the Foundations of Learning and Data, 10587 Berlin, Germany Despite its fundamental importance and widespread use for assessing reaction success in organic chemistry, deducing chemical structures from nuclear magnetic resonance (NMR) measurements has remained largely manual and time-consuming. To keep up with the accelerated pace of automated synthesis in self-driving laboratory settings, robust computational algorithms are needed to rapidly perform structure elucidations. We analyse the effectiveness of solving the NMR spectra matching task encountered in this inverse structure elucidation problem by systematically constraining the chemical search space, and correspondingly reducing the ambiguity of the matching task. Numerical evidence collected for the twenty most common stoichiometries in the QM9-NMR database indicates systematic trends of more permissible machine learning prediction errors in constrained search spaces. Results suggest that compounds with multiple heteroatoms are harder to characterize than others. Extending QM9 by ∼10 times more constitutional isomers with 3D structures generated by Surge, ETKDG and CREST, we used ML models of chemical shifts trained on the QM9-NMR data to test the spectra matching algorithms. Combining both ^13C and ^1H shifts in the matching process permits machine learning prediction errors roughly twice as large as for matching based on ^13C shifts alone. Performance curves demonstrate that reducing ambiguity and search space can decrease machine learning training data needs by orders of magnitude. Impact of noise on inverse design: The case of NMR spectra matching O. Anatole von Lilienfeld August 12, 2023 =================================================================== § INTRODUCTION Current development times of novel molecular materials can span several decades from discovery to commercialization. In order for humanity to react to global challenges, the digitization<cit.> of molecular and materials discovery aims to accelerate the process to a few years. Long experiment times severely limit the coverage of the vastness of chemical space, making the development of self-driving laboratories for autonomous robotics experimentation crucial for high-throughput synthesis of novel compounds (Fig. <ref> a))<cit.>. To keep the pace of automated synthesis, fast and reliable characterization of reaction products through spectroscopic methods is required, an often manual, time-intensive and possibly error-prone task. Among the most common methods to elucidate the structure of reaction products are nuclear magnetic resonance (NMR) experiments.<cit.> Through relaxation of nuclear spins after alignment in a magnetic field, an NMR spectrum, characteristic of local atomic environments of a compound, i.e. functional groups, can be recorded. In particular, ^1H and ^13C NMR experiments are routinely used by experimental chemists to identify the chemical structure or relevant groups just from the spectrum. 
For larger compounds, however, the inverse problem of mapping spectrum to structure becomes increasingly difficult, ultimately requiring NMR of additional nuclei, stronger magnets, or more advanced two-dimensional NMR experiments<cit.>. Computer-assisted structure elucidation algorithms aim to iteratively automate the structure identification process<cit.>. Current workflows include repeated predictions of chemical shifts for candidate structure inputs through empirical or ab initio methods<cit.>. Albeit accurate even in the condensed phase through the use of plane waves<cit.> or QM/MM setups<cit.>, the cost of density functional theory (DFT) calculations severely limits the number of candidate structures that can be tested, leaving the identification of unknown reaction products out of reach for all but the smallest search spaces. Data-driven machine learning models leveraging experimental or theoretical NMR databases<cit.> provide orders of magnitude of speedup over ab initio calculations, reaching 1-2 ppm mean absolute error (MAE) w.r.t. experiment or theory, respectively<cit.>. However, while the stoichiometry of the reaction product is usually known, e.g. through prior mass spectrometry experiments, the number of possible constitutional isomers exhibits NP-hard scaling with the number of atoms, quickly spanning millions of valid molecular graphs already for molecules of modest size (Fig.<ref> b)). As such, the inverse problem of inferring the molecular structure from an NMR spectrum still poses a major challenge even for rapid solvers. Recent machine learning approaches tackle the inverse problem using a combination of graph generation and subsequent chemical shift predictions for candidate ranking<cit.>. First explored by Jonas<cit.>, a Top-1 ranking with 57% reconstruction success rate was achieved using deep imitation learning to predict bonds of molecular graphs. Sridharan et al.<cit.> used online Monte Carlo tree search to build molecular graphs, resulting in a similar Top-1 ranking of 57.2%. Huang et al.<cit.> relied on substructure predictions from which complete graphs can be constructed, reaching 67.4% Top-1 accuracy by ranking substructure profiles instead of shifts. A commonality between all algorithms is the subsequent ranking of candidates using spectra matching or other heuristics. Consequently, even though the correct query compound could be detected early, similar candidates might be ranked higher, making the ranking process as critical as the candidate search itself. In this work, we analyse the effectiveness of the NMR spectra matching task encountered in the inverse structure elucidation problem. As stagnating improvements<cit.> in chemical shift predictions due to limited public NMR data aggravate candidate rankings, results suggest that both the prediction error of machine learning models and the number of possible candidates are crucial factors for elucidation success. By systematically controlling the size of the chemical search space and the accuracy of chemical shifts, we find that higher error levels become permissible in constrained search spaces. Moreover, results indicate that increasing the uniqueness by including both ^13C and ^1H shifts in the matching process, rather than relying on a single type of shift, significantly reduces ambiguity and enhances error tolerance.
To evaluate the spectra matching task throughout chemical compound space, we systematically control the accuracy of 1D ^13C and ^1H chemical shifts of the 20 most common stoichiometries in QM9-NMR<cit.> by applying distinct levels of Gaussian white noise. Note that while we focus on DFT-based 1D NMR in this work, future studies could include experimental data and 2D NMR information. Comparisons amongst stoichiometries suggest that chemical spaces with increasing numbers of heteroatoms and constitutional isomers are harder to characterize than others. To test the spectra matching method on a large search space, we extended QM9-NMR to 56k C_7O_2H_10 constitutional isomers. Controlling the chemical shift accuracy through machine learning models trained at increasing training set sizes, performance curves again indicate a trade-off between search space and accuracy. Hence, as less accurate shift predictions become useful, results show that machine learning training data needs can be reduced by multiple orders of magnitude. § THEORY & METHODS §.§ NMR Spectra Matching Consider a query ^13C or ^1H spectrum with a set of N possible candidate constitutional isomer spectra. We chose the squared Euclidean distance as a metric to rank candidate spectra against the query spectrum (see SI Fig.3 for a comparison against other metrics): d(δ_q, δ_i) = ∑_j=1^n (δ_q,j - δ_i,j)^2, with δ being a sorted spectrum of n chemical shifts (^13C or ^1H), q being the query, i being the i-th of N candidates, and j being the j-th chemical shift in a spectrum, respectively. To use both ^13C and ^1H shifts simultaneously for spectra matching, a total distance can be calculated as follows: d_combined = d(δ^13C_q, δ^13C_i) + γ· d(δ^1H_q, δ^1H_i), with γ=64 being a scaling factor determined via cross-validation (see SI Fig.1) to ensure similar weighting. Final rankings are obtained by sorting all candidates by distance. The Top-1 accuracy is calculated as the proportion of queries correctly ranked as the closest spectrum. §.§ Elucidation performance curves To analyse the spectra matching elucidation accuracy, we systematically control the number of possible candidates N and the accuracy of chemical shifts, respectively. For each constitutional isomer set, we choose 10% as queries and 90% as search pool, respectively. Next, we randomly sample N spectra from the search pool, including the query spectrum. Each sample size is drawn ten times and the Top-1 accuracy averaged across all runs. To control the accuracy of chemical shifts, we apply Gaussian white noise (up to 1 or 10 ppm σ for ^1H and ^13C, respectively) or use the machine learning error as a function of training set size (cf. SI Fig.5 for learning curves). For each N and chemical shift accuracy, results are presented as elucidation performance curves (cf. Fig.<ref> a-b)), showing the elucidation success as a function of chemical shift accuracy in terms of mean absolute error (MAE). §.§ Chemical Shift Prediction We relied on kernel ridge regression (KRR) for machine learning ^13C and ^1H chemical shifts as presented in Ref.<cit.>. We use a Laplacian kernel and the local atomic Faber-Christensen-Huang-Lilienfeld (FCHL19<cit.>) representation with a radial cutoff<cit.> of 4 Å. The kernel width and regularization coefficient have been determined through 10-fold cross-validation on a subset of 10'000 chemical shifts of the training set.
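To make the ranking procedure concrete, the following minimal Python/NumPy sketch implements the single-nucleus distance of Eq. (1), the combined distance of Eq. (2) and the resulting Top-1 accuracy; the function and variable names are ours and not taken from the study's published code.

import numpy as np

def d_single(delta_q, delta_i):
    # Squared Euclidean distance between two sorted shift vectors (Eq. 1)
    q, c = np.sort(np.asarray(delta_q)), np.sort(np.asarray(delta_i))
    return float(np.sum((q - c) ** 2))

def d_combined(c13_q, c13_i, h1_q, h1_i, gamma=64.0):
    # Combined 13C/1H distance (Eq. 2); gamma balances the two shift scales
    return d_single(c13_q, c13_i) + gamma * d_single(h1_q, h1_i)

def top1_accuracy(query_spectra, candidate_pools, true_indices):
    # Fraction of queries whose true isomer is ranked closest by the distance
    hits = 0
    for q, pool, true_idx in zip(query_spectra, candidate_pools, true_indices):
        distances = [d_single(q, cand) for cand in pool]
        hits += int(np.argmin(distances) == true_idx)
    return hits / len(query_spectra)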
§.§ Data The QM9-NMR<cit.> dataset was used in this work, containing 130'831 small molecules with up to nine heavy atoms (CONF) and chemical shieldings at the mPW1PW91/6-311+G(2d,p) level of theory. We used the 20 most common stoichiometries (Fig.<ref> b)), each having a minimum of 1.7k constitutional isomers available in the dataset. To extend the QM9-NMR C_7O_2H_10 constitutional isomer space, we generated 54'641 SMILES using Surge<cit.>. 3D structures have been generated using ETKDG<cit.> and CREST<cit.> with GFN2-xTB/GFN-FF. Adding the structures to QM9, a total pool size of 56.95k C_7O_2H_10 isomers was obtained. For the training of chemical shift machine learning models, we selected C_8OH_12, C_8OH_10, C_8OH_14, C_7O_2H_8 and C_7O_2H_12 constitutional isomers, yielding a total of 143k ^13C and 214k ^1H training points, respectively. § RESULTS & DISCUSSION §.§ Spectra matching accuracy with synthetic noise To analyse the influence of noise and number of candidates on the elucidation success, we applied Gaussian noise to ^13C and ^1H shifts of C_7O_2H_10, C_5N_3OH_7 and C_8OH_14 constitutional isomers, respectively. Fig.<ref> a-b) depicts a sigmoidally shaped trend of Top-1 elucidation accuracies at increasing candidate pool sizes N_QM9 as a function of mean absolute error (MAE). Note that increasing the maximum candidate pool size leads to an offset of the trend towards less permissible errors. A possible explanation is the correlation of the density of chemical space with increasing numbers of candidate spectra N<cit.>. As shift predictions need to become more accurate, limiting N through prior knowledge of the chemical space could be beneficial. Similar findings have been reported by Sridharan et al.<cit.>, noting that brute-force enumerations of chemical space lead to worse rankings than constrained graph generation. Note that while the trends in ^13C and ^1H elucidation are similar, less error is permissible when using ^1H shifts. To further reduce the ambiguity, we include both ^13C and ^1H shifts in the matching problem as per Eq.<ref>. Results suggest 50% and ∼150% more permissible ^13C and ^1H errors when both spectra are considered in the matching process (Fig.<ref> c)). Similar to how chemists solve the elucidation problem, the inclusion of more distinct properties increases the uniqueness and can improve the elucidation success. §.§ Extrapolating the search space Due to the limited number of constitutional isomers in databases compared to the number of possible graphs faced during inverse design (Fig.<ref> b)), assessing the chemical shift accuracy required for successful elucidation is severely limited. As such, we extrapolate elucidation performance curves to obtain estimates of chemical shift accuracies for candidate pool sizes larger than QM9. We fit each elucidation performance curve (Fig.<ref> a-b)), respectively, using a smoothly broken power law function: f(x) = (1+ (x/x_b)^d)^α with x_b controlling the upper bend and offset, d changing the curvature and α changing the tilt of the function (see SI Fig.2), respectively. The parameters of Eq.<ref> as a function of N can again be fitted using a power law function (see SI Fig.2) and extrapolated to the total number of graphs N_Surge, respectively. Results of the extrapolation (Fig.<ref> a-b), dashed) indicate significant differences in elucidation efficiency among stoichiometries. For instance, C_8OH_14 queries are potentially easier to elucidate than C_5N_3OH_7 structures. Possible reasons are the limited number of C_8OH_14 graphs compared to millions of C_5N_3OH_7 isomers.
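As an illustration of the extrapolation step, the smoothly broken power law above can be fitted to a measured performance curve and inverted numerically to read off the permissible MAE at a target elucidation success. The sketch below assumes Top-1 accuracies have already been collected on a grid of MAE values and uses SciPy's curve_fit; the secondary power-law fit of the parameters as a function of N is omitted for brevity.

import numpy as np
from scipy.optimize import curve_fit

def broken_power_law(x, x_b, d, alpha):
    # Smoothly broken power law: f(x) = (1 + (x/x_b)^d)^alpha
    return (1.0 + (x / x_b) ** d) ** alpha

def fit_performance_curve(mae, top1, p0=(1.0, 2.0, -1.0)):
    # Fit one elucidation performance curve (Top-1 accuracy versus shift MAE)
    popt, _ = curve_fit(broken_power_law, mae, top1, p0=p0, maxfev=20000)
    return popt

def permissible_mae(popt, target=0.95):
    # Largest MAE for which the fitted curve still reaches the target accuracy
    grid = np.logspace(-3, 2, 4000)
    ok = broken_power_law(grid, *popt) >= target
    return grid[ok].max() if ok.any() else float("nan")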
Moreover, the number of heteroatoms of the C_5N_3OH_7 stoichiometry might hamper the characterization when relying only on ^13C or ^1H shifts, respectively. Hence, to solve the inverse structure elucidation problem using experimental data of compounds larger than QM9, reducing ambiguities by including both ^13C and ^1H shifts, as well as reducing the candidate space, is critical for elucidation success. §.§ Trends in chemical space To analyse the elucidation efficiency throughout chemical space, we applied the Gaussian noise and extrapolation procedure to the 20 most common stoichiometries in QM9 (Fig.<ref> b)). Fig.<ref> a) shows the MAE required for 95% elucidation success as a function of N_Surge. Results suggest that less error is permissible for stoichiometries with large N_Surge and fewer carbon atoms. As such, using only ^13C shifts might not be sufficient to fully characterize the compound. Again, similar to how chemists use multiple NMR spectra to deduce chemical structures, additional information such as ^1H shifts is beneficial to extend the information content. In Fig. <ref> b), the error permissiveness of spectra matching using only ^13C shifts (see SI Fig.4 for ^1H) versus combining both ^13C and ^1H is compared, revealing a linear trend between both. Note that the C_7NOH_7 stoichiometry shows the smallest benefit from adding additional information. Interestingly, a hierarchy for C_7NOH_X stoichiometries of different degrees of unsaturation is visible, indicating an inverse correlation between the number of hydrogens and the MAE (Fig. <ref> b) green). Similar hierarchies are also observed for other stoichiometries such as C_7O_2H_X and C_8OH_X (Fig. <ref> b) blue and orange). On average, the combination of ^13C and ^1H for spectra matching increases the error permissiveness of ^13C and ^1H by 85% and 261% (see SI Fig.4), respectively. §.§ Comparison to machine learned shift predictions To test the elucidation performance using machine learning predictions, we trained ^13C and ^1H KRR models at increasing training set sizes (see SI Fig.5 for learning curves) and predicted chemical shifts of 56k C_7O_2H_10 constitutional isomers. Results again show similar trends as observed with Gaussian noise (Fig.<ref> a-b)), however, they indicate more permissive accuracy thresholds. For instance, KRR predictions at 2 ppm MAE can identify 64% of queries rather than only 17% as suggested by the Gaussian noise experiment. The difference could be explained by the systematic, non-uniform nature of the QM9<cit.> chemical space, influencing the shape and extrapolation of the elucidation performance curves in Fig.<ref>. Moreover, Gaussian noise is applied to all shifts at random, compared to possibly more systematic machine learning predictions. Note that the trade-off between error and N is consistent and that the exact parameters will depend on the machine learning model and the finite sampling of constitutional isomer space. To model possible experimental noise on query spectra, we apply Gaussian noise to query spectra and evaluate the elucidation performance of the best performing machine learning model (see insets in Fig.<ref> a-b)). Results indicate a halving of elucidation accuracy when the query spectrum contains up to 2 ppm MAE_Q in ^13C and 0.15 ppm MAE_Q in ^1H error, respectively. Thus, in the presence of experimental measurement noise, even higher prediction accuracies might be necessary. Combining both ^13C and ^1H spectra for matching improves the elucidation performance to up to 90% (Fig.<ref> e)).
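The shift models themselves follow the standard kernel ridge regression recipe described in the Methods section. A generic NumPy version with a Laplacian kernel acting on precomputed atomic representations (e.g. FCHL19 vectors computed elsewhere) might look as follows; this is a simplified stand-in for illustration, not the exact implementation behind the reported numbers.

import numpy as np

def laplacian_kernel(A, B, sigma):
    # K_ij = exp(-||a_i - b_j||_1 / sigma)
    l1 = np.abs(A[:, None, :] - B[None, :, :]).sum(axis=-1)
    return np.exp(-l1 / sigma)

def krr_train(X, y, sigma, lam=1e-8):
    # Solve (K + lam * I) alpha = y for the regression coefficients
    K = laplacian_kernel(X, X, sigma)
    K[np.diag_indices_from(K)] += lam
    return np.linalg.solve(K, y)

def krr_predict(X_new, X_train, alpha, sigma):
    # Predicted chemical shifts for new atomic environments
    return laplacian_kernel(X_new, X_train, sigma) @ alpha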
Again, the combination of spectra for elucidation highlights the effectiveness of reducing the ambiguity of the matching problem by including additional properties. Investigating potential strategies to reduce the constitutional isomer search space, we constrained N based on functional groups (see SI Table 1). Randomly selecting one or two functional groups present in each query, N can be reduced by 50% and 62% on average (see Fig.<ref> d) inset for distributions), respectively. Results in Fig.<ref> c-d) indicate an increase of the elucidation accuracy by 5% in ^13C and up to 10% for ^1H, respectively, in agreement with the elucidation performance in Fig.<ref> a-b). Note that the knowledge of two functional groups only led to marginal improvements. However, fragmentation could be more beneficial for compounds larger than those present in QM9<cit.>, as reported by Yao et al.<cit.>. Using both ^13C and ^1H shifts on the reduced search space only leads to marginal improvements of 0.5% over the results of the full search space. §.§ Balancing search space and accuracy We use performance curves to analyse the relationship between the elucidation performance of C_7O_2H_10 queries, machine learning prediction errors and candidate pool sizes N. The systematic decay of the performance curves (Fig.<ref> red and blue) again demonstrates that constraining N with prior knowledge allows for less accurate shift predictions to be applicable. Extrapolating the performance curves indicates a machine learning ^13C MAE of 0.93 ppm to correctly rank 90% of queries out of 56k possible candidates (Fig.<ref> red), 0.02 ppm lower than suggested by Gaussian noise. To reach an MAE of 0.93 ppm, four million training instances are required (Fig.<ref> orange). Using both ^13C and ^1H shifts requires two orders of magnitude less training data (Fig.<ref> blue). As such, facing expensive experimental measurements and ab initio calculations, more effective inverse structure elucidation could be achieved by balancing machine learning data needs through reduced search spaces and the incorporation of additional properties. § CONCLUSION We have presented an analysis of the effectiveness of the NMR spectra matching task encountered in the inverse structure elucidation problem. By systematically controlling the predictive accuracy of ^13C and ^1H chemical shifts, we found consistent trends throughout chemical compound space, suggesting that higher errors become permissible as the number of possible candidates decreases. Note that while we relied on 1D ab initio NMR data, a similar analysis could be performed using 1D or 2D experimental spectra. Applications to the most common constitutional isomers in QM9 highlight that chemical spaces with many heteroatoms are harder to characterize when relying only on a single type of chemical shift. Using both ^13C and ^1H chemical shifts increases the error permissiveness by 85% and 261% on average, respectively. Machine learning predictions for 56k C_7O_2H_10 compounds showed that using both ^13C and ^1H shifts increased elucidation success to 90%, compared to only 64% and 36% when using ^13C or ^1H alone, respectively. The usefulness of the analysis is expressed via performance curves, showing that training demands can be reduced by orders of magnitude compared to relying on specific shifts alone. We believe that as the accuracy of machine learning models to distinguish spectra is limited, constrained search spaces or the inclusion of more distinct properties are necessary to improve candidate rankings.
Rather than solely relying on more accurate models, future approaches could include explicit knowledge of chemical reactions, functional groups or data from mass spectrometry, infrared- or Raman spectroscopy<cit.>, respectively. Finally, explicitly accounting for atomic similarities and chemical shift uncertainties via the DP5 probability might further increase the confidence in structure assignments<cit.>. § ACKNOWLEDGEMENT O.A.v.L. has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 772834). O.A.v.L. has received support as the Ed Clark Chair of Advanced Materials and as a Canada CIFAR AI Chair. Icons in Fig.<ref> from DBCLS, Openclipart and Simon Dürr from bioicons.com under CC-BY 4.0 and CC0, respectively. § DATA & CODE AVAILABILITY The QM9-NMR dataset is openly available at <https://moldis.tifrh.res.in/data/QM9NMR>. The code and additional data used in this study is available at <https://doi.org/10.5281/zenodo.8126380>. § CONFLICT OF INTEREST The authors have no conflict of interest.
http://arxiv.org/abs/2307.04510v1
20230710121242
An analysis of least squares regression and neural networks approximation for the pricing of swing options
[ "Christian Yeo" ]
q-fin.MF
[ "q-fin.MF" ]
Christian Yeo^1,2 [1]Sorbonne Université, Laboratoire de Probabilités, Statistique et Modélisation, UMR 8001, case 158, 4, pl. Jussieu, F-75252 Paris Cedex 5, France [2]Engie Global Markets, 1 place Samuel Champlain, 92400 Courbevoie, France An analysis of least squares regression and neural networks approximation for the pricing of swing options ========================================================================================================== Least Squares regression was first introduced for the pricing of American-style options, but it has since been expanded to include swing options pricing. The swing options price may be viewed as a solution to a Backward Dynamic Programming Principle, which involves a conditional expectation known as the continuation value. The approximation of the continuation value using least squares regression involves two levels of approximation. First, the continuation value is replaced by an orthogonal projection over a subspace spanned by a finite set of m squared-integrable functions (regression functions), yielding a first approximation V^m of the swing value function. In this paper, we prove that, with well-chosen regression functions, V^m converges to the actual swing price V as m → + ∞. A similar result is proved when the regression functions are replaced by neural networks. For both methods (least squares or neural networks), we analyze the second level of approximation involving the practical computation of the swing price using Monte Carlo simulations and yielding an approximation V^m, N (where N denotes the Monte Carlo sample size). In particular, we prove that V^m, N→ V^m as N → + ∞ for both methods, using a Hilbert basis in the least squares regression. Besides, a convergence rate of order 𝒪(1/√(N)) is proved in the least squares case. Several convergence results in this paper are based on the continuity of the swing value function with respect to cumulative consumption, which is also proved in the paper and had not yet been explored in the literature, to the best of our knowledge. Keywords - Swing options, stochastic control, least squares regression, convergence analysis, neural networks approximation, dynamic programming equation. § INTRODUCTION Swing contracts <cit.> are commonly used in commodity derivatives trading to manage commodity supply. These contracts allow the holder to purchase amounts of energy on specific dates (called exercise dates), subject to constraints. The pricing <cit.> of such a contract is a challenging problem that involves finding a vector that represents the amounts of energy purchased through the contract, while maximizing the gained value. This problem is doubly-constrained (exercise date constraints and volume constraints) and its pricing has been addressed using two groups of methods in the literature. One group concerns methods that are based on the Backward Dynamic Programming Principle (BDPP) <cit.>, which determines the swing price backwardly from the expiry of the contract until the pricing date. In the BDPP-based approach, at each exercise date, the swing value is determined as the maximum of the current cash flows plus the continuation value, which is the (conditional) expected value of future cash flows. To compute the continuation value, nested simulations may be used, but this can be time-consuming.
Alternatively, an orthogonal projection over a vector space spanned by a finite set of squared-integrable functions may be used, based on the idea of the least squares regression method introduced by Longstaff and Schwartz <cit.>. This method was initially introduced for the pricing of American-style options <cit.> and had been then used for some stochastic control problems <cit.> and especially in the context of swing contract pricing <cit.>. Despite being widely used by practitioners, in the context of swing pricing, this method has received little study in terms of convergence. The paper <cit.> analyzes the convergence of general regression methods in the context of stochastic control problems. While swing contracts pricing is, by nature, a stochastic control problem, such contracts involves specificities whose analysis goes beyond the scope covered in the paper <cit.>. Note that this paper focuses on the pricing of swing contracts within the firm constraints framework, where the contract holder cannot violate volume constraints. In this framework, the set of admissible controls at each exercise date depends on the cumulative consumption up to that date. Additionally, in the BDPP-based approaches, the optimal control at one exercise date depends on the estimated value of the swing contract at the next exercise date, which in turns is defined as a supremum. Thus, the error propagation through the BDPP meets uniform convergence issue. Taking into account the latter fact, to meet the framework studied in <cit.>, cumulative consumption may need to be included as a state variable along with the Markov process driving the underlying asset price. However, this can be challenging to implement as it requires to know the joint distribution of the underlying asset price and the cumulative consumption. This difficulty is perceptible in <cit.> where, in the context of storage pricing (contracts whose pricing is closed to that of swing contracts), the authors have used uniform sampling for cumulative consumption as a proxy. Furthermore, in <cit.> strong assumptions had been made, such as the boundedness of regression functions, which do not hold in practice. Therefore, in this paper, we aim to analyze the convergence of least squares regression for the specific problem of swing options pricing. Besides, we do not restrict ourselves to least squares method and analyze an alternative method which consist in approximating the continuation value, not by an orthogonal projection but, using neural networks. Both methods for approximating the swing contract price are analyzed in a common framework. To achieve this, we proceed as in previous works <cit.> by proving some convergence results into two main steps. We first replace the continuation value by either an orthogonal projection over a well-chosen basis of regression functions or by neural network. We demonstrate that the resulting swing value function, as an approximation of the actual one, converges towards the actual one as the number of functions in the regression basis or the number of units per hidden layer (in the neural network) increases. Furthermore, practically, a Monte Carlo simulation has to be performed. This is needed to compute the orthogonal projection coordinates in the least squares method; which generally has no closed form while it serves as input for training the neural network. This leads to a second level of approximation, a Monte Carlo approximation. 
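For concreteness, the set distance, Hausdorff metric and Gram determinant introduced in the notation above can be illustrated for finite point sets and the Euclidean inner product with a few lines of NumPy; this is only an illustrative aid, the text uses these notions for general metric and pre-Hilbert spaces.

import numpy as np

def dist_point_set(x, A):
    # d(x, A) = inf over y in A of d(x, y), for a finite set A of points
    return min(np.linalg.norm(x - y) for y in A)

def hausdorff(A, B):
    # d_H(A, B) = max( sup_{a in A} d(a, B), sup_{b in B} d(b, A) )
    return max(max(dist_point_set(a, B) for a in A),
               max(dist_point_set(b, A) for b in B))

def gram_determinant(vectors):
    # G(x_1, ..., x_n) = det of the matrix of pairwise inner products <x_i, x_j>
    X = np.asarray(vectors, dtype=float)
    return float(np.linalg.det(X @ X.T))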
In this paper, we prove that, under some assumptions, this second approximation converges to the first one for both studied methods. Moreover, in the least squares method, a rate of order 𝒪(N^-1/2) (N being the size of the Monte Carlo sample) of the latter convergence is proved. Several results in this paper depend on the continuity of the swing value function with respect to the cumulative consumption, which is a crucial result that has not yet been proved for the best of our knowledge. We establish this continuity result using Berge's maximum theorem, which is commonly used to analyze the regularity of optimal control and optimal value functions in parametric optimization contexts. Additionally, proving the continuity of the value function with respect to the cumulative consumption also serves as another proof of the existence of an optimal control, which was previously demonstrated differently in <cit.>. §.§ Organization of the paper Section <ref>. provides general background on swing contracts. We thoroughly discuss its pricing and show one of the main results concerning the continuity of the swing value function. Section <ref>. We describe how to approximate the swing value function using either least squares regression or neural networks and fix notations and assumptions that will be used in the sequel. Section <ref>. We state the main convergence results of this paper as well as some other technical results concerning some concentration inequalities. §.§ Notations We endow the space ℝ^d with the Euclidean norm denoted by |·| and the space of ℝ^d-valued and squared-integrable random variables 𝕃^2_ℝ^d(ℙ) with the canonical norm || · ||_2. ⟨·, ·⟩ will denote Euclidean inner-product of ℝ^d. We denote by |·|_sup the sup-norm on functional spaces. 𝕄_d,q(ℝ) will represent the space of matrix with d rows, q columns and with real coefficients. When there is no ambiguity, we will consider |·| as the Frobenius norm; the space 𝕄_d,q(ℝ) will be equipped with that norm. For m ≥ 2, we denote by 𝔾L_m(ℝ) the subset of 𝕄_m,m(ℝ) made of non-singular matrices. For a metric space (E, d) and a subset A ⊂ E, we define the distance between x ∈ E and the set A by, d(x, A) = y ∈ Ainf d(x,y). We denote by d_H(A, B) the Hausdorff metric between two closed, bounded and non-empty sets A and B (equipped with a metric d) which is defined by d_H(A, B) = max(a ∈ Asup d(a, B), b ∈ Bsup d(b, A)). Let E be a real pre-Hilbert space equipped with a inner-product ⟨·, ·⟩ and consider x_1, …, x_n some vectors of E. The Gram matrix associated to x_1, …, x_n is the symmetric non-negative matrix whose entries are (⟨ x_i, x_j ⟩)_1 ≤ i, j ≤ n. The determinant of the latter matrix, the Gram determinant, will be denoted by G(x_1, …, x_n) := (⟨ x_i, x_j ⟩)_1 ≤ i, j ≤ n. § SWING CONTRACT In the first section, we establish the theoretical foundation for swing contracts and their pricing using the Backward Dynamic Programming Principle. Additionally, we prove some theoretical properties concerning the set of optimal controls that is involved in the latter principle. §.§ Description Swing option allows its holder to buy amounts of energy q_k at times t_k , k = 0, ...,n-1 (called exercise dates) until the contract maturity t_n = T. At each exercise date t_k, the purchase price (or strike price) is denoted K_k and can be constant (i.e K_k = K, k = 0,...,n-1) or indexed on a formula. In the indexed strike setting, the strike price is calculated as an average of observed commodity prices over a certain period. 
In this paper, we only consider the fixed strike price case. However the indexed strike price case can be treated likewise. In addition, swing option gives its holder a flexibility on the amount of energy he is allowed to purchase through some (firm) constraints: * Local constraints: at each exercise time t_k, the holder of the swing contract has to buy at least q_min and at most q_max i.e, q_min≤ q_k≤ q_max, 0 ≤ k ≤ n-1. * Global constraints: at maturity, the cumulative purchased volume must be not lower than Q_min and not greater than Q_max i.e, Q_n = ∑_k = 0^n-1 q_k∈ [Q_min, Q_max] , with Q_0 = 0 and 0 ≤ Q_min≤ Q_max < +∞. At each exercise date t_k, the achievable cumulative consumption lies within the following interval, 𝒯_k := [Q^down(t_k) , Q^up(t_k) ], where {[ Q^down(t_0) = 0,; Q^down(t_k) = max(0, Q_min - (n-k) · q_max), k ∈{1,…,n-1},; Q^down(t_n) = Q_min, ]. {[ Q^up(t_0) = 0,; Q^up(t_k) = min(k · q_max, Q_max) , k ∈{1,…,n-1},; Q^up(t_n) = Q_max. ]. Note that in this paper we only consider firm constraints which means that the holder of the contract cannot violate the constraints. However there exists in the literature alternative settings where the holder can violate the global constraints (not the local ones) but has to pay, at the maturity, a penalty which is proportional to the default (see <cit.>). The pricing of swing contract is closely related to the resolution of a backward equation given by the Backward Dynamic Programming Principle. §.§ Backward Dynamic Programming Principle (BDPP) Let (Ω, ℱ, {ℱ_t }, ℙ) be a filtered probability space. We assume that there exists a d-dimensional (discrete) Markov process (X_t_k)_0 ≤ k ≤ n and a measurable function g_k : ℝ^d →ℝ such that the spot price (S_t_k)_0 ≤ k ≤ n is given by S_t_k = g_k(X_t_k). Throughout this paper, the function g_k will be assumed to have at most linear growth. The decision process (q_k)_0 ≤ k ≤ n-1 is defined on the same probability space and is supposed to be ℱ_t_k^X- adapted, where ℱ_t_k^X is the natural (completed) filtration of (X_t_k)_0 ≤ k ≤ n. In the swing context, at each time t_k, by purchasing a volume q_k, the holder of the contract makes an algebraic profit ψ(q_k, X_t_k) := q_k·(g_k(X_t_k) - K). Then for every non-negative ℱ_t_k-1^X- measurable random variable Q_k (representing the cumulative purchased volume up to t_k-1), the price of the swing option at time t_k is V_k(X_t_k, Q_k) = _(q_ℓ)_k ≤ℓ≤ n-1∈𝒜_k, Q_k^Q_min, Q_max𝔼(∑_ℓ=k^n-1 e^-r_ℓ(t_ℓ - t_k)ψ(q_ℓ, X_t_ℓ) | X_t_k), where the set 𝒜_k, Q^Q_min, Q_max of admissible decision processes is defined by 𝒜_k, Q^Q_min, Q_max = {(q_ℓ)_k ≤ℓ≤ n-1, q_t_ℓ : (Ω, ℱ_t_ℓ^X, ℙ) ↦ [q_min, q_max], ∑_ℓ = k^n-1 q_ℓ∈[(Q_min-Q)_+, Q_max-Q] } and the expectation is taken under the risk-neutral probability and r_ℓ are interest rates over the period [t_0, t_n-1] that we will assume to be zero for the sake of simplicity. Problem (<ref>) appears to be a constrained stochastic control problem. It can be shown (see <cit.>) that for all k=0,…,n-1 and for all Q_k ∈𝒯_k, the swing contract price is given by the following backward equation, also known as the dynamic programming equation: {[ V_k(x, Q_k) = q ∈ Adm(t_k, Q_k)supψ(q, x) + 𝔼(V_k+1( X_t_k + 1, Q_k + q) | X_t_k = x ),; V_n-1(x, Q_n-1) = q ∈ Adm(t_n-1, Q_n-1)supψ(q, x), ]. where Adm(t_k, Q_k) is the set of admissible controls at time t_k, with Q_k denoting the cumulative consumption up to time t_k-1. 
Note that, if our objective is the value function, that is V_k(x, Q_k) for any x ∈ℝ defined in (<ref>), then the set Adm(t_k, Q_k) reduces to the following interval, ℐ_k+1(Q_k) := [max(q_min, Q^down(t_k+1) - Q_k), min(q_max, Q^up(t_k+1) - Q_k) ]. But if our objective is the random variable V_k(X_t_k, Q_k), then, for technical convenience, the preceding set Adm(t_k, Q_k) is the set of all ℱ_t_k^X-adapted processes lying within the interval ℐ_k+1(Q_k) defined in (<ref>). A straightforward consequence of the latter is that the optimal control at a given date must not be anticipatory. It is worth noting the bang-bang feature of swing contracts proved in <cit.>. That is, if volume constraints q_min, q_max, Q_min, Q_max are whole numbers (this corresponds to the actual setting of traded swing contracts) and Q_max - Q_min is a multiple of q_max - q_min, then the supremum in the BDPP (<ref>) is attained in one of the boundaries of the interval ℐ_k+1(Q_k) defined in (<ref>). In this discrete setting, at each exercise date t_k, the set of achievable cumulative consumptions 𝒯_k defined in (<ref>) reads, 𝒯_k = ℕ∩[Q^down(t_k), Q^up(t_k)], where Q^down(t_k) and Q^up(t_k) are defined in (<ref>). In this discrete setting, the BDPP (<ref>) remains the same. The main difference lies in the fact that, in the discrete setting the supremum involved in the BDPP is in fact a maximum over two possible values enabled by the bang-bang feature. From a practical standpoint, this feature allows to drastically reduce the computation time. Note that this paper aims to study some regression-based methods designed to approximate the conditional expectation involved in the BDPP (<ref>). We study two methods which are based on least squares regression and neural network approximation. In the least squares regression, we will go beyond the discrete setting and show that convergence results can be established in general. To achieve this, we need a crucial result which states that the swing value function defined in equation (<ref>) is continuous with respect to cumulative consumption. The latter may be established by relying on Berge's maximum theorem (see Proposition <ref> in Appendix <ref>). We may justify the use of this theorem through the following proposition, which characterizes the set of admissible volume as a correspondence (we refer the reader to Appendix <ref> for details on correspondences) mapping attainable cumulative consumption to an admissible control. Denote by 𝒫([q_min, q_max]) the power set of [q_min, q_max]. Then for all k =0, ...,n-1 the correspondence Γ_k (𝒯_k, |·|) →(𝒫([q_min, q_max]), d_H ) Q ↦ Adm(t_k, Q) is continuous and compact-valued. Let k = 0,...,n-1. We need to prove the correspondence Γ_k is both lower and upper hemicontinuous. The needed materials about correspondences is given in Appendix <ref>. We rely on the sequential characterization of hemicontinuity in Appendix <ref>. Let us start with the upper hemicontinuity. Since the set [q_min, q_max] is compact, then the converse of Proposition <ref> in Appendix <ref> holds true. Let Q ∈𝒯_k and consider a sequence (Q_n)_n ∈ℕ∈𝒯_k^ℕ which converges to Q. Let (y_n)_n ∈ℕ be a real-valued sequence such that for all n ∈ℕ, y_n lies in the correspondence Γ_k(Q_n). Then using the definition of the set of admissible control we know that q_min≤ y_n ≤ q_max yielding (y_n)_n is a real and bounded sequence. Thanks to Bolzano-Weierstrass theorem, there exists a subsequence (y_ϕ(n))_n ∈ℕ which is convergent. 
Let y = lim_n → +∞ y_ϕ(n), then for all n ∈ℕ, y_ϕ(n)∈ Adm(t_k, Q_ϕ(n)) ⟺max(q_min, Q^down(t_k+1) - Q_ϕ(n)) ≤ y_ϕ(n)≤min(q_max, Q^up(t_k+1) - Q_ϕ(n)). Letting n → + ∞ in the preceding inequalities yields y ∈Γ_k(Q). Which shows that Γ_k is upper hemicontinuous at an arbitrary Q. Thus the correspondence Γ_k is upper hemicontinuous. For the lower hemicontinuity part, let Q ∈𝒯_k, (Q_n)_n ∈ℕ∈𝒯_k^ℕ be a sequence which converges to Q and y ∈Γ_k(Q). Note that if y = max(q_min, Q^down(t_k+1) - Q) (or y = min(q_max, Q^up(t_k+1) - Q)) then it suffices to consider y_n = max(q_min, Q^down(t_k+1) - Q_n) (or y_n = min(q_max, Q^up(t_k+1) - Q_n)) so that y_n ∈Γ_k(Q_n) for all n ∈ℕ and lim_n → +∞ y_n = y. It remains the case y ∈Γ_k(Q) (where A denotes the interior of the set A). Thanks to Peak point Lemma [see Theorem 3.4.7 in <https://www.geneseo.edu/ aguilar/public/assets/courses/324/real-analysis-cesar-aguilar.pdf> or in <https://proofwiki.org/wiki/Peak_Point_Lemma>] one may extract a monotonous subsequence (Q_ϕ(n))_n. Two cases may be distinguished. * (Q_ϕ(n))_n is a non-decreasing sequence. In this case, for all n ∈ℕ, Q_ϕ(n)≤ Q. Since y ∈Γ_k(Q) and Q ↦min(q_max, Q^up(t_k+1) - Q) is a non-increasing function, it follows y < min(q_max, Q^up(t_k+1) - Q) ≤min(q_max, Q^up(t_k+1) - Q_ϕ(n)) for all n ∈ℕ. Moreover since y > lim_n → +∞max(q_min, Q^down(t_k+1) - Q_ϕ(n)) ↓max(q_min, Q^down(t_k+1) - Q), one may deduce that there exists n_0 ∈ℕ such that for all n ≥ n_0, y ≥max(q_min, Q^down(t_k+1) - Q_ϕ(n)). Therefore it suffices to set y_n = y for all n ≥ n_0 so that (y_n)_n ≥ n_0 is a sequence such that lim_n → +∞ y_n = y and y_n ∈Γ_k(Q_ϕ(n)) for all n ≥ n_0. * (Q_ϕ(n))_n is a non-increasing sequence. Here for all n ∈ℕ, we have Q_ϕ(n)≥ Q so that y ≥max(q_min, Q^down(t_k+1)-Q_ϕ(n)). Following the proof in the preceding case, one may deduce that there exists n_0 ∈ℕ such that for all n ≥ n_0, y ≤min(q_max, Q^up(t_k+1) - Q_ϕ(n)). Thus it suffices to set a sequence (y_n)_n ≥ n_0 identically equal to y. This shows that the correspondence Γ_k is lower hemicontinuous at an arbitrary Q. Thus Γ_k is both lower and upper hemicontinous; hence continuous. Moreover, since for all Q ∈𝒯_k, Γ_k(Q) is a closed and bounded interval in ℝ, then it is compact. This completes the proof. In the following proposition, we show the main result of this section concerning the continuity of the value function defined in (<ref>) with respect to the cumulative consumption. Let us define the correspondence C^*_k by, C^*_k : Q ∈𝒯_k ↦_q ∈ Adm(t_k, Q)ψ(q, x) + 𝔼(V_k+1(X_t_k + 1, Q + q) | X_t_k = x ). Note that the correspondence C^*_k is the set of solutions of the BDPP (<ref>). Then we have the following proposition. If for all k = 1,...,n-1 X_t_k∈𝕃_ℝ^d^1(ℙ), then for all k=0,...,n-1 and all x ∈ℝ^d, * The swing value function Q ∈𝒯_k ↦ V_k(x, Q) is continuous. * The correspondence C^*_k (defined in (<ref>)) is non-empty, compact-valued and upper hemicontinuous. Let x ∈ℝ^d. For technical convenience, we introduce for all 0 ≤ k ≤ n-1 an extended value function 𝒱_k(x, ·) defined on the whole real line 𝒱_k(x, Q) :={[ V_k(x, Q) if Q ∈𝒯_k = [Q^down(t_k), Q^up(t_k) ],; V_k(x, Q^down(t_k)) if Q < Q^down(t_k),; V_k(x, Q^up(t_k)) if Q > Q^up(t_k). ]. Note that V_k(x, ·) is the restriction of 𝒱_k(x, ·) on 𝒯_k. Propagating continuity over the dynamic programming equation is challenging due to the presence of the variable of interest Q in both the objective function and the domain in which the supremum is taken. 
To circumvent this issue, we rely on Berge's maximum theorem. More precisely, we use a backward induction on k along with Berge's maximum theorem to propagate continuity through the BDPP. For any Q ∈𝒯_n-1, we have 𝒱_n-1(x, Q) = q ∈ Adm(t_n-1, Q)supψ(q, x) and ψ(·, x) is continuous since it is linear in q (see (<ref>)). Thus applying Lemma <ref> yields the continuity of 𝒱_n-1(x, ·) on 𝒯_n-1. Moreover, as 𝒱_n-1(x, ·) is constant outside 𝒯_n-1 then it is continuous on (- ∞, Q^down(t_n-1)) and (Q^up(t_n-1), +∞). The continuity at Q^down(t_n-1) and Q^up(t_n-1) is straightforward given the construction of 𝒱_n-1. Thus 𝒱_n-1(x, ·) is continuous on ℝ. Besides, for all Q ∈ℝ |𝒱_n-1(X_t_n-1, Q)| ≤Q ∈𝒯_n-1sup|V_n-1(X_t_n-1, Q)| ≤ q_max·(|S_t_n-1| + K ) ∈𝕃_ℝ^1(ℙ). We now make the following assumption as an induction assumption: 𝒱_k+1(x, ·) is continuous on ℝ and there exists a real integrable random variable G_k+1 (independent of Q) such that, almost surely, |𝒱_k+1(X_t_k+1, Q) | ≤ G_k+1. This implies that (q, Q): [q_min, q_max] ×ℝ↦ψ(q, x) + 𝔼(𝒱_k+1(X_t_k+1, Q+q) | X_t_k = x ) is continuous owing to the theorem of continuity under integral sign. Thus owing to Proposition <ref> one may apply Berge's maximum theorem and we get that 𝒱_k(x, ·) is continuous on ℝ. In particular V_k(x, ·) is continuous on 𝒯_k and the correspondence C_k^* is non-empty, compact-valued and upper hemicontinuous. This completes the proof. As a result of the preceding proposition, one may substitute the sup in equation (<ref>) with a max. It is worth noting that this provides another proof for the existence of optimal consumption in addition to the one presented in <cit.>. Furthermore, our proof, compared to that in <cit.>, does not suppose integer volumes. Having addressed the general problem in equation (<ref>), we can now focus on solving it which requires to compute the continuation value. § APPROXIMATION OF CONTINUATION VALUE This section is focused on resolving the dynamic programming equation (<ref>). The primary challenge in solving this backward equation is to compute the continuation value, which involves a conditional expectation. A straightforward approach may be to compute this conditional expectation using nested simulations, but this can be time-consuming. Instead, the continuation value may be approximated using either least squares regression (as in <cit.>) or neural networks. Notice that, it follows from the Markov assumption and the definition of conditional expectation that there exists a measurable function Φ_k+1^Q such that 𝔼(V_k+1(X_t_k + 1, Q) | X_t_k) = Φ_k+1^Q(X_t_k), where Φ_k+1^Q solves the following minimization problem, Φ∈ℒ^2inf||𝔼(V_k+1(X_t_k + 1, Q) | X_t_k) - Φ(X_t_k) ||_2, where ℒ^2 denotes the set of all measurable functions that are squared-integrable. Due to the vastness of ℒ^2, the optimization problem (<ref>) is quite challenging, if not impossible, to solve in practice. It is therefore common to introduce a parameterized form Φ_k+1(· ; θ) as a solution to problem (<ref>). That is, we need to find the appropriate value of θ in a certain parameter space Θ such that it solves the following optimization problem: θ∈Θinf||𝔼(V_k+1(X_t_k + 1, Q) | X_t_k) - Φ_k+1(X_t_k; θ) ||_2. Solving the latter problem requires to compute the continuation value whereas it is the target amount. 
But since the conditional expectation is an orthogonal projection, it follows from Pythagoras' theorem, ||V_k+1(X_t_k + 1, Q) - Φ_k+1(X_t_k; θ) ||_2^2 = ||V_k+1(X_t_k + 1, Q) - 𝔼(V_k+1( X_t_k + 1, Q) | X_t_k)||_2^2 + ||𝔼(V_k+1( X_t_k + 1, Q) | X_t_k) - Φ_k+1(X_t_k; θ) ||_2^2. Thus any θ that solves the preceding problem (<ref>) also solves the following optimization problem θ∈Θinf||V_k+1(X_t_k + 1, Q) - Φ_k+1(X_t_k; θ) ||_2. Thus in this paper and when needed, we will indistinguishably consider the two optimization problems. In the next section we discuss the way the function Φ_k+1(· ; θ) is parametrize depending on whether we use least squares regression or neural networks. Moreover, instead of superscript as in (<ref>) we adopt the following notation: Φ_k+1^Q(·) := Φ(·; θ_k+1(Q)) where θ_k+1(Q) ∈Θ solves the optimization problem (<ref>) or equivalently (<ref>). We also dropped the under-script as the function Φ will be the same for each exercise date, only the parameters θ_k+1(Q) may differ. §.§ Least squares approximation In the least squares regression approach, the continuation value is approximated as an orthogonal projection over a subspace spanned by a finite number of squared-integrable functions (see <cit.>). More precisely, given m ∈ℕ^* functions e^m(·) = (e_1(·),...,e_m(·) ), we replace the continuation value involved in (<ref>) by an orthogonal projection over the subspace spanned by e^m(X_t_k). This leads to the approximation V_k^m of the actual value function V_k which is defined backwardly as follows, {[ V^m_k(X_t_k, Q) = _q ∈ Adm(t_k, Q)ψ(q, X_t_k) + Φ_m(X_t_k; θ_k+1, m(Q+q) ),; V^m_n-1(X_t_n-1, Q) = V_n-1(X_t_n-1, Q) = _q ∈ Adm(t_n-1, Q)ψ(q, X_t_n-1), ]. where Φ_m is defined as follows, Φ_m(X_t_k; θ_k+1, m(Q) ) = ⟨θ_k+1, m(Q), e^m(X_t_k) ⟩ with θ_k+1, m(Q) ∈Θ_m = ℝ^m being a vector whose components are coordinates of the orthogonal projection and lies within the following set 𝒮_k^m(Q) := _θ∈Θ_m||V_k+1^m(X_t_k + 1, Q) - ⟨θ, e^m(X_t_k) ⟩||_2. Solving the optimization problem (<ref>) leads to a classic linear regression. In this paper, we will assume that e^m(·) forms linearly independent family so that the set 𝒮_k^m(Q) reduces to a singleton parameter θ_k+1, m(Q) is uniquely defined as: θ_k+1, m(Q) := (A_m^k )^-1·𝔼(V_k+1^m(X_t_k + 1, Q)e^m(X_t_k) ). Note that without the latter assumption, 𝒮_k^m(Q) may not be a singleton. However, in this case, instead of the inverse matrix (A_m^k )^-1, one may consider the Moore–Penrose inverse or pseudo-inverse matrix (A_m^k)^† yielding a minimal norm. In equation (<ref>) we used the following notation 𝔼(V_k+1^m(X_t_k + 1, Q)e^m(X_t_k) ) := [ 𝔼(V_k+1^m(X_t_k + 1, Q)e_1(X_t_k) ); 𝔼(V_k+1^m(X_t_k + 1, Q)e_2(X_t_k) ); ⋮; 𝔼(V_k+1^m(X_t_k + 1, Q)e_m(X_t_k) ) ]∈ℝ^m, where A_m^k := ((A_m^k)_i, j)_1 ≤ i, j ≤ m is a (Gram) matrix with entries ⟨ e_i(X_t_k), e_j(X_t_k) ⟩_𝕃^2(ℙ) = 𝔼(e_i(X_t_k) e_j(X_t_k) ) 1 ≤ i, j ≤ m. In practice, to compute vector θ_k+1, m(Q) we need to simulate N independent paths (X_t_0^[p], ...,X_t_n-1^[p])_1 ≤ p ≤ N and use Monte Carlo to evaluate the expectations involved (see equations (<ref>) and (<ref>)). This leads to a second approximation which is a Monte Carlo approximation. For this second approximation, we define the value function V_k^m, N from equation (<ref>) where we replace the expectations by their empirical counterparts {[ V^m, N_k(X_t_k, Q) = _q ∈ Adm(t_k, Q)ψ(q, X_t_k) + Φ_m(X_t_k; θ_k+1, m, N(Q+q) ),; V^m, N_n-1(X_t_n-1, Q) = V_n-1(X_t_n-1, Q) ]. 
with θ_k, m, N(Q) = (A_m, N^k )^-11/N∑_p=1^N V^m, N_k+1(X_t_k+1^[p], Q)e^m(X_t_k^[p] ), using the notation 1/N∑_p=1^N V^m, N_k+1(X_t_k+1^[p], Q)e^m(X_t_k^[p] ) := [ 1/N∑_p=1^N V^m, N_k+1(X_t_k+1^[p], Q)e_1(X_t_k^[p] ); 1/N∑_p=1^N V^m, N_k+1(X_t_k+1^[p], Q)e_2(X_t_k^[p] ); ⋮; 1/N∑_p=1^N V^m, N_k+1(X_t_k+1^[p], Q)e_m(X_t_k^[p] ) ]∈ℝ^m and A_m, N^k := ((A_m, N^k)_i, j)_1 ≤ i, j ≤ m is a m × m (Gram) matrix whose components are 1/N∑_p=1^N e_i(X_t_k^[p]) e_j(X_t_k^[p]) 1 ≤ i, j ≤ m. This paper investigates a modified version of the least squares method proposed in <cit.>. In their approach, the value function at each time step is the result of two steps. First, they compute the optimal control which is an admissible control that maximizes the value function (<ref>) along with Monte Carlo simulations. Then, given the optimal control, they compute the value function by summing up all cash-flows from the considered exercise date until the maturity. Recall that we proceed backwardly so that, in practice, it is assumed that at a given exercise date t_k, we already have determined optimal control from t_k+1 to t_n-1; so that optimal cash flows at theses dates may be computed. However, our method directly replaces the continuation value with a linear combination of functions, and the value function is the maximum, over admissible volumes, of the current cash flow plus the latter combination of functions. The main difference between both approaches lies in the following. The value function computed in <cit.> corresponds to actual realized cash flows whereas the value function in our case does not. However, as recommended in their original paper <cit.>, after having estimated optimal control backwardly, a forward valuation has to be done in order to eliminate biases. By doing so, our method and that proposed in <cit.> correspond to actual realized cash flows. Thus both approximations meet. Our convergence analysis of the least squares approximation will require some technical assumptions we state below. §.§.§ Main assumptions ℋ_1^LS: For all k=0,…,n-1, the sequence (e_i( X_t_k))_i ≥ 1 is total in 𝕃^2(σ(X_t_k) ). ℋ_2^LS: For all k=0,…,n-1, almost surely, e_0(X_t_k),…,e_m(X_t_k) are linearly independent. This assumption ensures the Gram matrix A_m^k is non-singular. Moreover, this assumption allows to guarantee the matrix A_m, N^k is non-singular for N large enough. Indeed, by the strong law of large numbers, almost surely A_m, N^k → A_m^k ∈𝔾L_m(ℝ) (as N → +∞) with the latter set being an open set. ℋ_3, r: For all k = 0, …, n-1, the random vector X_t_k has finite moments at order r. ℋ_3, ∞ will then denote the existence of moments at any order. ℋ_4, r^LS: For all k = 0, …, n-1 and for all j = 1, …, m the random variable e_j (X_t_k) has finite moments at order r. Likewise, ℋ_4, ∞^LS will then denote the existence of moments at any order. If assumption ℋ_3, ∞ holds, one may replace assumption ℋ_4, r^LS by an assumption of linear or polynomial growth of functions e_j(·) with respect to the Euclidean norm. Before proceeding, note the following noteworthy comment that will be relevant in the subsequent discussion. Specifically, we would like to remind the reader that the continuity property of the true value function V_k with respect to cumulative consumption, as stated in Proposition <ref>, also applies to the approximated value function V_k^m involved in the least squares regression. 
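In practice, this Monte Carlo projection step amounts to an ordinary linear least squares problem on the simulated paths. A minimal NumPy sketch (names are ours) of the estimation of θ_k, m, N(Q) and of the resulting approximate continuation value reads as follows.

import numpy as np

def projection_coefficients(basis_paths, next_values):
    # basis_paths : (N, m) array with rows e^m(X_{t_k}^{[p]})
    # next_values : (N,)  array with entries V^{m,N}_{k+1}(X_{t_{k+1}}^{[p]}, Q)
    # Least squares solution of basis_paths @ theta ~ next_values, i.e. the
    # empirical normal equations A_{m,N}^k theta = (1/N) sum_p V e^m(X^{[p]})
    theta, *_ = np.linalg.lstsq(basis_paths, next_values, rcond=None)
    return theta

def continuation_value(basis_at_x, theta):
    # Approximate continuation value <theta, e^m(x)> used inside the backward recursion
    return basis_at_x @ theta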
If we assume that ℋ_3, 2r and ℋ_4, 2r^LS hold true for some r ≥ 1, then one may show, by a straightforward backward induction, that the functions Q ∈𝒯_k+1↦𝔼(|V_k+1^m(X_t_k+1, Q)e^m(X_t_k)|^r ) or V_k+1^m(X_t_k+1, Q) are continuous. If only assumption ℋ_3, r holds true then V_k+1(X_t_k+1, ·) is continuous and there exists a random variable G_k+1∈𝕃_ℝ^r(ℙ) (independent of Q) such that V_k+1(X_t_k+1, · ) ≤ G_k+1. Instead of using classic functions as regression functions and projecting the swing value function onto the subspace spanned by these regression functions, an alternative approach consists in using neural networks. Motivated by the function approximation capacity of deep neural networks, as quantified by the Universal Approximation Theorem (UAT), our goal is to explore whether a neural network can replace conventional regression functions. In the following section, we introduce a methodology based on neural networks that aims to approximate the continuation value. §.§ Neural network approximation The goal of a neural network is to approximate complex a function Φ : ℝ^d →ℝ^ℓ by a parametric function Φ(· ; θ) where parameters θ (or weights of the neural network) have to be optimized in a way that the distance between the two functions Φ and Φ(·; θ) is as small as possible. A neural network can approximate a wide class of complex functions (see <cit.>). A neural network is made of nodes connected to one another where a column of nodes forms a layer (when there are more than one layer in the neural network architecture we speak of a deep neural network). The outermost (see diagram <ref>) are the input and output layers and all those in between are called the hidden layers. The connection between the input and output layers through hidden layers is made by means of linear functions and activation functions (non-linear functions). From a mathematical point of view, a neural network can be written as x ∈ℝ^d ↦Φ(x; θ) := ψ∘ a_I^θ_I∘ϕ_q_I-1∘ a_I-1^θ_I-1∘…∘ϕ_q_1∘ a_1^θ_1(x) ∈ℝ^ℓ, where I is the number of hidden layers representing the depth of the neural network. Each layer has weights 𝒲 and bias b. For all 2 ≤ i ≤ I, x ∈ℝ^q_i-1↦ a_i^θ_i(x) = 𝒲_i · x + b_i ∈ℝ^q_iwithθ_i = (𝒲_i, b_i) ∈ℝ^q_i-1× q_i×ℝ^q_i, and x ∈ℝ^d↦ a_1^θ_1(x) = 𝒲_1 · x + b_1 ∈ℝ^q_1withθ_1 = (𝒲_1, b_1) ∈ℝ^d × q_1×ℝ^q_1. q_1, …, q_I are positive integers denoting the number of nodes per hidden layer and representing the width of the neural network. (ϕ_q_i)_1 ≤ i ≤ I-1 are non-linear functions called activation functions and are applied component wise. ψ is the activation function for the output layer. For the sake of simpler notation, we embed all the parameters of the different layers in a unique high dimensional parameter θ = (θ_1, …, θ_I ) ∈ℝ^N_q with N_q = ∑_i = 1^I q_i-1· (1 + q_i) (with q_0 = d). In order to study neural network approximation, we take the same notations as in <cit.>. We denote by 𝒩𝒩_∞ the set of all neural networks of form (<ref>). Then we consider, for some integer m ≥ 1, 𝒩𝒩_m the set of neural networks of form (<ref>) with at most m nodes per hidden layer and bounded parameters. More precisely, we consider Θ_m = {ℝ^d×ℝ^m×( ℝ^m×ℝ^m)^I-2×ℝ^m×ℝ : |θ| ≤γ_m } which denotes the set of all parameters (bounded by γ_m) of a neural network with at most m nodes per hidden layer. (γ_m)_m ≥ 2 is an increasing and non-bounded (real) sequence. Thus 𝒩𝒩_m is defined as the set of all neural networks which parameters lie in Θ_m, 𝒩𝒩_m = {Φ(·; θ) : ℝ^d →ℝ; θ∈Θ_m }. Note that 𝒩𝒩_∞ = ⋃_m ∈ℕ𝒩𝒩_m. 
In this paper, we consider the approximation of the continuation value using neural network. This leads to an approximated value function V_k^m backwardly defined by {[ V_k^m(X_t_k, Q) = _q ∈ Adm(t_k, Q)ψ(q, X_t_k) + Φ_m(X_t_k; θ_k + 1, m(Q + q) ),; V_n-1^m(X_t_n-1, Q) = V_n-1(X_t_n-1, Q), ]. where Φ_m(·; θ) denotes a function lying within 𝒩𝒩_m with θ∈Θ_m. Thus θ_k + 1, m(Q) belongs to the following set 𝒮_k^m(Q) := _θ∈Θ_m||V_k+1^m(X_t_k+1, Q) - Φ_m(X_t_k; θ) ||_2. To analyze the convergence of the neural network approximation we will rely on their powerful approximation ability. The latter is stated by the Universal Approximation Theorem. Assume that the activation functions in (<ref>) are not constant and bounded. Let μ denote a probability measure on ℝ^d, then for any I ≥ 2, 𝒩𝒩_∞ is dense in 𝕃(ℝ^d, μ). As stated in <cit.>, Theorem <ref> can be seen as follows. For any (real) squared-integrable random variable Y defined on a measurable space, there exists a sequence (θ_m)_m ≥ 2∈∏_m = 2^∞Θ_m such that lim_p→∞||Y - Φ_m (X; θ) | |_2 for some ℝ^d-valued random vector X. Thus, if for all m ≥ 2, θ_m solves θ∈Θ_minf||Φ_m(X; θ) - Y ||_2, then the sequence (Φ_m(X; θ_m) )_m ≥ 2 converges to 𝔼(Y | X) in 𝕃^2(μ). The universal approximation capacity of neural networks had been widely studied in the literature <cit.>. Some quantitative error bounds have been proved when the function to approximate is sufficiently smooth. A brief overview is presented in the following remark. When the weighted average of the Fourier representation of the function to approximate is bounded, an error bound of the convergence in Remark <ref> of order 𝒪(m^-1/2) had been shown in <cit.>. It may appears that the dimension of the problem does not degrade the convergence rate but as discussed by the authors, this may be hidden in the Fourier representation. In <cit.> it has been proved that, when the activation functions are infinitely continuously differentiables and the function to approximate is p-times continuously differentiable and Lipschitz, then the sup-norm of the approximation error on every compact set is bounded by a term of order 𝒪(m^-(p+1)/d). For a more detailed overview on quantitative error bounds, we refer the reader to <cit.>. Note that, as in the least squares method, in practice, we simulate N independent paths (X_t_0^[p], ...,X_t_n-1^[p])_1 ≤ p ≤ N and use Monte Carlo approximation to compute the swing value function. For that purpose, we backwardly define the value function V_k^m, N by, {[ V_k^m, N(X_t_k^[p], Q) = _q ∈ Adm(t_k, Q)ψ(q, X_t_k^[p]) + Φ_m(X_t_k^[p]; θ_k+1, m, N(Q + q) ),; V_n-1^m, N(X_t_n-1^[p], Q) = V_n-1(X_t_n-1^[p], Q), ]. where θ_k+1, m, N(Q) lies within the following set, 𝒮_k^m, N(Q) := _θ∈Θ_m1/N∑_p = 1^N|V_k+1^m, N(X_t_k+1^[p], Q) - Φ_m(X_t_k^[p]; θ) |^2. Note that sets 𝒮_k^m(Q) or 𝒮_k^m, N(Q) (respectively defined in equations (<ref>) and (<ref>)) generally does not reduces to a singleton. Thus hereafter, the notation θ_k+1, m(Q) or θ_k+1, m, N(Q) will denote an element of the corresponding set 𝒮_k^m(Q) or 𝒮_k^m, N(Q). § CONVERGENCE ANALYSIS We conduct a convergence analysis by following a similar approach as in <cit.>. Our initial focus is to establish a convergence result as the architecture used to approximate the continuation value increases. By architecture, we mean either regression functions (in the context of least squares approximation) or neural networks units per layer. 
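For illustration, one backward step of the neural-network variant can be implemented by fitting a small feed-forward network to the simulated pairs (X_{t_k}^{[p]}, V^{m,N}_{k+1}(X_{t_{k+1}}^{[p]}, Q)). The PyTorch sketch below is only indicative: it trains one network per exercise date and cumulative-consumption level, uses bounded activations as in the universal approximation theorem stated above, and does not enforce the parameter bound γ_m.

import torch
from torch import nn

def fit_continuation_net(x_k, v_next, width=32, epochs=500, lr=1e-2):
    # x_k    : (N, d) float tensor of simulated states X_{t_k}^{[p]}
    # v_next : (N, 1) float tensor of next-date values V^{m,N}_{k+1}(X_{t_{k+1}}^{[p]}, Q)
    net = nn.Sequential(
        nn.Linear(x_k.shape[1], width), nn.Tanh(),
        nn.Linear(width, width), nn.Tanh(),
        nn.Linear(width, 1),
    )
    optim = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(epochs):
        optim.zero_grad()
        loss = nn.functional.mse_loss(net(x_k), v_next)
        loss.backward()
        optim.step()
    return net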
Then, we fix the value of m (representing the architecture's size) and examine the associated Monte Carlo approximation. Let us start with the first step. §.§ Convergence with respect to the number of approximation functions We focus on the approximations (<ref>) and (<ref>) of the BDPP (<ref>). In this section, we do not restrict ourselves to the bang-bang setting. That is, for both approximation methods, we consider arbitrary volume constraints (not limited to integers). §.§.§ Least squares approximation We start by analyzing the first approximation in the least squares setting (<ref>). We show the convergence of the approximated value function V_k^m as m tends to infinity. To state this property we need the following result. Let m be a positive integer. Assume ℋ_2^LS and ℋ_3, 2 hold true. Then, for all k=0,…,n-2, the function Q ↦||Φ_m(X_t_k; θ̃_k+1, m(Q)) - 𝔼(V_k+1(X_t_k+1, Q)| X_t_k) ||_2 is continuous on 𝒯_k+1, where Φ_m is defined in (<ref>) and θ̃_k+1, m(Q) solves the theoretical optimization problem θ∈Θ_minf||V_k+1(X_t_k + 1, Q) - Φ_m(X_t_k; θ) ||_2. Keeping in mind relation (<ref>), it suffices to prove that the functions, Q ↦||V_k+1(X_t_k+1, Q) - 𝔼(V_k+1(X_t_k+1, Q)| X_t_k) ||_2^2 and Q ↦||V_k+1(X_t_k+1, Q) - Φ_m(X_t_k; θ̃_k+1, m(Q)) ||_2^2 are continuous. Let us start with the first function. Let Q ∈𝒯_k+1 and consider a sequence (Q_n)_n which converges to Q. We know (as pointed out in Remark <ref>) that assumption ℋ_3, 2 entails that V_k+1(X_t_k+1, ·) is continuous and there exists G_k+1∈𝕃_ℝ^2(ℙ) (independent of Q) such that V_k+1(X_t_k+1, ·) ≤ G_k+1. Thus the Lebesgue dominated convergence theorem implies that, lim_n → +∞||V_k+1(X_t_k+1, Q_n) - 𝔼(V_k+1(X_t_k+1, Q_n)| X_t_k) ||_2^2 = ||V_k+1(X_t_k+1, Q) - 𝔼(V_k+1(X_t_k+1, Q)| X_t_k) ||_2^2 yielding the continuity of the function defined in (<ref>). We now prove the continuity of the second function defined in (<ref>). Using assumption ℋ_2^LS, it follows from Proposition <ref> that, ||Φ_m(X_t_k; θ̃_k+1, m(Q)) - V_k+1(X_t_k+1, Q) ||_2^2 = G (V_k+1(X_t_k+1, Q), e_1(X_t_k), …, e_m(X_t_k) )/G( e_1(X_t_k), …, e_m(X_t_k) ) where G(x_1, …, x_n) denotes the Gram determinant associated to the canonical 𝕃^2(ℙ) inner product. Since assumption ℋ_3, 2 entails the continuity of V_k+1(X_t_k+1, ·), then owing to the continuity of the determinant, one may conclude that Q ∈𝒯_k+1↦||Φ_m(X_t_k; θ̃_k+1, m(Q)) -V_k+1(X_t_k+1, Q) ||_2^2 is continuous as a composition of continuous functions. This completes the proof. The preceding proposition allows us to show our first convergence result stated in the following proposition. Under assumptions ℋ_1^LS, ℋ_2^LS and ℋ_3, 2, we have for all 0 ≤ k ≤ n-1, lim_m → +∞Q ∈𝒯_ksup||V^m_k(X_t_k, Q) - V_k( X_t_k, Q)||_2 = 0. We proceed by a backward induction on k. We have, almost surely, V^m_n-1(X_t_n-1, Q) = V_n-1(X_t_n-1, Q) for any Q ∈𝒯_n-1 and therefore the proposition holds true for k = n-1. Let us suppose it holds for k+1. For all Q ∈𝒯_k using the inequality |i ∈ Isup a_i - i ∈ Isup b_i| ≤i ∈ Isup |a_i-b_i|, we get, |V^m_k(X_t_k, Q) - V_k(X_t_k, Q)|^2 ≤_q ∈ Adm(t_k, Q)|Φ_m(X_t_k; θ_k+1, m(Q+q)) - 𝔼(V_k+1(X_t_k+1, Q + q) | X_t_k ) |^2. Taking the expectation in the previous inequality yields, ||V^m_k(X_t_k, Q) - V_k(X_t_k, Q) ||_2^2 ≤𝔼(_q ∈ Adm(t_k, Q)|Φ_m(X_t_k; θ_k+1, m(Q+q)) - 𝔼(V_k+1(X_t_k+1, Q + q) | X_t_k ) |^2). To interchange the essential supremum with the expectation, we rely on the bifurcation property. 
For all q ∈ Adm(t_k, Q), consider A_k^m(Q, q) := |Φ_m(X_t_k; θ_k+1, m(Q+q)) - 𝔼(V_k+1( X_t_k+1, Q + q) | X_t_k )|^2. Then for all q_1, q_2 ∈ Adm(t_k, Q) define the following random variable q_A^* = q_1 ·1_{A_k^m(Q, q_1) ≥ A_k^m(Q, q_2)} + q_2 ·1_{A_k^m(Q, q_1) < A_k^m(Q, q_2)}. It follows from the definition of Φ_m in (<ref>) and that of the conditional expectation that A_k^m(Q, q) is σ(X_t_k)-measurable for all q ∈ Adm(t_k, Q). Thus using (<ref>) yields q_A^*∈ Adm(t_k, Q) and A_k^m(Q, q_A^*) = max(A_k^m(Q, q_1), A_k^m(Q, q_2) ). Therefore one may use the bifurcation property in (<ref>) and we get, ||V^m_k(X_t_k, Q) - V_k(X_t_k, Q) ||_2^2 ≤q ∈ Adm(t_k, Q)sup||Φ_m(X_t_k; θ_k+1, m(Q+q)) - 𝔼(V_k+1(X_t_k+1, Q + q) | X_t_k ) ||_2^2 ≤ 2 q ∈ Adm(t_k, Q)sup||Φ_m(X_t_k; θ_k+1, m(Q+q)) - Φ_m(X_t_k; θ̃_k+1, m(Q+q)) ||_2^2 +2q ∈ Adm(t_k, Q)sup||Φ_m(X_t_k; θ̃_k+1, m(Q+q)) - 𝔼(V_k+1( X_t_k+1, Q + q) | X_t_k ) ||_2^2 where in the last inequality, we used Minkowski inequality. θ̃_k+1, m(Q+q) solves the theoretical optimization problem (<ref>). Note that in the latter problem, we introduced the actual (not known) value function V_k+1 unlike in equation (<ref>). This is just a theoretical tool as the preceding optimization problem cannot be solved since we do not know the actual value function V_k+1. Thus taking the supremum in (<ref>) yields, Q ∈𝒯_ksup||V^m_k(X_t_k, Q) - V_k(X_t_k, Q) ||_2^2 ≤ 2 Q ∈𝒯_k+1sup||Φ_m(X_t_k; θ_k+1, m(Q)) - Φ_m(X_t_k; θ̃_k+1, m(Q)) ||_2^2 +2Q ∈𝒯_k+1sup||Φ_m(X_t_k; θ̃_k+1, m(Q)) - 𝔼(V_k+1(X_t_k+1, Q) | X_t_k) ||_2^2, where we used the fact that, for all Q ∈𝒯_k and all q ∈ Adm(t_k, Q) we have Q + q ∈𝒯_k+1. Besides, recall that Φ_m(X_t_k; θ̃_k+1, m(Q)) and Φ_m(X_t_k; θ_k+1, m(Q)) are orthogonal projections of V_k+1(X_t_k+1, Q) and V_k+1^m(X_t_k+1, Q) on the subspace spanned by e^m(X_t_k). Then knowing that the orthogonal projection is 1-Lipschitz, we have Q ∈𝒯_k+1sup||Φ_m(X_t_k; θ_k+1, m(Q)) - Φ_m(X_t_k; θ̃_k+1, m(Q))||_2^2 ≤Q ∈𝒯_k+1sup||V^m_k+1(X_t_k+1, Q) - V_k+1(X_t_k+1, Q)||_2^2. Thanks to the induction assumption, the right hand side of the last inequality converges to 0 as m → + ∞, so that, Q ∈𝒯_k+1sup||Φ_m(X_t_k; θ_k+1, m(Q)) - Φ_m(X_t_k; θ̃_k+1, m(Q))||_2^2 0. It remains to prove that lim_m → +∞Q ∈𝒯_k+1sup||Φ_m(X_t_k; θ̃_k+1, m(Q)) - 𝔼(V_k+1( X_t_k+1, Q) | X_t_k) ||_2^2 = 0. To achieve, this we rely on Dini's lemma whose assumptions hold true owing to the three following facts. §.§.§ Pointwise convergence It follows from assumption ℋ_1^LS that, for any Q ∈𝒯_k+1, lim_m → +∞||Φ_m(X_t_k; θ̃_k+1, m(Q)) - 𝔼(V_k+1( X_t_k+1, Q) | X_t_k) ||_2^2 = 0. §.§.§ Continuity The continuity of Q ↦||Φ_m(X_t_k; θ̃_k+1, m(Q)) - 𝔼(V_k+1( X_t_k+1, Q) | X_t_k) ||_2^2 is given by Proposition <ref> under assumptions ℋ_2^LS and ℋ_3, 2. §.§.§ Monotony Denote by F_m^k := ( e_1(X_t_k), …, e_m(X_t_k) ). Then it is straightforward that for any m ≥ 1, F_m^k ⊆ F_m+1^k. So that, ||Φ_m(X_t_k; θ̃_k+1, m(Q)) - 𝔼(V_k+1( X_t_k+1, Q) | X_t_k) ||_2^2 = Y ∈ F_m^kinf||𝔼(V_k+1( X_t_k+1, Q) | X_t_k) - Y||_2^2 ≥Y ∈ F_m+1^kinf||𝔼(V_k+1( X_t_k+1, Q) | X_t_k) - Y||_2^2 = ||Φ_m+1(X_t_k; θ̃_k+1, m+1(Q)) - 𝔼(V_k+1( X_t_k+1, Q) | X_t_k) ||_2^2. Thus the sequence, (||Φ_m(X_t_k; θ̃_k+1, m(Q)) - 𝔼(V_k+1( X_t_k+1, Q) | X_t_k) ||_2^2 )_m ≥ 1 is non-increasing. From the three preceding properties, one may apply Dini lemma yielding the desired result (<ref>). Finally, combining (<ref>) and (<ref>) in (<ref>) yields, lim_m → +∞Q ∈𝒯_ksup||V^m_k(X_t_k, Q) - V_k(X_t_k, Q) ||_2^2 = 0. This completes the proof. 
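The Gram-determinant representation of the projection error used in the proof of Proposition <ref> above (and recalled in the appendix), ||x - p(x)||^2 = G(x, x_1, …, x_m)/G(x_1, …, x_m), can be checked numerically on a small Euclidean example; the vectors below are arbitrary and serve only as an illustration.

```python
# Numerical check of the Gram-determinant identity
#   ||x - p(x)||^2 = G(x, x_1, ..., x_n) / G(x_1, ..., x_n),
# where p(x) is the orthogonal projection of x onto span(x_1, ..., x_n).
import numpy as np

rng = np.random.default_rng(1)
d, n = 6, 3
basis = rng.standard_normal((n, d))            # x_1, ..., x_n (rows)
x = rng.standard_normal(d)

def gram(*vecs):
    V = np.stack(vecs)
    return np.linalg.det(V @ V.T)

# Orthogonal projection of x onto the span of the basis via least squares.
coeffs, *_ = np.linalg.lstsq(basis.T, x, rcond=None)
residual = x - basis.T @ coeffs

lhs = np.dot(residual, residual)
rhs = gram(x, *basis) / gram(*basis)
print(lhs, rhs)                                # the two values coincide up to round-off
```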
§.§.§ Neural network approximation We now consider the approximation of the continuation value when using neural network. We prove a similar result as in Proposition <ref>, when the number of units per hidden layer increases. To achieve this, we need the following assumptions. ℋ_1^𝒩𝒩: For every m ≥ 2, there exists q ≥ 1 such that for every θ∈Θ_m, Φ_m(·; θ) has q-polynomial growth uniformly in θ. ℋ_2^𝒩𝒩: For any 0 ≤ k ≤ n-1, a.s. the random functions θ∈Θ_m ↦Φ_m(X_t_k; θ) are continuous. Owing to the Heine theorem, the compactness of Θ_m yields the uniform continuity. Assume ℋ_1^𝒩𝒩, ℋ_2^𝒩𝒩 and ℋ_3, 2q (with q involved in assumption ℋ_1^𝒩𝒩) hold true. Then, for all 0 ≤ k ≤ n-1, lim_m → +∞Q ∈𝒯_ksup||V^m_k(X_t_k, Q) - V_k( X_t_k, Q)||_2 = 0. We proceed by a backward induction on k. For k = n-1, we have V^m_n-1(X_t_n-1, Q) = V_n-1(X_t_n-1, Q) and therefore the proposition holds true. Let us suppose it holds for k+1. In the spirit of the beginning of the proof of Proposition <ref>, we have for all Q ∈𝒯_k using the inequality: |i ∈ Isup a_i - i ∈ Isup b_i| ≤i ∈ Isup |a_i-b_i| and triangle inequality, ||V^m_k(X_t_k, Q) - V_k(X_t_k, Q) ||_2^2 ≤𝔼(_q ∈ Adm(t_k, Q)|Φ_m(X_t_k; θ_k+1, m(Q+q)) - 𝔼(V_k+1( X_t_k+1, Q + q) | X_t_k) |^2 ). Then we aim to apply the bifurcation property. For all q ∈ Adm(t_k, Q), consider, A_k^m(Q, q) = |Φ_m(X_t_k; θ_k+1, m(Q+q)) - 𝔼(V_k+1( X_t_k+1, Q + q) | X_t_k)|^2. Then for all q_1, q_2 ∈ Adm(t_k, Q) define q_A^* = q_1 ·1_{A_k^m(Q, q_1) ≥ A_k^m(Q, q_2)} + q_2 ·1_{A_k^m(Q, q_1) < A_k^m(Q, q_2)}. Using the definition of the conditional expectation and since activation functions are continuous (assumption ℋ_2^𝒩𝒩), A_k^m(Q, q) is σ(X_t_k)-measurable for all q ∈ Adm(t_k, Q). Moreover, q_A^*∈ Adm(t_k, Q) and A_k^m(Q, q_A^*) = max(A_k^m(Q, q_1), A_k^m(Q, q_2) ). Thus using the bifurcation property and taking the supremum in (<ref>) yields, Q ∈𝒯_ksup||V^m_k(X_t_k, Q) - V_k(X_t_k, Q) ||_2^2 ≤Q ∈𝒯_k+1sup||Φ_m(X_t_k; θ_k+1, m(Q)) - 𝔼(V_k+1( X_t_k+1, Q) | X_t_k)||_2^2. Using Minkowski inequality and the inequality: (a+b)^2 ≤ 2(a^2+b^2) yields, Q ∈𝒯_ksup||V^m_k(X_t_k, Q) - V_k(X_t_k, Q) ||_2^2 ≤ 2 Q ∈𝒯_k+1sup||𝔼(V_k+1^m( X_t_k+1, Q) | X_t_k) - 𝔼(V_k+1( X_t_k+1, Q) | X_t_k)||_2^2 +2Q ∈𝒯_k+1sup||Φ_m(X_t_k; θ_k+1, m(Q)) - 𝔼(V_k+1^m( X_t_k+1, Q) | X_t_k) ||_2^2. By the induction assumption, the first term in the right hand side converges to 0 as m → + ∞. Let us consider the second term. Since θ_k+1, m(Q) solves (<ref>), we have Q ∈𝒯_k+1sup||Φ_m(X_t_k; θ_k+1, m(Q)) - 𝔼(V_k+1^m( X_t_k+1, Q) | X_t_k) ||_2^2 ≤Q ∈𝒯_k+1sup||Φ_m(X_t_k; θ̃_k+1, m(Q)) - 𝔼(V_k+1^m( X_t_k+1, Q) | X_t_k) ||_2^2, where θ̃_k+1, m(Q) solves the theoretical optimization problem, θ∈Θ_minf||V_k+1(X_t_k + 1, Q) - Φ_m(X_t_k; θ) ||_2 with Θ_m defined in (<ref>). Then it follows from Minskowki inequality that Q ∈𝒯_k+1sup||Φ_m(X_t_k; θ̃_k+1, m(Q)) - 𝔼(V_k+1^m( X_t_k+1, Q) | X_t_k) ||_2^2 ≤Q ∈𝒯_k+1sup||𝔼(V_k+1( X_t_k+1, Q) | X_t_k) - 𝔼(V_k+1^m( X_t_k+1, Q) | X_t_k) ||_2^2 +Q ∈𝒯_k+1sup||Φ_m(X_t_k; θ̃_k+1, m(Q)) - 𝔼(V_k+1( X_t_k+1, Q) | X_t_k) ||_2^2. Once again, by the induction assumption, the first term in the right hand side converges to 0 as m → +∞. Moreover, thanks to the universal approximation theorem, for all Q ∈𝒯_k+1 ||Φ_m(X_t_k; θ̃_k+1, m(Q)) - 𝔼(V_k+1( X_t_k+1, Q) | X_t_k) ||_2^2 0. Besides notice that, ||Φ_m(X_t_k; θ̃_k+1, m(Q)) - 𝔼(V_k+1( X_t_k+1, Q) | X_t_k) ||_2^2 = Φ∈𝒩𝒩_minf||Φ(X_t_k) - 𝔼(V_k+1(X_t_k+1, Q) | X_t_k) ||_2^2 where 𝒩𝒩_m is defined in (<ref>). 
But since the sequence (Θ_m )_m is non-decreasing (in the sense that Θ_m ⊆Θ_m+1), the sequence (𝒩𝒩_m)_m is non-decreasing as well. So that, by the previous equality (<ref>), ( ||Φ_m(X_t_k; θ̃_k+1, m(Q)) - 𝔼(V_k+1( X_t_k+1, Q) | X_t_k) ||_2^2 )_m ≥ 2 is a non-increasing sequence. Thus, keeping in mind equation (<ref>), if the function
H_k : (𝒩𝒩_m, |·|_sup) ×(𝒯_k+1, |·|) →ℝ, (Φ, Q) ⟼ ||L_k(Φ, Q)||_2^2 := ||Φ(X_t_k) - 𝔼(V_k+1(X_t_k+1, Q) | X_t_k) ||_2^2
is continuous, then thanks to Theorem <ref> (noticing that for all m ≥ 2, 𝒩𝒩_m is a compact set), the function Q ↦||Φ_m(X_t_k; θ̃_k+1, m(Q)) - 𝔼(V_k+1( X_t_k+1, Q) | X_t_k) ||_2^2 will be continuous on the compact set 𝒯_k+1. One may then use Dini's lemma and conclude that the pointwise convergence in (<ref>) is in fact uniform, which will complete the proof.
Note that we have already shown that Q ↦𝔼(V_k+1(X_t_k+1, Q) | X_t_k) is almost surely continuous under assumption ℋ_3, 2q. Moreover, using the classic inequality (a+b)^2 ≤ 2(a^2+b^2) and then the conditional Jensen inequality, we get
|L_k(Φ, Q)|^2 ≤ 2 ·|Φ(X_t_k)|^2 + 2 ·𝔼(V_k+1(X_t_k+1, Q)^2 | X_t_k) ≤ 2 ·|Φ(X_t_k)|^2 + 2 ·𝔼(G_k+1^2 | X_t_k) ∈𝕃^1_ℝ(ℙ),
where the existence of G_k+1∈𝕃^2_ℝ(ℙ) (independent of Q) follows from Remark <ref> and is implied by assumption ℋ_3, 2q. Note that the integrability of |Φ(X_t_k)|^2 follows from assumptions ℋ_1^𝒩𝒩 and ℋ_3, 2q. This implies that ||L_k(Φ, ·)||_2^2 is continuous. Besides, for any sequence (Φ_n)_n of 𝒩𝒩_m such that Φ_n →Φ, it follows from Lebesgue's dominated convergence theorem (enabled by assumptions ℋ_1^𝒩𝒩 and ℋ_3, 2q) that ||L_k(Φ_n, Q)||_2^2 →||L_k(Φ, Q)||_2^2. This shows that ||L_k(·, Q)||_2^2 is continuous. Therefore the function H_k is continuous and, as already mentioned, this completes the proof.
In the previous proposition, we assumed that the neural networks are continuous and of polynomial growth. This assumption is clearly satisfied by classic activation functions such as the ReLU function x ∈ℝ↦max(x,0) and the sigmoid function x ∈ℝ↦ 1 / (1 + e^-x).
§.§ Convergence of Monte Carlo approximation
From now on, we fix a positive integer m and focus on the convergence of the value function that arises from the second approximation (<ref>) or (<ref>). Unlike the preceding section, and for technical convenience, we restrict our analysis of the neural network approximation to the bang-bang setting. The least squares regression, however, will still be examined in a general context.
§.§.§ Least squares regression
We establish a convergence result under the following Hilbert assumption.
ℋ_5^LS: For all k=0,…,n-1, the sequence (e_i( X_t_k))_i ≥ 1 is a Hilbert basis of 𝕃^2(σ(X_t_k) ).
It is worth noting that this assumption is a special case of assumptions ℋ_1^LS and ℋ_2^LS with an orthonormality assumption on e^m(X_t_k). Furthermore, in the field of mathematical finance, the underlying asset's diffusion is often assumed to have a Gaussian structure, and it is well known that suitably normalized Hermite polynomials, namely {H_k(x/√(2))/√(2^k k!) , k ≥ 0 }, form a Hilbert basis of 𝕃^2(ℝ, μ), the space of square-integrable functions with respect to the standard Gaussian measure μ. The Hermite polynomials { H_k(x), k ≥ 0} are defined as follows: H_k(x) = (-1)^k e^x^2d^k/dx^k[ e^-x^2], or recursively by H_k+1(x) = 2x · H_k(x) - 2k · H_k-1(x) with H_0(x) = 1, H_1(x) = 2x. In the multidimensional setting, Hermite polynomials are obtained as products of one-dimensional Hermite polynomials. Finally, note that assumption ℋ_5^LS entails that A_m^k = A_m, N^k = I_m.
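As a small illustration of this remark, the sketch below builds the first normalized Hermite polynomials and checks by simulation that, for a standard Gaussian feature, the empirical Gram matrix is close to the identity, in which case the least squares coefficients reduce to simple empirical means. The probabilists' convention He_k(x) = 2^{-k/2} H_k(x/√2) is used for convenience; the sample size and the target function are arbitrary choices of the sketch.

```python
# Minimal check (assumption: X_{t_k} ~ N(0, 1)) that normalised Hermite polynomials
# give an approximately identity Gram matrix A_{m,N}.  Probabilists' Hermite
# polynomials He_k, divided by sqrt(k!), are orthonormal under the standard Gaussian.
import numpy as np
from math import factorial

def hermite_basis(x, m):
    """Return the N x m design matrix with columns He_k(x) / sqrt(k!), k = 0..m-1."""
    He = [np.ones_like(x), x]
    for k in range(1, m):
        He.append(x * He[k] - k * He[k - 1])        # He_{k+1} = x He_k - k He_{k-1}
    return np.column_stack([He[k] / np.sqrt(factorial(k)) for k in range(m)])

rng = np.random.default_rng(2)
N, m = 200_000, 5
X = rng.standard_normal(N)
E = hermite_basis(X, m)

A_mN = E.T @ E / N                                  # empirical Gram matrix
print(np.round(A_mN, 2))                            # close to the identity matrix I_m

# With A close to I, regression coefficients reduce to empirical means, e.g. for a target Y:
Y = np.sin(X) + 0.1 * rng.standard_normal(N)
theta = E.T @ Y / N                                 # no matrix inversion required
```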
The main result of this section aim at proving that the second approximation V_k^m, N of the swing value function converges towards the first approximation V_k^m as the Monte Carlo sample size N increases to +∞ and with a rate of convergence of order 𝒪(1/√(N)). To achieve this we rely on the following lemma which concern general Monte Carlo rate of convergence. Consider X_1, …, X_N independent and identically distributed random variables with order p (p ≥ 2) finite moment (with μ = 𝔼(X_1)). Then, there exists a positive constant B_p (only depending on the order p) such that ||1/N∑_i = 1^N X_i - μ||_p≤ B_p 2^p-1/p(𝔼(|X|^p) + |μ|^p )^1/p/√(N). It follows from Marcinkiewicz–Zygmund inequality that there exists a positive constant A_p (only depends on p) such that ||1/N∑_i = 1^N X_i - μ||_p^p = 𝔼((∑_i = 1^NX_i - μ/N)^p) ≤ A_p ·𝔼((1/N^2∑_i = 1^N (X_i - μ)^2 )^p/2) = A_p/N^p/2·𝔼((1/N∑_i = 1^N (X_i - μ)^2 )^p/2). Using the convexity of the function x ∈ℝ_+↦ x^p/2 yields, (1/N∑_i = 1^N (X_i - μ)^2 )^p/2≤1/N∑_i = 1^N (X_i - μ)^p. Thus taking the expectation and using the inequality, (a+b)^p ≤ 2^p-1(a^p + b^p) yields, ||1/N∑_i = 1^N X_i - μ||_p^p ≤A_p/N^p/2·𝔼((X - μ)^p ) ≤ A_p ·2^p-1(𝔼(|X|^p) + |μ|^p)/N^p/2. This completes the proof. In the following proposition, we show that using Hilbert basis as a regression basis allows to achieve a convergence with a rate of order 𝒪(1/√(N)). Under assumptions ℋ_3, ∞, ℋ_4, ∞^LS and ℋ_5^LS, for all k=0,…,n-1 and for any s > 1, we have Q ∈𝒯_ksup|| V^m, N_k(X_t_k, Q ) - V^m_k(X_t_k, Q ) ||_s = 𝒪(1/√(N)) as N → +∞. We prove this proposition using a backward induction on k. Since V^m, N_n-1( X_t_n-1, ·) = V^m_n-1(X_t_n-1, ·) on 𝒯_n-1, then the proposition holds for k = n-1. Assume now that the proposition holds for k+1. Using the inequality, |i ∈ Isup a_i - i ∈ Isup b_i| ≤i ∈ Isup |a_i-b_i| and then Cauchy-Schwartz' one, we get, |V^m, N_k(X_t_k, Q ) - V^m_k(X_t_k, Q) | ≤_q ∈ Adm(t_k, Q)|⟨θ_k+1,m,N(Q+q) - θ_k+1,m(Q+q), e^m(X_t_k)⟩| ≤|e^m(X_t_k) | ·_q ∈ Adm(t_k, Q)|θ_k+1,m,N(Q+q) - θ_k+1,m(Q+q) | ≤|e^m(X_t_k) | ·_q ∈𝒰_k(Q)|θ_k+1,m,N(Q + q) - θ_k+1,m(Q + q) |, where 𝒰_k(Q) is the set of all ℱ_t_k+1^X-measurable random variables lying within ℐ_k+1(Q) (see (<ref>)). The last inequality is due to the fact that ℱ_t_k^X ⊂ℱ_t_k+1^X. Then for some constants b, c > 1 such that 1/b + 1/c = 1, it follows from Hölder inequality that, ||V^m, N_k(X_t_k, Q) - V^m_k(X_t_k, Q) ||_s ≤|| |e^m(X_t_k) | ||_sb·|| _q ∈𝒰_k(Q)|θ_k+1,m,N(Q + q) - θ_k+1,m(Q + q) | ||_sc. To interchange the expectation and the essential supremum, we rely on the bifurcation property. Let q_1, q_2 ∈𝒰_k(Q) and denote by q^* = q_1 ·1_{B_k(Q, q_1) ≥ B_k(Q, q_2)} + q_2 ·1_{B_k(Q, q_1) < B_k(Q, q_2)} where B_k(Q, q_i) = |θ_k+1,m,N(Q+ q_i) - θ_k+1,m(Q+ q_i)|^sc for i ∈{1,2}. One can easily check that for all i ∈{1,2}, B_k(Q, q_i) is ℱ_t_k+1^X-measurable so that q^*∈𝒰_k(Q). We also have B_k(Q, q^*) = max(B_k(Q, q_1), B_k(Q, q_2) ). Thus one may use the bifurcation property in (<ref>), we get, ||V^m, N_k(X_t_k, Q) - V^m_k(X_t_k, Q) ||_s ≤|| |e^m(X_t_k)| ||_sb·q ∈𝒰_k(Q)sup|||θ_k+1,m,N(Q + q) - θ_k+1,m(Q + q) | ||_sc ≤|| |e^m(X_t_k) | ||_sb·Q ∈𝒯_k+1sup|||θ_k+1,m,N(Q) - θ_k+1,m(Q) | ||_sc. 
But for any Q ∈𝒯_k+1, it follows from Minkowski's inequality that, |||θ_k+1,m,N(Q) - θ_k+1,m(Q) | ||_sc = || | 1/N∑_p = 1^N e^m(X_t_k^[p]) · V^m, N_k+1(X_t_k+1^[p], Q) - 𝔼(e^m(X_t_k)V^m_k+1(X_t_k+1, Q)) | ||_sc ≤|||1/N∑_p = 1^N e^m(X_t_k^[p]) ·(V^m, N_k+1(X_t_k+1^[p], Q) - V^m_k+1(X_t_k+1^[p], Q)) | ||_sc + |||1/N∑_p = 1^N e^m(X_t_k^[p])V^m_k+1( X_t_k+1^[p], Q) - 𝔼(e^m(X_t_k)V^m_k+1(X_t_k+1, Q)) | ||_sc ≤||| e^m(X_t_k)| · |V^m, N_k+1(X_t_k+1, Q) - V^m_k+1(X_t_k+1, Q) | ||_sc + |||1/N∑_p = 1^N e^m(X_t_k^[p])V^m_k+1( X_t_k+1^[p], Q) - 𝔼(e^m(X_t_k)V^m_k+1(X_t_k+1, Q)) | ||_sc, where the last inequality comes from the fact that, for all p ≥ 1, (X_t_k^[p],X_t_k+1^[p]) has the same distribution with (X_t_k,X_t_k+1). Therefore, for some constants u, v > 1 such that 1/u + 1/v = 1, it follows from Hölder inequality, || |θ_k+1,m,N(Q) - θ_k+1,m(Q) | ||_sc ≤|||e^m(X_t_k)| ||_scu·||V^m, N_k+1(X_t_k+1, Q) - V^m_k+1(X_t_k+1, Q) ||_scv + |||1/N∑_p = 1^N e^m(X_t_k^[p])V^m_k+1( X_t_k+1^[p], Q) - 𝔼(e^m(X_t_k)V^m_k+1(X_t_k+1, Q)) | ||_sc. Taking the supremum in the previous inequality and plugging it into equation (<ref>) yields, Q ∈𝒯_ksup||V^m, N_k(X_t_k, Q ) - V^m_k(X_t_k, Q) ||_s ≤|| |e^m(X_t_k) | ||_sb·||| e^m(X_t_k)| ||_scu·Q ∈𝒯_k+1sup||V^m, N_k+1(X_t_k+1, Q) - V^m_k+1(X_t_k+1, Q) ||_scv + || |e^m(X_t_k) | ||_sb·Q ∈𝒯_k+1sup|||1/N∑_p = 1^N e^m(X_t_k^[p])V^m_k+1(X_t_k+1^[p], Q) - 𝔼(e^m(X_t_k)V^m_k+1(X_t_k+1, Q)) | ||_sc. Under assumption ℋ_4, r^LS and using induction assumption, the first term in the sum of the right hand side converges to 0 as N → +∞ with a rate of order 𝒪(1/√(N)). Once again, by assumption ℋ_4, ∞^LS, it remains to prove that it is also the case for the second term. But we have, C_N(Q) := |||1/N∑_p = 1^N e^m(X_t_k^[p])V^m_k+1( X_t_k+1^[p], Q) - 𝔼(e^m(X_t_k)V^m_k+1(X_t_k+1, Q)) | ||_sc = ||∑_j = 1^m(1/N∑_p = 1^N e_j(X_t_k^[p])V^m_k+1(X_t_k+1^[p], Q) - 𝔼(e_j(X_t_k)V^m_k+1(X_t_k+1, Q)) )^2 ||_sc/2^1/2 ≤∑_j = 1^m||1/N∑_p = 1^N e_j(X_t_k^[p])V^m_k+1(X_t_k+1^[p], Q) - 𝔼(e_j(X_t_k)V^m_k+1(X_t_k+1, Q)) ||_sc ≤A_ac/√(N)·∑_j = 1^m{𝔼(|e_j(X_t_k)V^m_k+1(X_t_k+1, Q)|^sc) + |𝔼(e_j(X_t_k)V^m_k+1(X_t_k+1, Q))|^sc}, where the second-last inequality comes from Minkowski inequality and the inequality, √(x+y)≤√(x) + √(y) for all x, y ≥ 0. The last inequality is obtained using Lemma <ref> (with a positive constant A_ac only depends on the order a and c). But using the continuity (which holds as noticed in Remark <ref>) of both functions Q ↦𝔼(|e_j(X_t_k)V^m_k+1(X_t_k+1, Q)|^sc) and Q ↦|𝔼(e_j(X_t_k)V^m_k+1( X_t_k+1, Q))|^sc on the compact set 𝒯_k+1 one may deduce that, as N → +∞, Q ∈𝒯_k+1sup C_N(Q) = 𝒪(1/√(N)). This completes the proof. It is worth noting that it is difficult to obtain an almost surely convergence result without further assumptions (for example boundedness assumption) of the regression functions. The preceding proposition is widely based on Hölder inequality emphasizing on why we have chosen the 𝕃^s(ℙ)-norm. However, in the neural network analysis that follows, we prove an almost surely convergence result. §.§.§ Neural network approximation We consider the discrete setting with integer volume constraints with a state of attainable cumulative consumptions given by (<ref>). Results in this section will be mainly based on Lemmas <ref> and <ref> stated below. Let (f_n)_n be a sequence of real functions defined on a compact set K ⊂ℝ^d. Define, v_n = x ∈ Kinf f_n(x) and x_n ∈_x ∈ K f_n(x). Then, we have the following two Lemmas. 
Assume that the sequence (f_n)_n converges uniformly on K to a continuous function f. Let v^* = x ∈ Kinf f_n(x) and 𝒮^* = _x ∈ K f(x). Then v_n → v^* and the distance d(x_n, 𝒮^*) between the minimizer x_n and the set 𝒮^* converges to 0 as n → +∞. Let (ξ_i)_i ≥ 1 be a sequence of i.i.d. ℝ^m-valued random vectors and h : ℝ^d ×ℝ^m →ℝ a measurable function. Assume that, * a.s., θ∈ℝ^d ↦ h(θ, ξ_1) is continuous, * For all C > 0, 𝔼( |θ| ≤ Csup| h(θ, ξ_1) | ) < +∞. Then, a.s. θ∈ℝ^d ↦1/N∑_i = 1^N h(θ, ξ_i) converges locally uniformly to the continuous function θ∈ℝ^d ↦𝔼(h(θ, ξ_1) ), i.e. lim_N → +∞[θ| ≤ Csup| 1/N∑_i = 1^N h(θ, ξ_i) - 𝔼(h(θ, ξ_1) ) | = 0 a.s. Combining the two preceding lemmas is the main tool to analyze the Monte Carlo convergence of the neural network approximation. The result is stated below and requires the following (additional) assumption. ℋ_3^𝒩𝒩: For any m ≥ 2, 0 ≤ k ≤ n-1, Q ∈𝒯_k and θ^1, θ^2 ∈𝒮_k^m(Q) (defined in (<ref>)), Φ_m(·; θ^1) = Φ_m(·; θ^2). This assumption just states that, almost surely, two minimizers bring the same value. Before showing the main result of this section, it is worth noting this important remark. [label=(*)]otherlist * Under assumptions ℋ_1^𝒩𝒩 and ℋ_3, q and using a straightforward backward induction in equation (<ref>), it can be shown that there exists a random variable G_k ∈𝕃^q_ℝ^d(ℙ) (independent of Q) such that |V_k^m(X_t_k, Q) | ≤ G_k for any Q ∈𝒯_k; where V_k^m is defined in (<ref>). * Under assumption ℋ_1^𝒩𝒩, there exists a positive constant κ_m such that, for any 0 ≤ k ≤ n-1 and any Q ∈𝒯_k, max(|V_k^m(X_t_k, Q)|, |V_k^m, N(X_t_k, Q)| ) ≤ q_max·|S_t_k - K| + κ_m ·(1 + |X_t_k|^q ). If in addition, assumption ℋ_3, q holds true, then the right hand side of the last inequality is an integrable random variable. We now state our result of interest. Let m ≥ 2. Under assumptions ℋ_1^𝒩𝒩, ℋ_2^𝒩𝒩, ℋ_3^𝒩𝒩 and ℋ_3, 2q, for any 0 ≤ k ≤ n-1, we have, lim_N → +∞Q ∈𝒯_ksup| V^m, N_k(X_t_k, Q ) - V^m_k(X_t_k, Q ) |= 0 a.s. Note that in ℋ_3, 2q, parameters q are that involved in assumption ℋ_1^𝒩𝒩. Recall that, the set 𝒯_k is the one of the discrete setting as discussed in (<ref>). We proceed by a backward induction on k. The proposition clearly holds true for k = n-1 since, almost surely, V_n-1^m, N(X_t_n-1, ·) = V_n-1^m(X_t_n-1, ·) on 𝒯_n-1. Assume now the proposition holds true for k+1. Let Q ∈𝒯_k. Using the inequality, |i ∈ Isup a_i - i ∈ Isup b_i| ≤i ∈ Isup |a_i-b_i| and then triangle inequality, we get, | V^m, N_k(X_t_k, Q ) - V^m_k(X_t_k, Q ) | ≤_q ∈ Adm(t_k, Q)|Φ_m(X_t_k; θ_k, m, N(Q+q) ) - Φ_m(X_t_k; θ_k, m, N(Q+q) ) | + _q ∈ Adm(t_k, Q)|Φ_m(X_t_k; θ_k, m, N(Q+q) ) - Φ_m(X_t_k; θ_k, m(Q+q) ) |, where θ_k, m, N(Q) lies within the following set, _θ∈Θ_m1/N∑_p = 1^N|V^m_k+1(X_t_k+1^[p], Q ) - Φ_m(X_t_k^[p]; θ) |^2. Then taking the supremum in (<ref>) and using triangle inequality, we get, Q ∈𝒯_ksup| V^m, N_k(X_t_k, Q ) - V^m_k(X_t_k, Q ) | ≤Q ∈𝒯_k+1sup|Φ_m(X_t_k; θ_k, m, N(Q) ) - Φ_m(X_t_k; θ_k, m(Q) ) | + 2 ·Q ∈𝒯_k+1sup|Φ_m(X_t_k; θ_k, m, N(Q) ) - Φ_m(X_t_k; θ_k, m(Q) ) |. We will handle the right hand side of the last inequality term by term. Let us start with the second term. Note that owing to assumption ℋ_2^𝒩𝒩, the function θ∈Θ_m ↦ V^m_k+1(X_t_k+1, Q ) - Φ_m(X_t_k; θ) is almost surely continuous. 
Moreover, for any C >0, using the inequality (a+b)^2 ≤ 2(a^2 + b^2) and assumption ℋ_1^𝒩𝒩, there exists a positive constant κ_m such that for any Q ∈𝒯_k+1, 𝔼(|θ| ≤ Csup| V^m_k+1(X_t_k+1, Q ) - Φ_m(X_t_k; θ) |^2 ) ≤ 2 ·𝔼(| V^m_k+1(X_t_k+1, Q ) |^2 ) + 2 ·|θ| ≤ Csup𝔼(|Φ_m(X_t_k; θ) |^2 ) ≤ 2 ·𝔼(| V^m_k+1(X_t_k+1, Q ) |^2 ) + 2κ_m (1 + 𝔼|X_t_k|^2q) and the right hand side of the last inequality is finite under assumption ℋ_3, 2q, keeping in mind point <ref> of Remark <ref>. Thus thanks to Lemma <ref>, almost surely, we have the uniform convergence on Θ_m, lim_N → +∞θ∈Θ_msup| 1/N∑_p = 1^N|V^m_k+1(X_t_k+1^[p], Q ) - Φ_m(X_t_k^[p]; θ) |^2 - || V^m_k+1(X_t_k+1, Q ) - Φ_m(X_t_k; θ) ||_2^2 | = 0. Thus, for any Q ∈𝒯_k+1, Lemma <ref> implies that lim_N → +∞ d(θ_k, m, N(Q), 𝒮_k^m(Q) ) = 0. We restrict ourselves to a subset with probability one of the original probability space on which this convergence holds and the random functions Φ_m(X_t_k; ·) are uniformly continuous (see assumption ℋ_2^𝒩𝒩). Then, there exists a sequence (α_k, m, N(Q))_N lying within 𝒮_k^m(Q) such that, lim_N → +∞|θ_k, m, N(Q) - α_k, m, N(Q) | = 0. Thus, the uniform continuity of functions Φ_m(X_t_k; ·) combined with assumption ℋ_3^𝒩𝒩 yield, |Φ_m(X_t_k; θ_k, m, N(Q) ) - Φ_m(X_t_k; θ_k, m(Q) ) | = |Φ_m(X_t_k; θ_k, m, N(Q) ) - Φ_m(X_t_k; α_k, m, N(Q) ) | 0. Furthermore, since the set 𝒯_k+1 has a finite cardinal (discrete setting) then, we have lim_N → +∞Q ∈𝒯_k+1sup|Φ_m(X_t_k; θ_k, m, N(Q) ) - Φ_m(X_t_k; θ_k, m(Q) ) | = 0. It remains to handle the first term in the right hand side of inequality (<ref>). Note that, if the following uniform convergence, lim_N → +∞θ∈Θ_msup| 1/N∑_p = 1^N|V^m, N_k+1(X_t_k+1^[p], Q ) - Φ_m(X_t_k^[p]; θ) |^2 - 1/N∑_p = 1^N|V^m_k+1(X_t_k+1^[p], Q ) - Φ_m(X_t_k^[p]; θ) |^2 |_:=|Δ_k, m, N^Q(θ)|= 0 holds true, then the latter uniform convergence will entail the following one owing to the uniform convergence (<ref>), lim_N → +∞θ∈Θ_msup| 1/N∑_p = 1^N|V^m, N_k+1(X_t_k+1^[p], Q ) - Φ_m(X_t_k^[p]; θ) |^2 - || V^m_k+1(X_t_k+1, Q ) - Φ_m(X_t_k; θ) ||_2^2 | = 0 and the desired result follows. To achieve this, we start by proving the uniform convergence (<ref>). Then we show how its implication (<ref>) entails the desired result. Using triangle inequality and the elementary identity, a^2 - b^2 = (a-b)(a+b), we have, |Δ_k, m, N^Q(θ)| ≤1/N∑_p = 1^N|V^m, N_k+1(X_t_k+1^[p], Q ) + V^m_k+1(X_t_k+1^[p], Q ) - 2 ·Φ_m(X_t_k^[p]; θ) | ·| V^m, N_k+1(X_t_k+1^[p], Q ) - V^m_k+1(X_t_k+1^[p], Q ) | ≤2/N∑_p = 1^N(q_max|S_t_k+1^[p] - K| + κ_m (1 + |X_t_k+1^[p]|^q) + κ_m (1 + |X_t_k^[p]|^q) ) ·| V^m, N_k+1(X_t_k+1^[p], Q ) - V^m_k+1(X_t_k+1^[p], Q ) | where in the last inequality we used assumption ℋ_1^𝒩𝒩 and the point <ref> of Remark <ref>. Let ε > 0. Then using the induction assumption and the law of large numbers, we get, lim sup_Nθ∈Θ_msup|Δ_k, m, N^Q(θ)| ≤ 2ε·𝔼( q_max|S_t_k+1 - K| + κ_m (1 + |X_t_k+1|^q) + κ_m (1 + |X_t_k|^q) ). Hence letting ε→ 0 entails the result (<ref>). Theorefore, as already mentioned, the result (<ref>) also holds true. Thus, using Lemma <ref>, we get that lim_N → +∞ d(θ_k, m, N(Q), 𝒮_k^m(Q) ) = 0. We restrict ourselves to a subset with probability one of the original probability space on which this convergence holds and the random functions Φ_m(X_t_k; ·) are uniformly continuous (see assumption ℋ_2^𝒩𝒩). Whence, for any Q ∈𝒯_k+1, there exists a sequence (β_k, m, N(Q))_N lying within 𝒮_k^m(Q) such that, lim_N → +∞|θ_k, m, N(Q) - β_k, m, N(Q) | = 0. 
Thus, the uniform continuity of functions Φ_m(X_t_k; ·) combined with assumption ℋ_3^𝒩𝒩 yield, |Φ_m(X_t_k; θ_k, m, N(Q) ) - Φ_m(X_t_k; θ_k, m(Q) ) | = |Φ_m(X_t_k; θ_k, m, N(Q) ) - Φ_m(X_t_k; β_k, m, N(Q) ) | 0. Then, since the set 𝒯_k+1 has a finite cardinal (discrete setting), we have lim_N → +∞Q ∈𝒯_k+1sup|Φ_m(X_t_k; θ_k, m, N(Q) ) - Φ_m(X_t_k; θ_k, m(Q) ) | = 0. Combining equations (<ref>) and (<ref>) in equation (<ref>) yield the desired result. §.§ Deviation inequalities: the least squares setting To end this paper, we present some additional results related to the least squares approximation. These results focus on some deviation inequalities on the error between estimates (<ref>), (<ref>) and the swing actual value function (<ref>). We no longer consider the Hilbert assumption ℋ_5^LS. Let us start with the first proposition of this section. Let δ > 0 and k = 0, …, n-2. Under assumptions ℋ_3, ∞ and ℋ_4, ∞^LS, for all s ≥ 2, there exists a positive constant D_s, k, m such that, ℙ(_Q ∈𝒬_k|1/N∑_p = 1^N e^m(X_t_k^[p]) V^m_k+1(X_t_k+1^[p], Q) - 𝔼(e^m(X_t_k) V^m_k+1( X_t_k+1, Q) )| ≥δ) ≤D_s, k, m/δ^s N^s/2 where 𝒬_k is the set of all ℱ_t_k^X-measurable random variables lying within 𝒯_k+1. Note that 𝒬_k ⊂𝒬_k'; with the latter set being the set of all ℱ_t_k+1^X-measurable random variables lying within 𝒯_k+1. Then we have, ℙ(_Q ∈𝒬_k|1/N∑_p = 1^N e^m(X_t_k^[p]) V^m_k+1( X_t_k+1^[p], Q) - 𝔼(e^m(X_t_k) V^m_k+1(X_t_k+1, Q) )| ≥δ) ≤ℙ(_Q ∈𝒬_k'|1/N∑_p = 1^N e^m(X_t_k^[p]) V^m_k+1(X_t_k+1^[p], Q) - 𝔼(e^m(X_t_k) V^m_k+1(X_t_k+1, Q) )| ≥δ) ≤ A_s Q ∈𝒬_k'sup{𝔼(| e^m(X_t_k) V^m_k+1(X_t_k+1, Q) |^s ) + 𝔼(| e^m(X_t_k) V^m_k+1(X_t_k+1, Q) | )^s }/N^s/2·δ^s ≤ 2A_s Q ∈𝒬_k'sup𝔼(| e^m(X_t_k) V^m_k+1(X_t_k+1, Q) |^s )/N^s/2·δ^s where in the second-last inequality, we successively used Markov inequality, bifurcation property and Lemma <ref> (enabled by assumptions ℋ_3, ∞ and ℋ_4, ∞^LS) with A_s = B_s^s · 2^s-1 and B_s being a positive constant which only depends on a. To obtain the last inequality, we used Jensen inequality. Besides, following the definition of 𝒬_k' we have, Q ∈𝒬_k'sup𝔼(| e^m(X_t_k) V^m_k+1(X_t_k+1, Q) |^s ) ≤Q ∈𝒯_k+1sup𝔼(| e^m(X_t_k) V^m_k+1(X_t_k+1, Q) |^s ). Then owing to Remark <ref>, the right hand side of the last inequality is a supremum of a continuous function over a compact set; thus finite. Hence it suffices to set, D_s, k, m := 2A_s ·Q ∈𝒯_k+1sup𝔼(| e^m(X_t_k) V^m_k+1(X_t_k+1, Q) |^s ) < + ∞. Which completes the proof. In the following proposition, we state a deviation inequality connecting the estimates of the orthogonal projection coordinates involved in the least squares regression. Consider assumptions ℋ_3, ∞ and ℋ_4, ∞^LS. For all k=0, …, n-2, δ > 0 and s ≥ 2 there exists a positive constant C_s, k, m such that, ℙ(_Q ∈𝒬_k|θ_k, m, N(Q) - θ_k, m(Q) | ≥δ) ≤C_s, k, m/b(s, δ) · N^s/2 where b(s, δ) = δ ^s if δ∈ (0,1] else b(s, δ) = δ ^s/2. We proceed by a backward induction on k. Recall that, for any Q ∈𝒯_n-1, V_n-1^m, N(·, Q) = V_n-1^m(·, Q). 
Thus, it follows from triangle inequality, |θ_n-2, m, N(Q) - θ_n-2, m(Q) | = | (A_m, N^n-2)^-11/N∑_p = 1^N e^m(X_t_n-2^[p]) V^m_n-1(X_t_n-1^[p], Q) - (A_m^n-2)^-1𝔼(e^m(X_t_n-2) V^m_n-1(X_t_n-1, Q) )| ≤|(A_m, N^n-2)^-1(1/N∑_p = 1^N e^m(X_t_n-2^[p]) V^m_n-1(X_t_n-1^[p], Q) - 𝔼(e^m(X_t_n-2) V^m_n-1( X_t_n-1, Q) ) ) | + |((A_m, N^n-2)^-1 - (A_m^n-2)^-1) ·𝔼(e^m(X_t_n-2) V^m_n-1(X_t_n-1, Q) ) | = |(A_m, N^n-2)^-1(1/N∑_p = 1^N e^m(X_t_n-2^[p]) V^m_n-1(X_t_n-1^[p], Q) - 𝔼(e^m(X_t_n-2) V^m_n-1(X_t_n-1, Q)) ) | + |((A_m^n-2)^-1(A_m^n-2 - A_m, N^n-2)(A_m, N^n-2)^-1) 𝔼(e^m(X_t_n-2) V^m_n-1(X_t_n-1, Q) ) | where in the last equality we used the matrix identity A^-1 - B^-1 = B^-1 (B-A) A^-1 for all non-singular matrices A, B. Hence taking the essential supremum and keeping in mind that the matrix norm |·| is submultiplicative yields, _Q ∈𝒬_n-2|θ_n-2, m, N(Q) - θ_n-2, m(Q) | ≤|(A_m, N^n-2)^-1| ·_Q ∈𝒬_n-2|1/N∑_p = 1^N e^m(X_t_n-2^[p]) V^m_n-1(X_t_n-1^[p], Q) - 𝔼(e^m(X_t_n-2) V^m_n-1(X_t_n-1, Q)) | + C_n-2·|(A_m^n-2)^-1(A_m^n-2 - A_m, N^n-2)(A_m, N^n-2)^-1| where C_n-2 := Q ∈𝒯_n-1sup|𝔼(e^m(X_t_n-2) V^m_n-1(X_t_n-1, Q) )| < +∞. For any ε > 0 and k = 0, …, n-2, denote by Ω_k^ε := {|A_m, N^k - A_m^k| ≤ε}. Then one may choose ε such that |(A_m, N^k)^-1| ≤ 2 |(A_m^k)^-1| on Ω_k^ε. Thus there exists positive constants K_1, K_2 such that on Ω_n-2^ε, _Q ∈𝒬_n-2|θ_n-2, m, N(Q) - θ_n-2, m(Q) | ≤ K_1 ·_Q ∈𝒬_n-2| 1/N∑_p = 1^N e^m(X_t_n-2^[p]) V^m_n-1(X_t_n-1^[p], Q) - 𝔼(e^m(X_t_n-2) V^m_n-1( X_t_n-1, Q) ) | + K_2 ·ε. Therefore, the law of total probability yields, ℙ(_Q ∈𝒬_n-2|θ_n-2, m, N(Q) - θ_n-2, m(Q) | ≥δ) ≤ℙ(_Q ∈𝒬_n-2|1/N∑_p = 1^N e^m(X_t_n-2^[p]) V^m_n-1(X_t_n-1^[p], Q) - 𝔼(e^m(X_t_n-2) V^m_n-1( X_t_n-1, Q) ) | ≥δ - K_2 ·ε/K_1) + ℙ((Ω_n-2^ε)^c) ≤D_s, n-2, m/(δ - K_2 ·ε)^s N^s/2 + U_n-2, m/ε^s N^s/2 where the majoration for the first probability in the second-last line comes from Proposition <ref> and constant D_a, n-2, m embeds constant K_1. The majoration of ℙ((Ω_n-2^ε)^c) is straightforward using successively Markov inequality and Lemma <ref>. Then, choosing ε = ρδ for some ρ > 0 sufficiently small yields, ℙ(_Q ∈𝒬_n-2|θ_n-2, m, N(Q) - θ_n-2, m(Q) | ≥δ) ≤C_s, n-2, m/δ^s N^s/2≤{[ C_s, n-2, m/δ ^s N^s/2ifδ∈ (0, 1],; C_s, n-2, m/δ ^s/2 N^s/2else ].. for some positive constant C_a, n-2, m. Now let us assume that the proposition holds for k+1 and show that it also holds for k. For any Q ∈𝒯_k+1, it follows from triangle inequality that, |θ_k, m, N(Q) - θ_k, m(Q) | ≤|(A_m, N^k)^-1| ·|1/N∑_p = 1^N e^m(X_t_k^[p]) (V^m, N_k+1( X_t_k+1^[p], Q) - V^m_k+1(X_t_k+1^[p], Q)) | +|(A_m, N^k)^-1| ·|1/N∑_p = 1^N e^m(X_t_k^[p])V^m_k+1(X_t_k+1^[p], Q) - 𝔼(e^m(X_t_k) V^m_k+1(X_t_k+1, Q) )| +|(A_m^k)^-1(A_m^k - A_m, N^k)(A_m, N^k)^-1·𝔼(e^m(X_t_k) V^m_k+1(X_t_k+1, Q)) | ≤|(A_m, N^k)^-1| ·1/N∑_p = 1^N|e^m(X_t_k^[p])| ·|V^m, N_k+1(X_t_k+1^[p], Q) - V^m_k+1(X_t_k+1^[p], Q) | + |(A_m, N^k)^-1| ·|1/N∑_p = 1^N e^m(X_t_k^[p])V^m_k+1(X_t_k+1^[p], Q) - 𝔼(e^m(X_t_k) V^m_k+1(X_t_k+1, Q) )| + |(A_m^k)^-1(A_m^k - A_m, N^k)(A_m, N^k)^-1·𝔼(e^m(X_t_k) V^m_k+1(X_t_k+1, Q) ) |. But for all 1 ≤ p ≤ N, Cauchy-Schwartz inequality yields, |V_k+1^m, N(X_t_k+1^[p], Q) - V_k+1^m(X_t_k+1^[p], Q) | ≤_q ∈ Adm(t_k+1, Q)⟨θ_k+1, m, N(Q+q) - θ_k+1, m(Q+q), e^m(X_t_k+1^[p]) ⟩ ≤|e^m(X_t_k+1^[p]) | ·_q ∈ Adm(t_k+1, Q)|θ_k+1, m, N(Q+q) - θ_k+1, m(Q+q)|. 
Thus, |θ_k, m, N(Q) - θ_k, m(Q) | ≤( |(A_m, N^k)^-1|/N∑_p = 1^N|e^m(X_t_k^[p])| ·|e^m(X_t_k+1^[p]) | ) _q ∈ Adm(t_k+1, Q)|θ_k+1, m, N(Q+q) - θ_k+1, m(Q+q)| + |(A_m, N^k)^-1| ·|1/N∑_p = 1^N e^m(X_t_k^[p])V^m_k+1(X_t_k+1^[p], Q) - 𝔼(e^m(X_t_k) V^m_k+1(X_t_k+1, Q) )| + |(A_m^k)^-1(A_m^k - A_m, N^k)(A_m, N^k)^-1·𝔼(e^m(X_t_k) V^m_k+1(X_t_k+1, Q) ) |. Therefore, on Ω_k^ε, there exists some positive constants K_1, K_2, K_3 such that, _Q ∈𝒬_k|θ_k, m, N(Q) - θ_k, m(Q) | ≤ K_1(1/N∑_p = 1^N|e^m(X_t_k^[p])| ·|e^m(X_t_k+1^[p]) | )_I_N^1_Q ∈𝒬_k+1|θ_k+1, m, N(Q) - θ_k+1, m(Q)|_I_N^2 + K_2 ·_Q ∈𝒬_k+1|1/N∑_p = 1^N e^m(X_t_k^[p])V^m_k+1(X_t_k+1^[p], Q) - 𝔼(e^m(X_t_k) V^m_k+1(X_t_k+1, Q) )|_I_N^3+ K_3 ·ε where to obtain the coefficient K_3 in the last inequality, we used the fact that, _Q ∈𝒬_k+1𝔼(e^m(X_t_k) V^m_k+1(X_t_k+1, Q) ) ≤Q ∈𝒯_k+1sup𝔼(e^m(X_t_k) V^m_k+1(X_t_k+1, Q) ) < + ∞. The term I_N^3 can be handled using Proposition <ref>. Then, it suffices to prove that, ℙ( I_N^1 · I_N^2 ≥δ) ≤K/δ^a · N^a/2 for some positive constant K. But we have, ℙ( I_N^1 · I_N^2≥δ) = 1 - ℙ( I_N^1 · I_N^2 ≤δ) ≤ 1 - ℙ( I_N^1 ≤√(δ); I_N^2 ≤√(δ)) ≤ℙ( I_N^1 ≥√(δ)) + ℙ(I_N^2 ≥√(δ)). Moreover, by the induction assumption, we know that, there exists a positive constant B_a, k, m such that, ℙ(I_N^2 ≥√(δ)) ≤B_s, k, m/δ ^s/2 N^s/2≤{[ B_s, k, m/δ ^s N^s/2ifδ∈ (0, 1],; B_s, k, m/δ ^s/2 N^s/2otherwise. ]. In addition, it follows from Markov inequality and Lemma <ref> that there exists a positive constant M_a, k, m such that ℙ( I_N^1 ≥√(δ)) ≤M_s, k, m/δ^s N^s/2≤{[ M_s, k, m/δ ^s N^s/2ifδ∈ (0, 1],; M_s, k, m/δ ^s/2 N^s/2otherwise. ]. Hence, there exists a positive constant C_s, k, m such that, ℙ( I_N^1 · I_N^2 ≥δ) ≤{[ C_s, k, m/δ ^s N^s/2ifδ∈ (0, 1],; C_s, k, m/δ ^s/2 N^s/2otherwise ]. and this completes the proof. We now state the last result of this paper concerning a deviation inequality involving the actual swing value function. Consider assumptions ℋ_3, ∞ and ℋ_4, ∞^LS. For all k=0, …, n-2, δ > 0 and s ≥ 2 there exists a positive constant C_s, k, m such that, ℙ(_Q ∈𝒬_k|V_k^m, N(X_t_k, Q) - V_k^m(X_t_k, Q) | ≥δ) ≤C_s, k, m/b(s, δ) · N^s/2. Using the inequality, |i ∈ Isup a_i - i ∈ Isup b_i| ≤i ∈ Isup |a_i-b_i| and then Cauchy-Schwartz' inequality, we have, _Q ∈𝒬_k| V_k^m, N(X_t_k, Q) - V_k^m(X_t_k, Q) | ≤|e^m(X_t_k) | ·_Q ∈𝒬_k+1| θ_k + 1, m, N(Q) - θ_k+1, m(Q) |. Thus, using the same argument as in (<ref>), we get, ℙ( _Q ∈𝒬_k| V_k^m, N(X_t_k, Q) - V_k^m(X_t_k, Q) | ≥δ) ≤ℙ( |e^m(X_t_k) | ·_Q ∈𝒬_k+1| θ_k + 1, m, N(Q) - θ_k+1, m(Q) | ≥δ) ≤ℙ( |e^m(X_t_k) | ≥√(δ)) + ℙ( _Q ∈𝒬_k+1| θ_k + 1, m, N(Q) - θ_k+1, m(Q) | ≥√(δ)) ≤K_s, k, m^1/δ^s/2· N^s/2 + K_s, k, m^2/b(s, δ) · N^s/2≤{[ K_s, k, m/δ ^s N^s/2ifδ∈ (0, 1]; K_s, k, m/δ ^s/2 N^s/2otherwise ]. for some positive constant K_s, k, m, where the constant K_s, k, m^1 comes from Markov inequality (enabled by assumption ℋ_4, ∞^LS). The existence of the positive constant K_s, k, m^2 results from Proposition <ref> (enabled by assumptions ℋ_3, ∞ and ℋ_4, ∞^LS). The coefficient b(a, δ) is also defined in Proposition <ref>. This completes the proof. The preceding proposition entails the following result as a straightforward corollary. For all k = 0, …, n-1 and for any Q ∈𝒯_k, we have, ℙ( |V_k^m, N(X_t_k, Q) - V_k^m(X_t_k, Q) | ≥δ) ≤C_s, k, m/b(s, δ) · N^s/2. If we assume that m ≥ 1sup C_s, k, m < +∞, then for any s ≥ 2, we have the following uniform convergence, lim_N → +∞m ≥ 1supQ ∈𝒯_ksupℙ( |V_k^m, N(X_t_k, Q) - V_k^m(X_t_k, Q) | ≥δ) = 0. 
But it follows from triangle inequality that, ℙ( |V_k^m, N(X_t_k, Q) - V_k(X_t_k, Q) | ≥δ) = 1 - ℙ( |V_k^m, N(X_t_k, Q) - V_k(X_t_k, Q) | ≤δ) ≤ 1 - ℙ({|V_k^m, N(X_t_k, Q) - V_k^m(X_t_k, Q) | ≤δ/2 }∩{|V_k^m(X_t_k, Q) - V_k(X_t_k, Q) | ≤δ/2}) ≤ℙ( |V_k^m, N(X_t_k, Q) - V_k^m(X_t_k, Q) | ≥δ/2 ) + ℙ( |V_k^m(X_t_k, Q) - V_k(X_t_k, Q) | ≥δ/2) ≤ℙ( |V_k^m, N(X_t_k, Q) - V_k^m(X_t_k, Q) | ≥δ/2 ) + 4 ·||V_k^m(X_t_k, Q) - V_k(X_t_k, Q) | |_2^2/δ^2, where in the last inequality, we used Markov inequality. Then using Proposition <ref> and result (<ref>) yields, lim_m → +∞lim_N → +∞Q ∈𝒯_ksupℙ( |V_k^m, N(X_t_k, Q) - V_k(X_t_k, Q) | ≥δ) = 0. The latter result implies that for a well-chosen and sufficiently large regression basis, the limit, lim_N → +∞Q ∈𝒯_ksupℙ( |V_k^m, N(X_t_k, Q) - V_k(X_t_k, Q) | ≥δ) may be arbitrary small insuring in some sense the theoretical effectiveness of the least squares procedure in the context of swing pricing. § ACKNOWLEDGMENTS The author would like to thank Gilles Pagès and Vincent Lemaire for fruitful discussions. The author would also like to express his gratitude to Engie Global Markets for funding his PhD thesis. alpha * § APPENDIX §.§ Some useful results We present some materials used in this paper. The following lemma allows to show the continuity of the supremum of a continuous function when the supremum is taken over a set depending of the variable of interest. Consider a continuous function f : ℝ→ℝ and let A and B be two non-increasing and continuous real-valued functions defined on ℝ such that for all Q ∈ℝ, A(Q) ≤ B(Q). Then the function g: Q ∈ℝ↦q ∈ [A(Q), B(Q)]sup f(q) is continuous. To prove this lemma, we proceed by proving the function g is both left and right continuous. Let us start with the right-continuity. Let Q ∈ℝ and h a positive real number. Since A and B are non-increasing functions, two cases can be distinguished A(Q + h) ≤ A(Q) ≤ B(Q + h) ≤ B(Q). Using the definition of g, we have, g(Q + h) = max(q ∈ [A(Q + h), A(Q)]sup f(q), q ∈ [A(Q), B(Q + h)]sup f(q) ). Since f is continuous on the compact set [A(Q + h), A(Q)], it attains its maximum on a point α(Q, h) ∈ [A(Q + h), A(Q)]. Owing to the squeeze theorem, the latter implies that lim_h → 0α(Q, h) = A(Q) since A is a continuous function. Thus it follows from the continuity of f lim_h → 0 >q ∈ [A(Q + h), A(Q)]sup f(q) = lim_h → 0 > f(α(Q, h)) = f(A(Q)). Moreover, since B(Q + h) ≤ B(Q), we have q ∈ [A(Q), B(Q + h)]sup f(q) ≤q ∈ [A(Q), B(Q)]sup f(q) = g(Q). Thus by the continuity of the maximum function and taking the limit in (<ref>) yields lim_h → 0 > g(Q + h) ≤lim_h → 0 >max(q ∈ [A(Q + h), A(Q)]sup f(q), g(Q)) = max(lim_h → 0 >q ∈ [A(Q + h), A(Q)]sup f(q), g(Q)). = max(f(A(Q)), g(Q)) ≤ g(Q). It remains to prove that lim_h → 0 > g(Q + h) ≥ g(Q) to get the right-continuity. But since A(Q + h) ≤ A(Q) g(Q) ≤q ∈ [A(Q + h), B(Q)]sup f(q) = max(g(Q+h), q ∈ [B(Q + h), B(Q)]sup f(q) ). As above, using the continuity of f on the compact set [B(Q + h), B(Q)] yields lim_h → 0 >q ∈ [B(Q + h), B(Q)]sup f(q) = f(B(Q)). Therefore taking the limit in (<ref>) yields g(Q) ≤max(lim_h → 0 > g(Q+h), f(B(Q)) ) = max(lim_h → 0 > g(Q+h), lim_h → 0 > f(B(Q + h)) ) ≤lim_h → 0 > g(Q+h). where in the last inequality we used the fact that, f(B(Q + h)) ≤ g(Q+h). This gives the right-continuity in this first case. Let us consider the second case. 
A(Q + h) ≤ B(Q + h) ≤ A(Q) ≤ B(Q) Since B(Q+h) ≤ A(Q), it follows from the definition of g that, lim_h → 0 > g(Q+h) ≤max(lim_h → 0 >q ∈ [A(Q + h), A(Q)]sup f(q), g(Q) ) = max(f(A(Q)), g(Q) ) = g(Q). where we used as above the continuity of f on the compact set [A(Q + h), A(Q)]. Moreover, notice that g(Q) ≤q ∈ [A(Q + h), B(Q)]sup f(q) = max(g(Q+h), q ∈ [B(Q + h), B(Q)]sup f(q) ) Then, taking the limit in the last inequality yields, g(Q) ≤max(lim_h → 0 > g(Q+h), lim_h → 0 >q ∈ [B(Q + h), B(Q)]sup f(q) ) =max(lim_h → 0 > g(Q+h), f(B(Q)) ) = max(lim_h → 0 > g(Q+h), lim_h → 0 > f(B(Q+h)) ) ≤lim_h → 0 > g(Q+h). Thus, from equations (<ref>) and (<ref>) one may deduce that lim_h → 0 > g(Q+h) = g(Q). So that g is a right-continuous function. Proving the left-continuity can be handled in the same way. The idea is the following. We start with h a negative real number and consider the two following cases: A(Q) ≤ A(Q+h) ≤ B(Q) ≤ B(Q+h) and A(Q) ≤ B(Q) ≤ A(Q+h) ≤ B(Q+h) and proceed as for the right-continuity. Which will give lim_h → 0 < g(Q+h) = g(Q). Therefore g is a continuous function on ℝ. The following theorem also concerns the continuity of function in a parametric optimization. If X, Y are topological spaces and Y is compact, then for any continuous function f : X × Y →ℝ, the function g(x) := y ∈ Yinf f(x,y) is well-defined and continuous. . Note that g(x) > -∞ since for any fixed x∈ X, f(x,·):Y→ℝ is a continuous function defined on a compact space, and hence the infimum is attained. Then using that the sets (-∞,a) and (b,∞) form a subbase for the topology of ℝ, it suffices to check that g^-1((-∞,a)) and g^-1((b,∞)) are open. Let π_X be the canonical projection π_X:X× Y→ X, which we recall is continuous and open. It is easy to see that g^-1((-∞,a)) = π_X ∘ f^-1((-∞,a)). Thus since f and π_X are continuous, g^-1((-∞,a)) is open. We now need to show that g^-1((b,∞)) is open. We rely on the compactness of Y. Observe that, g(x) > b f(x,y) > b ∀ y ∀ y, (x,y) ∈ f^-1((b,∞)). Since f is continuous, then f^-1((b,∞)) is open. The latter implies that for all x∈ g^-1((b,∞)) and for all y∈ Y there exists a box neighborhood U_(x,y)× V_(x,y) contained in f^-1((b,∞)). Now using compactness of Y, a finite subset {(x,y_i)} of all these boxes cover {x}× Y and we get, {x}× Y ⊂( ∩_i = 1^k U_(x,y_i))× Y ⊂ f^-1((b,∞)) and hence g^-1((b,∞)) = ∪_x∈ g^-1((b,∞))∩_i = 1^k(x) U_x,y_i is open. Which completes the proof. [Gram determinant] Let F be a linear subspace with dimension n of a pre-Hilbert space E. Consider (x_1, …, x_n) as a basis of F and x ∈ E. Let p(x) denotes the orthogonal projection of x onto F. Then, G(x, x_1, …, x_n) = || x - p(x) ||^2 · G(x_1, …, x_n) where G(x_1, …, x_n) denotes the Gram determinant associated to (x_1, …, x_n). . Note that p(x) is a linear combination of (x_i)_1 ≤ i ≤ n. Since the determinant is stable by elementary operation, we have G(x, x_1, …, x_n) = G(x - p(x), x_1, …, x_n). But x - p(x) is orthogonal to each x_i so that, G(x - p(x), x_1, …, x_n) = || x - p(x) ||^2 · G(x_1, …, x_n). this completes the proof. §.§ Correspondences This section concerns correspondence and the well known Berge's maximum theorem. For a thorough analysis of the concept of correspondence, one may refer to Chapter 2 and 6 in <cit.>. Let X and Y be two non-empty sets. * a correspondence Γ from X to 2^Y (noted: Γ: X ⇉ 2^Y) is a mapping that associates for all x ∈ X a subset Γ(x) of Y. Moreover for all subset S ⊆ X, Γ(S) := ∪_x ∈ S^Γ(x). 
* a correspondence Γ is single-valued if Card(Γ(x)) = 1 for all x ∈ X * a correspondence Γ is compact-valued (or closed-valued) if for all x ∈ X, Γ(x) is a compact (or closed) set. Notice that a single-valued correspondence can be thought of as a function mapping X into Y. Thus as correspondences appear to be a generalization of functions some properties or definitions in functions has their extension in correspondences. Specially the continuity for a classic numerical function is a particular case of the hemicontinuity for a correspondence. Let (X, d_X) and (Y, d_Y) be two metric spaces and Γ: X ⇉ 2^Y a correspondence. * Γ is upper hemicontinuous at x ∈ X if and only if for any open set V such that Γ(x) ⊆ V, there exists an open set U ∋ x such that for all y ∈ U, Γ(y) ⊆ V. * Γ is lower hemicontinuous at x ∈ X if and only if for any open set V such that Γ(x) ∩ V ≠∅, there exists an open set U ∋ x such that for all y ∈ U, V ∩Γ(y) ≠∅. As for continuous functions on a metric space, there exists a sequential characterization of the hemicontinuity. [Sequential characterization of hemicontinuity] Let (X, d_X) and (Y, d_Y) be two metric spaces and Γ: X ⇉ 2^Y a correspondence. * Γ is lower hemicontinuous at x ∈ X if and only if for all sequence (x_n)_n ∈ℕ∈ X^ℕ that converges towards x, for all y ∈Γ(x) there exists a subsequence (x_n_k)_k ∈ℕ of (x_n)_n ∈ℕ and a sequence (y_k)_k ∈ℕ such that y_k ∈Γ(x_n_k) for all k ∈ℕ and y_k → y. * if Γ is upper hemicontinuous at x ∈ X then for all sequence (x_n)_n ∈ℕ∈ X^ℕ and all sequence (y_n)_n ∈ℕ such that for all n ∈ℕ, y_n ∈Γ(x_n), there exists a convergent subsequence of (y_n)_n ∈ℕ whose limit lies in Γ(x). If Y is compact then, the converse holds true. An important result relating correspondence and parametric optimization is the Berge's maximum theorem. [Berge's maximum theorem] Let 𝒬 and Y be two topological spaces, Γ: 𝒬⇉ 2^Y a compact-valued and continuous correspondence and ϕ a continuous function on the product space Y ×𝒬. Define for all Q∈𝒬 σ(Q) := _q ∈Γ(Q)ϕ(q, Q) ϕ^*(Q) := q ∈Γ(Q)maxϕ(q, Q). Then, * The correspondence σ: 𝒬⇉ Y is compact-valued, upper hemicontinuous, and closed * The function ϕ^*: 𝒬→ℝ is continuous
http://arxiv.org/abs/2307.06116v1
20230712121513
Scalable generation and detection of on-demand W states in nanophotonic circuits
[ "Jun Gao", "Leonardo Santos", "Govind Krishna", "Ze-Sheng Xu", "Adrian Iovan", "Stephan Steinhauer", "Otfried Gühne", "Philip J. Poole", "Dan Dalacu", "Val Zwiller", "Ali W. Elshaari" ]
quant-ph
[ "quant-ph", "physics.optics" ]
Quantum physics phenomena such as entanglement and coherence are crucial for quantum information protocols, but understanding them in systems with more than two parts is challenging due to the increasing complexity. The W state, a multipartite entangled state, is notable for its robustness and its benefits in quantum communication. Here, we generate an 8-mode on-demand single-photon W state using nanowire quantum dots and a silicon nitride photonic chip. We demonstrate a reliable, scalable technique for reconstructing W states in photonic circuits using Fourier and real-space imaging, supported by the Gerchberg-Saxton phase retrieval algorithm. Additionally, we utilize an entanglement witness to distinguish between mixed and entangled states, thereby affirming the entangled nature of our generated state. The study provides a new imaging approach for assessing multipartite entanglement in W states, paving the way for further progress in image processing and Fourier-space analysis techniques for complex quantum systems.
Scalable generation and detection of on-demand W states in nanophotonic circuits
Ali W. Elshaari
August 12, 2023
=================================================================================
Correlations form the basis for scientific inferences about the world. One of the most notable examples is that of causal inference, where correlations between events are explained in terms of models that relate them through direct causation and/or a shared common cause<cit.>. This is a central paradigm in data analysis across the sciences (e.g., cosmology, medical and social sciences), whose results impact everything from our own understanding of reality to decision making in public policies. In all these cases, probabilities (and consequently, correlations) arise due to ignorance about all the parameters behind the analyzed events. In contrast, entanglement is a particular type of correlation between space-like separated quantum systems for which there is no counterpart in the classical world. This fact is precisely stated by Bell's theorem<cit.>, which demonstrates the impossibility of reproducing correlations between measurement results performed on entangled quantum systems in terms of local-hidden-variable models, a result whose experimental verification and impact on quantum information science led to the 2022 Nobel Prize in Physics. The characterization and detection of entanglement, as well as their impact on subsequent generation and experimental manipulation, are therefore of paramount importance. Such questions, however, are equally challenging, both theoretically and experimentally. These difficulties are particularly accentuated when we consider entanglement between more than two particles, here called multipartite entanglement. A fundamental problem lies in the exponential scaling of the dimension of the underlying Hilbert space, thus rendering an exhaustive classification difficult. In order to gain insight into multipartite entanglement phenomena, different concepts based on symmetries, graphical representations, matrix-product approximations, etc., have been used to select quantum states with particularly relevant properties within some context (see, e.g., Refs. <cit.>). In this work we are interested in the so-called W states of N-qubit systems. These states are characterized by a coherent superposition of all the qubits involved, with equal probability amplitudes.
They gained prominence in the scientific literature in the context of multipartite entanglement classification<cit.>. As it turns out, such states are intrinsically robust against particle loss and have been shown to be central as a resource in quantum information processing and multiparty quantum communication <cit.>. Furthermore, W states are examples of the so-called Dicke states, which are quantum states that arise naturally in the study of the emission of light by a cloud of atoms via so-called superradiance <cit.>. In the last two decades, the precise control of quantum systems allowed the experimental generation of multipartite entangled states <cit.>. Several schemes have been presented for preparing W states in a variety of physical platforms, including cavity quantum electrodynamics <cit.>, quantum spin chains <cit.>, nuclear magnetic resonance <cit.>, atomic systems <cit.> and trapped ions <cit.>. These schemes are frequently not scalable and/or require complex quantum state witnesses or quantum state tomography for their analysis. Single photons in optical platforms, in contrast, can be generated and manipulated with a high degree of purity, which makes them promising candidates for high-order W state generation. The generation of W states on such platforms, however, is yet challenging since it typically requires complex bulk-optical set-ups<cit.>. In this work, we propose a scalable method for generating and detecting W states in nanophotonic circuits. We experimentally generate an 8-mode W state on an integrated nanophotonic circuit based on cascaded arrays of Y-branch splitters. The circuit is fabricated on a complementary-metal-oxide semiconductor (CMOS) compatible silicon nitride platform. On-demand single photons generated from a InAsP nanowire quantum dot are fibre-coupled onto the photonic chip with the nanophotonic circuit. The output facet of the chip is imaged, generating real and Fourier space images. We then employ the Gerchberg-Saxton phase retrieval algorithm<cit.> to reconstruct the quantum state probability amplitudes and relative phases from the experimentally obtained real and Fourier space images. The experimentally obtained Fourier-space image is then compared with numerical simulations for the ideal case scenario of uniform coherent superposition. We observe a great similarity between these images, with both presenting similar interference patterns. Such a pattern is not presented by incoherent statistical mixtures, which leads us to conclude that the final state is indeed the W state. Compared with previous experiments <cit.>, our approach stands out for the on-demand nature of the quantum state generation, large operational bandwidth offered by the Y-splitter-based architecture, and the better scalability and smaller circuit size offered by our state analysis protocol. We prepare the W state with an on-demand single photon source, as shown in the experimental setup Fig. <ref>(a). The on-demand single photon source consists of an InAsP quantum dot (QD) embedded in a wurtzite InP nanowire <cit.>, and further details on the corresponding nanowire growth process can be found in the Supporting Information. The nanowire quantum devices were maintained at 4.2 K in an attocube closed-cycle cryogenic system. The single photon source was excited using 780 nm pulsed laser beam with a repetition rate of 320 MHz and an excitation power of 100 nW. A linear polarizer, a set of quarter-wave plates, and a half-wave plate are used to purify the laser's polarization. 
From among hundreds of tested ultra-bright single-photon sources, we selected the optimal emitter in terms of emission wavelength, brightness and emission linewidth. A long-pass filter is used to reject laser light from the single photons emitted by the nanowire quantum dot. A cascade of adjustable long-pass, short-pass and band-pass filters, placed on a rotating stage, is then used to select a single transition from the QD's S-shell. After coupling the single photons to an optical fiber, the photons can either be sent to a Hanbury Brown and Twiss (HBT) <cit.> setup to measure the second-order correlation function, or coupled, using a tapered optical fiber, to the photonic chip for W state generation. The output of the photonic chip is imaged using a single-photon-sensitive qCMOS camera by Hamamatsu. Real- and Fourier-space intensity images of the chip output can be projected onto the camera using a combination of a 100X objective and an optical lens. Fig. <ref>(b) and (c) show a scanning electron microscope image of the W state device and a magnified image of a single Y-splitter. Fig. <ref>(a) shows the emission spectrum of the nanowire quantum dot. The S-shell emission shows three types of particle complexes, a neutral exciton, a biexciton and a trion, which were all verified using power and polarization series measurements. We used the brightest trion line at a wavelength of 881.7 nm, generating single photons with emission rates in the MHz range, measured using a superconducting single photon detector<cit.>. To characterise the purity of the emitted single photons, we conducted a zero-delay second-order correlation measurement g^(2)(0) using a fiber-based HBT setup equipped with two superconducting single photon detectors. The system efficiencies of the two detectors are 80% and 66%, with timing jitters of 18 picoseconds and 11 picoseconds, respectively, and dark counts of less than 10 Hz. At zero delay, the measured value of g^(2)(0) is 0.04, well below the multi-photon emission level, allowing us to operate in the single-photon limit of the Hilbert space; the results are shown in Fig. <ref>(b). The non-zero value of g^(2)(0) is due to re-excitation of the quantum dot within the lifetime of the photon emission, and possible contributions from other states within the filtered emission range. To determine the mode profile of the single photons emitted from the nanowire, 3D finite-difference time-domain simulations were performed; the results are shown in Fig. <ref>(c). The QD is simulated as a dipole located 1.5 μm from the base of the nanowire, with the dipole orientation perpendicular to the growth direction. The waveguiding, provided by the core-shell design and the tapering of the nanowire, forms a circularly symmetric mode profile that enhances coupling to single-mode fibers. To verify the beam profile experimentally, and to enhance the coupling efficiency of the single photons to the optical fiber, the emission profile of the single photons was measured as shown in Fig. <ref>(d). The results show excellent agreement with the numerical simulations, with a Gaussian-like emission. The mode profile of the QD emission was matched to a 780HP single-mode fiber using a Schäfter+Kirchhoff fiber coupler with an adjustable aspherical lens to achieve a high coupling efficiency.
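Returning to the HBT measurement quoted above, the snippet below sketches one common way to estimate g^(2)(0) for a pulsed source, namely as the ratio between the zero-delay coincidence peak area and the average side-peak area. The histogram is synthetic and every number in it is an assumption of the sketch; real data would come from the recorded time tags of the two superconducting detectors.

```python
# Hypothetical post-processing sketch for a pulsed HBT measurement: estimate g2(0)
# from a coincidence histogram as (zero-delay peak area) / (mean side-peak area).
import numpy as np

rep_period_ns = 1.0 / 0.320          # 320 MHz repetition rate -> 3.125 ns pulse spacing
bin_ns = 0.05
delays = np.arange(-10 * rep_period_ns, 10 * rep_period_ns, bin_ns)

def peak(center, area, width=0.3):
    """Gaussian coincidence peak of given total area (counts) centred at 'center' ns."""
    return area * np.exp(-0.5 * ((delays - center) / width) ** 2) / (width * np.sqrt(2 * np.pi))

side_centers = [k * rep_period_ns for k in range(-9, 10) if k != 0]
hist = sum(peak(c, 1000.0) for c in side_centers) + peak(0.0, 40.0)   # suppressed central peak
hist = np.random.default_rng(3).poisson(hist * bin_ns)                # counting (shot) noise

def peak_area(center, half_window=rep_period_ns / 2):
    sel = np.abs(delays - center) < half_window
    return hist[sel].sum()

g2_zero = peak_area(0.0) / np.mean([peak_area(c) for c in side_centers])
print(f"g2(0) ~ {g2_zero:.3f}")      # about 0.04 for the synthetic histogram above
```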
The fiber-coupled single photons are then injected to the W state photonic chip using a tapered optical fiber with a working distance of 13 μm and spot-size of 3 μm to maximize coupling to the transverse electric (TE) field mode of the photonic waveguides. The waveguides are made of silicon nitride deposited by Low Pressure Chemical Vapor Deposition (LPCVD) technique and then lithographically patterned to a width of 500 nm and a height of 250 nm, ensuring single mode operation at the nanowire quantum dot emission wavelength. The waveguides are coated with PMMA to ensure symmetric mode confinement. More details about the photonic chip fabrication can be found in the Supporting Information. The optical W state based on channel waveguides is characterized by a coherent distribution of a single photon over N waveguides. The state is defined by |W_N⟩ = 1/√(N)∑_n=1^N exp(iϕ_n)a_n^†|0⟩, where ϕ_n is the arbitrary phase and a_n^† is the Bosonic creation operator at each channel. In our experiment, the single photon W state generation occurs through coherent evolution of single photons through cascaded arrays of 3 sets of Y-branch 50-50 power splitters. In our circuit, every Y-branching was made to be precisely transversely symmetrical, providing an identical pathlength from input to output, regardless of the path. The emission lifetime of our single photon source is in the order 1 ns which is much longer than the pathlength corresponding to the physical dimensions of the chip. This ensures the presence of only a single photon in the chip at a time. In comparison to previously demonstrated methods employing directional couplers<cit.> and evanescent coupling in waveguide arrays<cit.>, the Y-splitters based protocol is easier to design and scalable with larger operating bandwidth, limited by the single mode cut-off of the photonic waveguide. The single photon state, a_1^†|0⟩ (where a_1^† is the creation operator of the input waveguide), launched into the input waveguide is initially localized. Its state after evolution through the first y-splitter can be expressed as a 2-order W state <cit.> |W_2⟩=1/√(2)(b_1^†+b_2^†)|0⟩, where b_1^† and b_2^† are the creation operators at the outputs of the first Y branch. Similarly, the state after the second set of 2 Y-splitters reads |W_4⟩=1/2(c_1^†+c_2^†+c_3^†+c_4^†)|0⟩, and, finally, the final output state after the third set of 4 Y-splitters is the 8-order W state |W_8⟩=1/√(8)(d_1^†+d_2^†+d_3^†+d_4^†+d_5^†+d_6^†+d_7^†+d_8^†)|0⟩. Here, c^† and d^† are the creation operators for the second and third set of Y-branch outputs respectively. Therefore, as ideally a single photon is sent to the circuit, the final state produced will be an optical 8-order W state given by the above equation with equal relative phases [cf. Eq. (<ref>)]. After coupling the single photons to the chip input, the output facet of the chip was imaged using a qCMOS Hamamatsu camera. The Fourier and real space images can be obtained by either adding or removing an additional optical lens before the camera. In the experiment, we use the Gerchberg-Saxton algorithm which was devised by crystallographers Ralph Gerchberg and Owen Saxton to deduce the phase distribution of electron beams in a transverse plane from the intensity distributions in two planes <cit.>. A process flow diagram of the algorithm is shown in Fig. <ref>. The output phase distribution of our circuit can be reconstructed using this iterative phase retrieval process. 
The algorithm takes the 2D matrix corresponding to the real space amplitudes u_0 as the input and each point in the matrix is assigned an arbitrary phase value ψ. A Fourier transform operation on this matrix (with elements u_0 exp(iψ)) gives a Fourier space matrix Uexp(iΨ) which can in turn give a real space matrix with the application of inverse Fourier transform on it. Several iterations of this scheme is performed and each iteration yields a matrix with a set of either real space or Fourier space amplitudes and phases. After each transform operation, the amplitudes in the output matrix (u, U) are replaced by the amplitudes from the experimentally obtained real and Fourier images (u_0, U_0). These serve as the constraints in the algorithm. The phase values (ψ in real space and Ψ in Fourier space) are left unchanged and can evolve freely. The iterations continue until we get a convergence yielding real and Fourier space matrix amplitudes (u, U) very close to the experimentally observed ones (u_0, U_0). The phase that evolved freely, now converges to certain values which is equal to the actual relative phases of the real space image. Fig. <ref>(a) shows the real space amplitude (square-root of intensity) image of the output facet obtained using an exposure time of 10 minutes. Fig. <ref>(b) shows the Fourier space image obtained using the same experimental setup. There are 8 modes in the real space image corresponding to the 8-waveguides from the cascaded Y-splitters, and single photons are coherently distributed among them. This fact is highlighted in the Fourier space image by a distinct diffraction pattern produced by the individual photons (g^(2)(0)=0.04). The interference is the consequence of coherent single-photon superposition across the photonic chip without any knowledge of the which-path information. The real and Fourier space images in the experiment each had 4 million pixels, the Gerchberg-Saxton phase retrieval algorithm was run for 5000 iterations before convergence. The reconstructed amplitude and phase distributions of the W state from the experiment are shown in Fig. <ref>(c) and (d), respectively. The algorithm was able to successfully reconstruct the 8-mode real space image by directly taking the inverse Fourier transform of the reconstructed Fourier-space image. The degree of similarity of the reconstructed real space image with the experimentally obtained real space image is a measure of the accuracy of the derived phase values. Fig. <ref>(e) and (f) show extracted amplitudes and phases of the experimentally measured on-demand W state generated by our photonic chip. We could observe good uniformity in the output probability amplitudes with a standard deviation of 0.085 around the mean value of 0.343 and a standard deviation of 0.086 around the ideal value of 0.354. The obtained phase values are also close to the ideal value of 0. The finite deviation from the ideal values in both cases is mainly due to the slight imperfections in the fabricated nanostructures, and background noise scattered in the cladding during image acquisition. Traditionally, W states have been identified through state tomography and entanglement witnesses. Proper implementation of such techniques allows rigorous verification of the presence of multipartite entanglement or even reconstructing the obtained state. However, the complexity of implementing these techniques prohibits their application for quantum states involving a high number of qubits. 
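A minimal sketch of the Gerchberg-Saxton loop just described is given below. It assumes the measured real-space and Fourier-space amplitude images are already available as equal-sized arrays (random placeholders are used here) and omits the centering and padding needed for real camera data; the measured amplitudes act as the constraints in both planes while the phases evolve freely.

```python
import numpy as np

def gerchberg_saxton(u0, U0, n_iter=5000, seed=0):
    """Minimal Gerchberg-Saxton phase retrieval.
    u0 : measured real-space amplitude image (square root of intensity)
    U0 : measured Fourier-space amplitude image, same shape (assumed centred)
    Returns the converged real-space and Fourier-space phase maps."""
    rng = np.random.default_rng(seed)
    psi = rng.uniform(0, 2 * np.pi, u0.shape)        # arbitrary starting phase
    field = u0 * np.exp(1j * psi)
    for _ in range(n_iter):
        F = np.fft.fftshift(np.fft.fft2(field))
        F = U0 * np.exp(1j * np.angle(F))             # keep phase, impose measured |U0|
        field = np.fft.ifft2(np.fft.ifftshift(F))
        field = u0 * np.exp(1j * np.angle(field))     # keep phase, impose measured |u0|
    return np.angle(field), np.angle(F)

# Placeholder arrays standing in for the measured camera images.
u0 = np.random.rand(64, 64)
U0 = np.abs(np.fft.fftshift(np.fft.fft2(u0)))
phase_real, phase_fourier = gerchberg_saxton(u0, U0, n_iter=200)
```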
So, it is desirable to find a scalable approach for higher-order W state verification. Here we propose a simple method based on image comparison techniques combined with reasonable assumptions about the experimental setup. We start with some optical 8-order W state [Eq. (5)]: 8^-1/2∑_n=1^8 exp(iϕ_n)a_n^† |0⟩. The output image in the real space is composed of eight spots, each one corresponding to a term exp(iϕ_n)a_n^† |0⟩. The Fourier space image, in turn, is obtained via FT(N^-1/2∑_n exp(iϕ_n)a_n^†|0⟩), where FT(|ψ⟩) stands for the Fourier transform of the image generated by |ψ⟩. In the case of an ideal W state, 8^-1/2∑_n=1^8 a_n^† |0⟩, the real space image corresponds to 8 identical Gaussian spots and the Fourier space image resembles that of n slits experiment with a characteristic interference pattern arising from the coherent superposition between different distinguishable states. In contrast to this, if the state undergoes complete decoherence its real space image remains the same but the interference pattern in the Fourier space image disappears since its Fourier transform now reads N^-1/2∑_n FT(exp(iϕ_n)a_n^†|0⟩). In Fig.<ref> we compare the ideal W state images, its fully decohered version (mixed state) and the experimentally obtained image. Visually, we can clearly see that the result obtained experimentally resembles the simulation for the ideal W state, being in strong contrast with the mixed state. This is indeed confirmed since the images are more than 90% similar when compared via the Structural Similarity Index Measure (SSIM) <cit.>. Furthermore, by computing the correlation between these images we can estimate the overlap between the ideal W state and the experimentally produced state, from which we get 83.1%. We now employ an entanglement witness of the form 𝒲_αβγ=α𝒫_0+β𝒫_1+γ𝒫_2-|W_8⟩⟨ W_8| (𝒫_i is the projection onto the subspace with i excitations), under different assumptions. By assuming that the produced state is a convex mixture of the ideal W state and its fully decohered version, i.e., p|W_8⟩⟨ W_8|+(1-p)𝒫_1/8, we get p>80%. Employing the methods described in Ref. <cit.> and briefly reviewed in Supporting Information we numerically found 𝒲_αβγ witnessing the entanglement of the generated state. Moreover, the value of the second order correlation function g^(2)(0) at zero delay gives an upper bound on the probability of having more than one photon on the chip. It was experimentally measured [Fig. 2 (b)] to be 0.04. Considering multiple photon generation as another possible source of noise in generated W state, the final state would have the form (1-q)[p|W_8⟩⟨ W_8|+(1-p)𝒫_1/8]+q𝒫_2/28 if we neglect contributions from subspaces with more than two excitations. These states, with the value of q upper bounded by 0.04, can also have their entanglement witnessed by 𝒲_αβγ. We have demonstrated a scalable on-demand scheme for high-order on-chip single photon W state generation. The on-demand nature of our protocol, using nanowire QDs and silicon nitride hybrid system, facilitates the scope of its integrability into other hybrid quantum systems<cit.>, with potential bit-rates in the GHz range, limited only by the lifetime of the quantum emitter. Our circuit based on Y-splitters use no resonant or interference effects, thus delivering large operating bandwidth. The ease of fabrication and the Fourier-space imaging based verification of superposition and coherence, makes our approach scalable to higher order W states. 
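For completeness, the image-comparison step underlying the quoted similarity and overlap figures can be sketched with standard tools. The code below assumes the simulated ideal-W-state image, the simulated mixed-state image, and the measured Fourier-space image are available as arrays of the same shape (placeholders here); it uses scikit-image's SSIM and a normalized cross-correlation as a simple overlap proxy. The preprocessing behind the quoted 90% and 83.1% values is not reproduced, so the outputs of this sketch are not expected to match them.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def normalized_correlation(a, b):
    """Pearson-type normalized cross-correlation, used as a simple overlap estimate."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

# Placeholder arrays standing in for the simulated and measured Fourier-space images.
ideal_w  = np.random.rand(256, 256)   # simulated |W_8> interference pattern
mixed    = np.random.rand(256, 256)   # simulated fully decohered (mixed-state) pattern
measured = np.random.rand(256, 256)   # measured camera image

for name, img in [("ideal W state", ideal_w), ("mixed state", mixed)]:
    s = ssim(measured, img, data_range=measured.max() - measured.min())
    c = normalized_correlation(measured, img)
    print(f"{name}: SSIM = {s:.3f}, correlation = {c:.3f}")
```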
Through experimental measurements and theoretical modeling, we showed strong evidence to verify that our output state is a multipartite coherent superposition, as opposed to a mixed state in which interferences between different channels vanish. Our findings pave the way for future developments in image processing methods and Fourier space analysis for characterizing multi-partite entangled quantum systems. Multipartite entanglement provides a great deal of room for phenomena that are not available in systems with just two subsystems, which makes it an active field of research, both from fundamental science and applications point of view<cit.>. Our results introduce a quantifiable visual approach to experimentally validate multi-partite entanglement, which can be of paramount importance for the experimental advancement of multi-particle quantum-information processing protocols. A. W. E acknowledges support from the Knut and Alice Wallenberg (KAW) Foundation through the Wallenberg Centre for Quantum Technology (WACQT), Swedish Research Council (VR) Starting, and Vinnova quantum kick-start project 2021. S. S. acknowledges support from VR Starting. V. Z. acknowledges support from the KAW and VR. S. S. acknowledges support from the Swedish Research Council (Starting Grant No. 2019-04821) and from the Göran Gustafsson Foundation. L. S. acknowledges support from the House of Young Talents of the University of Siegen. O. G. acknowledges support from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation, project numbers 447948357 and 440958198), the Sino-German Center for Research Promotion (Project M-0294), the ERC (Consolidator Grant 683107/TempoQ), and the German Ministry of Education and Research (Project QuKuK, BMBF Grant No. 16KIS1618K). The authors acknowledge support from Quantum Design for using AFSEM, a correlative AFM and SEM system, to characterize the fabrication process of our waveguides. 99 Dalacu_APL2011 D. Dalacu, K. Mnaymneh, X. Wu, J. Lapointe, G. C. Aers, P. J. Poole, R. L. Williams, Selective-area vapor-liquid-sold growth of tunable InAsP quantum dots in nanowires, Appl. Phys. Lett. 98, 251101 (2011). Laferriere_SR2022 P. Laferrière, E. Yeung, I. Miron, D. B. Northeast, S. Haffouz, J. Lapointe, M. Korkusinski, P. J. Poole, R. L. Williams, D. Dalacu, Unity yield of deterministically positioned quantum dot single photon sources, Sci. Rep. 12, 6376 (2022). pearl2009causality Judea Pearl, Causality, Cambridge university press (2009). bell1964einstein John S Bell, On the Einstein-Podolsky-Rosen paradox, Physics Physique Fizika, 1(3), 195 (1964). dicke1954coherence Robert H Dicke, Coherence in spontaneous radiation processes, Physical Review, 93(1), 99 (1954). agrawal2006perfect Pankaj Agrawal, Arun Pati, Perfect teleportation and superdense coding with W states, Physical Review A, 74(6), 062320 (2006). horodecki2009 Ryszard Horodecki, Paweł Horodecki, Michał Horodecki, Karol Horodecki, Quantum entanglement, Reviews of Modern Physics, 81(2), 865 (2009). bengtsson2017 Ingemar Bengtsson, Karol Życzkowski, Geometry of quantum states: an introduction to quantum entanglement, Cambridge university press (2017). guhne2009 Otfried Gühne, Géza Tóth, Entanglement detection, Physics Reports, 474(1-6), 1–75 (2009). dur2000 Wolfgang Dür, Guifre Vidal, J Ignacio Cirac, Three qubits can be entangled in two inequivalent ways, Physical Review A, 62(6), 062314 (2000). 
grafe2014 Markus Gräfe, René Heilmann, Armando Perez-Leija, Robert Keil, Felix Dreisow, Matthias Heinrich, Hector Moya-Cessa, Stefan Nolte, Demetrios N Christodoulides, Alexander Szameit, On-chip generation of high-order single-photon W-states, Nature Photonics, 8(10), 791–795 (2014). cabello2002 Adán Cabello, Bell’s theorem with and without inequalities for the three-qubit Greenberger-Horne-Zeilinger and W states, Physical Review A, 65(3), 032108 (2002). barnea2015 Tomer Jack Barnea, Gilles Pütz, Jonatan Bohr Brask, Nicolas Brunner, Nicolas Gisin, Yeong-Cherng Liang, Nonlocality of W and Dicke states subject to losses, Physical Review A, 91(3), 032108 (2015). sohbi2015 Adel Sohbi, Isabelle Zaquine, Eleni Diamanti, Damian Markham, Decoherence effects on the nonlocality of symmetric states, Physical Review A, 91(2), 022101 (2015). murao1999 M Murao, D Jonathan, MB Plenio, V Vedral, Quantum telecloning and multiparticle entanglement, Physical Review A, 59(1), 156 (1999). shi2002 Bao-Sen Shi, Akihisa Tomita, Teleportation of an unknown state by W state, Physics Letters A, 296(4-5), 161–164 (2002). joo2003 Jaewoo Joo, Young-Jai Park, Sangchul Oh, Jaewan Kim, Quantum teleportation via a W state, New Journal of Physics, 5(1), 136 (2003). fang2019 B Fang, M Menotti, M Liscidini, JE Sipe, VO Lorenz, Three-photon discrete-energy-entangled w state in an optical fiber, Physical review letters, 123(7), 070508 (2019). erhard2020 Manuel Erhard, Mario Krenn, Anton Zeilinger, Advances in high-dimensional quantum entanglement, Nature Reviews Physics, 2(7), 365–381 (2020). hanbury1997correlation R Hanbury Brown, RQ Twiss, Correlation between photons in two coherent beams of light, SPIE MILESTONE SERIES MS, 139, 93–95 (1997). hanbury1993test R Hanbury Brown, RQ Twiss, A test of a new type of stellar interferometer on Sirius (from Nature 1956), SPIE MILESTONE SERIES MS, 73, 335–335 (1993). dalacu2012ultraclean Dan Dalacu, Khaled Mnaymneh, Jean Lapointe, Xiaohua Wu, Philip J Poole, Gabriele Bulgarini, Val Zwiller, Michael E Reimer, Ultraclean emission from InAsP quantum dots in defect-free wurtzite InP nanowires, Nano letters, 12(11), 5919–5923 (2012). gerchberg1994practical RW Gerchberg, WO Saxton, A practical algorithm for the determination of phase from image and diffraction plane pictures, SPIE milestone series MS, 94, 646–646 (1994). feng2019chip Tianfeng Feng, Xiaoqian Zhang, Yuling Tian, Qin Feng, On-chip multiphoton entangled states by path identity, International Journal of Theoretical Physics, 58(11), 3726–3733 (2019). perez2013generating Armando Perez-Leija, JC Hernandez-Herrejon, Hector Moya-Cessa, Alexander Szameit, Demetrios N Christodoulides, Generating photon-encoded W states in multiport waveguide-array systems, Physical Review A, 87(1), 013842 (2013). swain2020single Manoranjan Swain, Amit Rai, M Karthick Selvan, Prasanta K Panigrahi, Single photon generation and non-locality of perfect W-state, Journal of Optics, 22(7), 075202 (2020). menotti2016generation M Menotti, L Maccone, JE Sipe, M Liscidini, Generation of energy-entangled W states via parametric fluorescence in integrated devices, Physical Review A, 94(1), 013845 (2016). ivanova2016using AE Ivanova, SA Chivilikhin, AV Gleim, Using of optical splitters in quantum random number generators, based on fluctuations of vacuum, In Journal of Physics: Conference Series, 735(1), IOP Publishing (2016). 
guo2002scheme Guang-Can Guo, Yong-Sheng Zhang, Scheme for preparation of the W state via cavity quantum electrodynamics, Physical Review A, 65(5), 054302 (2002). zang2016 Xue-Ping Zang, Ming Yang, Fatih Ozaydin, Wei Song, Zhuo-Liang Cao, Deterministic generation of large scale atomic W states, Optics express, 24(11), 12293–12300 (2016). wang2001entanglement Xiaoguang Wang, Entanglement in the quantum Heisenberg XY model, Physical Review A, 64(1), 012313 (2001). vandersypen2005nmr Lieven MK Vandersypen, Isaac L Chuang, NMR techniques for quantum control and computation, Reviews of modern physics, 76(4), 1037 (2005). dogra2015 Shruti Dogra, Kavita Dorai, Experimental construction of generic three-qubit states and their reconstruction from two-party reduced states on an NMR quantum information processor, Physical Review A, 91(2), 022312 (2015). das2015 Debmalya Das, Shruti Dogra, Kavita Dorai, Experimental construction of a W superposition state and its equivalence to the Greenberger-Horne-Zeilinger state under local filtration, Physical Review A, 92(2), 022307 (2015). haas2014entangled Florian Haas, Jürgen Volz, Roger Gehr, Jakob Reichel, Jérôme Estève, Entangled states of more than 40 atoms in an optical fiber cavity, Science, 344(6180), 180–183 (2014). hosten2016measurement Onur Hosten, Nils J Engelsen, Rajiv Krishnakumar, Mark A Kasevich, Measurement noise 100 times lower than the quantum-projection limit using entangled atoms, Nature, 529(7587), 505–508 (2016). mcconnell2015entanglement Robert McConnell, Hao Zhang, Jiazhong Hu, Senka Ćuk, Vladan Vuletić, Entanglement with negative Wigner function of almost 3,000 atoms heralded by one photon, Nature, 519(7544), 439–442 (2015). frowis2017experimental Florian Frwis, Peter C Strassmann, Alexey Tiranov, Corentin Gut, Jonathan Lavoie, Nicolas Brunner, Félix Bussières, Mikael Afzelius, Nicolas Gisin, Experimental certification of millions of genuinely entangled atoms in a solid, Nature communications, 8(1), 1–6 (2017). pu2018experimental Yunfei Pu, Yukai Wu, Nan Jiang, Wei Chang, Chang Li, Sheng Zhang, Luming Duan, Experimental entanglement of 25 individually accessible atomic quantum interfaces, Science advances, 4(4), eaar3931 (2018). li2021multipartite Hang Li, Jian-Peng Dou, Xiao-Ling Pang, Chao-Ni Zhang, Zeng-Quan Yan, Tian-Huai Yang, Jun Gao, Jia-Ming Li, Xian-Min Jin, Multipartite entanglement of billions of motional atoms heralded by single photon, npj Quantum Information, 7(1), 1–9 (2021). roos2004 Christian F Roos, Mark Riebe, Hartmut Haffner, Wolfgang Hansel, Jan Benhelm, Gavin PT Lancaster, Christoph Becher, Ferdinand Schmidt-Kaler, Rainer Blatt, Control and measurement of three-qubit entangled states, Science, 304(5676), 1478–1480 (2004). haffner2005scalable Hartmut Häffner, Wolfgang Hänsel, CF Roos, Jan Benhelm, D Chek-al-Kar, M Chwalla, T Körber, UD Rapol, M Riebe, PO Schmidt, Scalable multiparticle entanglement of trapped ions, Nature, 438(7068), 643–646 (2005). papp2009characterization Scott B Papp, Kyung Soo Choi, Hui Deng, Pavel Lougovski, SJ Van Enk, HJ Kimble, Characterization of multipartite entanglement for one photon shared among four optical modes, Science, 324(5928), 764–768 (2009). choi2010 KS Choi, A Goban, SB Papp, SJ Van Enk, HJ Kimble, Entanglement of spin waves among four quantum memories, Nature, 468(7322), 412–416 (2010). elshaari2020hybrid Ali W Elshaari, Wolfram Pernice, Kartik Srinivasan, Oliver Benson, Val Zwiller, Hybrid integrated quantum photonic circuits, Nature Photonics, 14(5), 285–298 (2020). 
elshaari2017chip Ali W Elshaari, Iman Esmaeil Zadeh, Andreas Fognini, Michael E Reimer, Dan Dalacu, Philip J Poole, Val Zwiller, Klaus D Jöns, On-chip single photon filtering and multiplexing in hybrid quantum photonic circuits, Nature communications, 8(1), 1–8 (2017). zadeh2016deterministic Iman Esmaeil Zadeh, Ali W Elshaari, Klaus D Jons, Andreas Fognini, Dan Dalacu, Philip J Poole, Michael E Reimer, Val Zwiller, Deterministic integration of single photon sources in silicon based photonic circuits, Nano Letters, 16(4), 2289–2294 (2016). elshaari2018strain Ali W Elshaari, Efe Buyukozer, Iman Esmaeil Zadeh, Thomas Lettner, Peng Zhao, Eva Scholl, Samuel Gyger, Michael E Reimer, Dan Dalacu, Philip J Poole, Strain-tunable quantum integrated photonics, Nano letters, 18(12), 7969–7976 (2018). gourgues2019controlled Ronan Gourgues, Iman Esmaeil Zadeh, Ali W Elshaari, Gabriele Bulgarini, Johannes WN Los, Julien Zichi, Dan Dalacu, Philip J Poole, Sander N Dorenbos, Val Zwiller, Controlled integration of selected detectors and emitters in photonic integrated circuits, Optics express, 27(3), 3710–3716 (2019). pan2000experimental Jian-Wei Pan, Dik Bouwmeester, Matthew Daniell, Harald Weinfurter, Anton Zeilinger, Experimental test of quantum nonlocality in three-photon Greenberger–Horne–Zeilinger entanglement, Nature, 403(6769), 515–519 (2000). lombardi2002teleportation Egilberto Lombardi, Fabio Sciarrino, Sandu Popescu, Francesco De Martini, Teleportation of a vacuum–one-photon qubit, Physical review letters, 88(7), 070402 (2002). qiao2021multistage Lu-Feng Qiao, Zhi-Qiang Jiao, Xiao-Yun Xu, Jun Gao, Zhe-Yong Zhang, Ruo-Jing Ren, Wen-Hao Zhou, Xiao-Wei Wang, Xian-Min Jin, Multistage quantum swapping of vacuum-one-photon entanglement, Physical Review A, 104(2), 022415 (2021). das2022optimal Tamoghna Das, Marcin Karczewski, Antonio Mandarino, Marcin Markiewicz, Marek .Zukowski, Optimal Interferometry for Bell Nonclassicality Induced by a Vacuum–One-Photon Qubit, Physical Review Applied, 18(3), 034074 (2022). chitambar2014 Eric Chitambar, Debbie Leung, Laura Mančinska, Maris Ozols, Andreas Winter, Everything you always wanted to know about LOCC (but were afraid to ask), Communications in Mathematical Physics, 328(1), 303–326 (2014). nielsen1999 Michael A Nielsen, Conditions for a class of entanglement transformations, Physical Review Letters, 83(2), 436 (1999). vicente2013 Julio I de Vicente, Cornelia Spee, Barbara Kraus, Maximally entangled set of multipartite quantum states, Physical review letters, 111(11), 110502 (2013). mermin1990 N David Mermin, Extreme quantum entanglement in a superposition of macroscopically distinct states, Physical Review Letters, 65(15), 1838 (1990). greenberger1990 Daniel M Greenberger, Michael A Horne, Abner Shimony, Anton Zeilinger, Bell’s theorem without inequalities, American Journal of Physics, 58(12), 1131–1143 (1990). wei2003 Tzu-Chieh Wei, Paul M Goldbart, Geometric measure of entanglement and applications to bipartite and multipartite quantum states, Physical Review A, 68(4), 042307 (2003). steinberg2022 Jonathan Steinberg, Otfried Guhne, Maximizing the geometric measure of entanglement, arXiv preprint arXiv:2210.13475, [Online; accessed 24-Oct -2022]. wang2004image Zhou Wang, Alan C Bovik, Hamid R Sheikh, Eero P Simoncelli, Image quality assessment: from error visibility to structural similarity, IEEE transactions on image processing, 13(4), 600–612 (2004). 
chang2023nanowire Jin Chang, Jun Gao, Iman Esmaeil Zadeh, Ali W Elshaari, Val Zwiller, Nanowire-based integrated photonics for quantum information and quantum sensing, Nanophotonics, 12(3), 339–358 (2023). esmaeil2021superconducting Iman Esmaeil Zadeh, J Chang, Johannes WN Los, Samuel Gyger, Ali W Elshaari, Sander N Dorenbos, Val Zwiller, Superconducting nanowire single-photon detectors: A perspective on evolution, state-of-the-art, future developments, and applications, Applied Physics Letters, 118(19), 190502 (2021). esmaeil2020efficient Iman Esmaeil Zadeh, Johannes WN Los, Ronan BM Gourgues, Jin Chang, Ali W Elshaari, Julien Romain Zichi, Yuri J van Staaden, Jeroen PE Swens, Nima Kalhor, Antonio Guardiani, others, Efficient single-photon detection with 7.7 ps time resolution for photon-correlation measurements, Acs Photonics, 7(7), 1780–1787 (2020). moody20222022 Galan Moody, Volker J Sorger, Daniel J Blumenthal, Paul W Juodawlkis, William Loh, Cheryl Sorace-Agaskar, Alex E Jones, Krishna C Balram, Jonathan CF Matthews, Anthony Laing, others, 2022 Roadmap on integrated quantum photonics, Journal of Physics: Photonics, 4(1), 012501 (2022). §.§ Supplementary Information §.§ Quantum entanglement of W-states Here, we briefly review quantum entanglement theory with a focus on W states. We restrict ourselves to two-level quantum systems (qubits). As usual, we fix a “computational basis" defined by two orthogonal states: |0⟩ and |1⟩. To start with, we consider the simplest bipartite quantum system presenting entanglement, a pair of qubits A and B. A given pure state |ψ⟩_ AB is entangled if it cannot be written as a product, i.e., |ψ⟩_ AB≠ |ϕ⟩_ A⊗ |τ⟩_ B for all local states |ϕ⟩_ A and |τ⟩_ B. For two quantum states, one can ask more generally, whether one state can be transformed into the other via local operations and classical communication (LOCC) <cit.>. For pure bipartite states this can be solved <cit.>, but in the general case this is a hard and open question <cit.>. A slight, but significant generalization of LOCC is that of stochastic local operations and classical communication (SLOCC). These are LOCC transformation on a single copy of a state, but without imposing that the target state has to be achieved with certainty. In that case, two states are equivalent if each one of them can be converted into the other with a non-zero probability and vice versa. For two qubits there is only one equivalence class represented by the Bell state: |Φ^+⟩=1/√(2)(|00⟩+|11⟩). Any entangled state can be obtained via LOCC from the Bell state, and conversely, any entangled state can be transformed into the Bell state with nonzero probability via SLOCC. This is one of the many facts that justify the designation of “maximally entangled" for the Bell state. When considering a system of three or more qubits, the situation becomes much more complex. For three qubits A, B and C, for example, a pure state |ψ⟩_ ABC may be written as a product in two different ways: total separability, when the state is written as a product of three local states, |ψ⟩_ ABC=|α⟩_ A⊗ |β⟩_ B⊗ |γ⟩_ C; and biseparability if |ψ⟩_ ABC=|ϕ⟩_ A⊗ |τ⟩_ BC, |ψ⟩_ ABC=|ϕ⟩_ AB⊗ |τ⟩_ C or |ψ⟩_ ABC=|ϕ⟩_ B⊗ |τ⟩_ CA. If |ψ⟩_ ABC is neither fully separable nor biseparable then it is genuine multipartite entangled <cit.>. Furthermore, pure genuine multipartite entangled states can be entangled in two inequivalent ways <cit.>, i.e., there exist two classes of states which cannot be transformed into another by SLOCC, in contrast to two qubits. 
The representatives of these entanglement classes are the Greenberger-Horne-Zeilinger (GHZ) state, |GHZ⟩=1/√(2)(|000⟩+|111⟩), and the W state |W⟩=1/√(3)(|100⟩+|010⟩+|001⟩). The GHZ and W states are of central importance in quantum information science <cit.>. Both can lead to violations of Bell inequalities <cit.>, with the GHZ state violating the famous Mermin inequality <cit.> maximally and leading to the GHZ argument <cit.>. Contrary to that, the entanglement in the W state is robust against particle loss and the state is maximally entangled according to the geometric measure of entanglement <cit.>. Both states can be generalized to systems with many qubits. However, for systems with more than three qubits, there are infinitely many equivalence classes via SLOCC, which makes characterization much more complex. The generalization of Eq. (<ref>) for N qubits reads |W_N⟩=1/√(N)(|100… 0⟩+|010… 0⟩+… +|000… 1⟩). This state presents a variety of properties that make it unique in the set of pure states of many qubits system. The first of these properties is that, although it generally does not lead to the maximum violation of the better-known Bell inequalities in contrast to the GHZ state, the W state is much more robust against particle loss <cit.>, making W state a good candidate to encode quantum information. In fact, the marginal states of GHZ (<ref>) are separable while the W state (<ref>) is the state with maximum possible bipartite entanglement in the reduced two-qubits state. Last but not least, W states are high-dimensional quantum states that exhibit a high degree of entanglement and whose experimental generation can be implemented robustly and much less demanding than other quantum states (e.g., GHZ states). So, on-demand generation of W states is a valuable tool for quantum technologies since such states are highly entangled and quite robust against harmful effects of the surrounding environment. §.§ Photonic circuit fabrication To design the waveguide, ellipsometry measurements to characterise the height of the silicon nitride and its refractive index were performed. The simulated mode profiles based on the experimental measurements are shown in Fig. <ref>(a), for the transverse electric TE (a) and transverse magnetic TM (b) modes. The TM mode is weakly localized, with an effective index of 1.588, compared to the TE mode which has an effective index of 1.643. The simulations are performed at the emission wavelength of the trion line of the QD. The substrate consists of a 500 μm thick silicon wafer capped with 3.3 μm of thermal oxide, and 250 nm LPCVD silicon nitride prepared by Rogue Valley Microdevices. An adhesion promoter, AR 300-80 by ALLRESIST, is spin-coated on the substrate at 3000 RPM. The substrate is then soft-baked for 90 seconds at 90 ^∘C. Negative resist m-aN 2403 by Microresist technology is spin-coated at 3000 RPM and baked at 90 ^∘C for 1 minute, yielding an approximate resist thickness of 300 nm. The electron beam lithography dose assignment in the CAD is proximity-corrected using commercial software package BEAMER. The cad is exposed using 50 keV electron-beam lithography Voyager system developed by Raith nanofabrication. The waveguide width was designed to be 600 nm. After exposure, the chip is developed in ma-D 525, an aqueous-alkaline based developer, supplied by Microresist technology, then the chip is rinsed in DI water. The waveguides are etched in PlasmaPro 100 Cobra ICP etching System using SF6 based chemistry. 
After etching, 950K A8 PMMA resist was spin-coated at 1000 RPM and baked at 150 ^∘C for 5 minutes. The refractive index of PMMA at 885 nm, the emission wavelength range of the S-shell transitions in the QD, is closely matched to the bottom oxide cladding. This provides symmetric mode confinement of the single photons in the silicon nitride waveguide. Finally, the chip is cleaved through a crystallographic direction of the silicon wafer, providing a smooth chip-facet for coupling light into the waveguides using a tapered optical fiber. The fabrication steps are depicted in Fig. <ref>(d) to (f). §.§ Nanowire QD growth Chemical beam epitaxy using Trimethylindium (TMI), phosphine (PH_3) and arsine (AsH_3) as sources of In, P and As, respectively, was used to grow the wurtzite nanowires for this study. We us a selective-area vapour-liquid-solid growth technique described in detail in Refs. <cit.>. On a (111)B InP substrate we deposit a 20 nm thick SiO_2 mask. Using electron-beam lithography, HF wet-etching and metal lift-off we produce patterned substrates consisting of gold droplets in the centres of holes in the SiO_2 mask. On this substrate we first grow InP nanowire cores which have a diameter of 20 nm, set by the droplet size. In these cores we incorporate InAsP quantum dots ∼ 5 nm thick and having the same diameter as the core. We then clad the core with an InP shell to produce a photonic nanowire having a base diameter of 250 nm which tapers to 100 nm over the ∼ 15 μm length of the nanowire. The cladding is produced by adjusting the growth conditions from that used to grow the core, in particular, increasing the growth temperature from 435^∘ to 450^∘ and increasing the V/III ratio. The growth process is shown in Fig. <ref>(a)-(c). A scanning electron microscope image of an array of deterministically fabricated nanowire quantum dots is shown in Fig. <ref>(d). § FOURIER-SPACE IMAGES OF W STATES The quantum interference between different channels in our system, which is revealed by the Fourier-space image, resembles that of n-slit experiment. To reveal this similarity, we constructed different quantum W states and computed their Fourier transform as shown in Fig. <ref>. We selected W states of the following orders: |W⟩=1/√(1)(|1⟩) |W⟩=1/√(2)(|10⟩+|01⟩) |W⟩=1/√(4)(|1000⟩+|0100⟩+|0010⟩+|0001⟩) |W⟩=1/√(6)(|100000⟩+|010000⟩+|001000⟩+|000100⟩+|000010⟩+|000001⟩) |W⟩=1/√(8)(|10000000⟩+|01000000⟩+|00100000⟩+|00010000⟩+|00001000⟩+|00000100⟩ +|00000010⟩+|00000001⟩). In the trivial case of a single Gaussian beam input g (x,y), W state of the 1^st order, the far-field diffraction is the Fourier transform (FT) of the input mode. The Fourier transform is a scaled Gaussian function, but in the spatial frequency space (f_x,f_y). The results are shown in Fig. <ref>(a) and (b) and described by = FT [g(x,y))] = G(f_x,f_y). The situation becomes more interesting when more quantum channels are involved. For example, in the 2^nd order W state, we can use the the translation property of the Fourier transform. If we assume that the two input modes are located at distances ± d from zero in the x-direction, the interference pattern can be written as = FT [g(x-d,y)+g(x+d,y))] = G(f_x,f_y)[e^j2 π f_xd+e^-j2π f_xd] The intensity of the interference pattern is simply a Gaussian function modulated by a squared cosine function. The input mode profile and the diffraction pattern intensity profile are shown in Fig. <ref>(c) and (d) and given by ∼ G(f_x,f_y) ·cos^2(2π f_xd). 
As the number of modes is increased, we can write the diffraction pattern of the W state as ∼ G(f_x,f_y) ·sin^2(N π f_xd)/sin^2( π f_xd). Here, N is the number of interfering modes in the W state in Eq. <ref>. The position of the bright regions in the Fourier transform is preserved, following the same physics as in multi-slit diffraction. The results for the cases of 2, 4, 6, and 8 modes are shown in Fig. <ref>(c) to (k). This is in a stark contrast to the mixed-state case, where the interference between different modes in the W state is lost, resulting in an incoherent mix of all the modes, with vanishing diffraction pattern. In our setup the Fourier transform can be computed by inserting a lens to project the W state output of the chip to the back-focal plane of the lens. The image in the back-focal plane of a positive lens, having a focal length f and light of wavelength λ, is given by G( x', y')=exp{j πx'^2+y'^2/λ f(1-z/f)}∬ g(x, y) exp{-j 2 πx x'+y y'/λ f}d x  d y, where x' y' are the transverse spatial coordinates after the lens at a distance z. The output image is simply the 2-dimensional Fourier transform of the input, as shown by the integral over the input state in the second term. The integration is taken over the pupil function of the lens. The first term describes the spherical wave-front free-space propagation. When the camera is placed exactly one focal distance behind the lens, the Fourier transform computed by the lens is exactly G( x_f, y_f)=∬ g(x, y) exp{-j 2 πx x_f+y y_f/λ f}d x  d y, where the spatial frequencies in the f_x and f_y direction are related to the spatial coordinates at the focal point x_f, y_f by f_x=x_f/ λ f and f_y=y_f/ λ f. Moreover, the translation property of the Fourier transform can be understood in our optical setup as shown in Fig. <ref>. Different modes of the W state are focused to the back-focal plane, with each having a different path-length corresponding to unique phase factor in the Fourier transform. This coherent locked phase between the modes enables the diffraction pattern we measure in the experiment between different single-photon channels. § ENTANGLEMENT WITNESSES An entanglement witness is a self-adjoint operator 𝒲 satisfying tr(𝒲ρ)≥ 0 for all density operator representing a separable (i.e., non-entangled) quantum state. Thus, tr(𝒲ρ) being negative is a sufficient criterion to conclude that a given state ρ is entangled. The construction of an adequate entanglement witness depends on some prior knowledge about the devices that produce such a quantum state. Here, in particular, we ideally produce the 8 order W state, |W_8⟩. In that case, a good ansatz for entanglement witness is <cit.> W_αβγ=α𝒫_0+β𝒫_1+γ𝒫_2-|W_8⟩⟨ W_8 |. Here 𝒫_i are projectors onto the subspaces with exactly i excitations. We need to guarantee that 𝒲_αβγ is actually an entangled witness. From the symmetry of the W state, it suffices to prove non-negativity for states |a⟩⊗ |b⟩, where |a⟩= a_0|00...00⟩+a_1(|00...01⟩ + ...+ |10...00⟩) and similarly for |b⟩. Therefore, the problem of finding an entanglement witness for the produced state ρ is read as Find αβγ Subject to ⟨ ab|𝒲_αβγ|ab ⟩≥ 0 and tr(𝒲_αβγρ)<0. For N=8, the number of parameters allows this problem to be solved numerically. The states we consider in the main text have the form ρ=(1-q)[p|W_8⟩⟨ W_8|+(1-p)𝒫_1/8]+q𝒫_2/28. The condition for ρ being entangled then reads (1-q)(β-7p+1/8)+γ q<0, given that 𝒲_αβγ is a witness. For small values of 1-p and q, it is possible to find such a witness. 
In particular, we numerically verify it for p>0.7 and q<0.2.
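As a closing illustration of the Fourier-space analysis in the preceding section, the contrast between the coherent W state and its fully decohered counterpart can be reproduced numerically. The sketch below places N Gaussian output spots at hypothetical waveguide positions and compares the Fourier-plane intensity of the coherent sum of the fields with the incoherent sum of the individual intensities; the geometry is illustrative and not the actual chip layout.

```python
import numpy as np

N = 8           # number of W-state modes (output waveguides)
pitch = 10.0    # assumed spot spacing in units of the spot size (hypothetical)
x = np.linspace(-200.0, 200.0, 4096)

centers = (np.arange(N) - (N - 1) / 2) * pitch
fields = np.array([np.exp(-(x - c) ** 2 / 2.0) for c in centers])

# Coherent W state: Fourier transform of the *sum* of the fields (cross terms survive).
coherent = np.abs(np.fft.fftshift(np.fft.fft(fields.sum(axis=0)))) ** 2

# Fully decohered (mixed) state: sum of the individual Fourier intensities, no cross terms.
mixed = np.sum(np.abs(np.fft.fftshift(np.fft.fft(fields, axis=1), axes=1)) ** 2, axis=0)

# The coherent case shows multi-slit-like fringes ~ sin^2(N pi f d)/sin^2(pi f d) under the
# Gaussian envelope; the mixed case shows only the smooth envelope.
print(coherent.max() / mixed.max())   # roughly N: constructive enhancement of the central peak
```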
http://arxiv.org/abs/2307.05417v1
20230711163427
No-resonance conditions, random matrices, and quantum chaotic models
[ "Jonathon Riddell", "Nathan Pagliaroli" ]
quant-ph
[ "quant-ph" ]
[email protected] School of Physics and Astronomy, University of Nottingham, Nottingham, NG7 2RD, UK [email protected] Department of Mathematics, Western University, 1151 Richmond St, London ON N6A 3K7, Canada In this article we investigate no-resonance conditions for quantum chaotic and random matrix models. No-resonance conditions are properties on the spectrum of a model, usually employed as a theoretical tool in the analysis of late time dynamics. The first order no-resonance condition holds when a spectrum is non-degenerate, while higher order no-resonance conditions imply sums of an equal number of energies are non-degenerate outside of permutations of the indices. The condition is usually assumed to hold for quantum chaotic models. In this work we use several tests from random matrix theory to demonstrate that no-resonance conditions are likely to be violated for all equal sums containing greater than one energy. This is due to the presence of level-attraction in the spectra after resolving appropriate symmetries. This result is produced for both a quantum chaotic Hamiltonian and two random matrix models. We then generalize important bounds in quantum equilibration theory to a case where the conditions are violated, and to the case of random matrix models. No-resonance conditions, random matrices, and quantum chaotic models Nathan Pagliaroli August 12, 2023 ==================================================================== One of the most ubiquitous observations in many body physics is the connection between the spectral statistics of many body quantum systems and that of random matrices. Quantum systems are not chaotic in the classical sense since unitary time evolution guarantees that the overlap between two states in time is constant. This excludes the classical notation of chaos in quantum systems for which we observe exponential sensitivity to small differences in initial conditions. However, their spectral statistics behave qualitatively differently if their corresponding classical limit is integrable or chaotic. If the classical limit is chaotic, the spectral statistics of the quantum Hamiltonian agree with the predictions of Random Matrix Theory (random) and we refer to these models as quantum chaotic <cit.>. The notion of quantum chaos can be extended to quantum systems that do not have a well-defined classical limit <cit.>. An extremely important property of the spectral statistics of a quantum chaotic Hamiltonian is the presence of level-repulsion amongst neighboring energies. Originally this level-repulsion was first modeled for heavy atomic nuclei by Wigner using Gaussian ensembles of random matrices. Since Wigner's work, it has been established that features of the spectrum of classically chaotic quantum systems are accurately described by various ensembles of random matrices<cit.>. The connection between the spectrum of quantum chaotic systems and random matrices has been well studied in single particle systems <cit.>, along with many body systems <cit.> and recently has seen a surge of interest in the case of circuit or periodically driven type models <cit.>. The first to extend Wigner's work were Dyson and Mehta in the series of papers <cit.>. In particular, Dyson classified the three most immediately relevant ensembles: the Gaussian Unitary Ensemble, the Gaussian Orthogonal Ensemble, and the Gaussian Symmplectic Ensemble in what is known as the “threefold way" <cit.>. Of the most immediate interest to this work is the Gaussian Orthogonal Ensemble (GOE). 
The Bohigas, Giannoni, and Schmit (BGS) conjecture <cit.> states that the GOE has the same level-spacing as a wide class of quantum systems with classical limits <cit.>. Let E_0≤ E_1≤ E_2, ... be a sequence of unfolded energy eigenvalues of the GOE; then Wigner surmised the distribution of average consecutive level-spacings, that is the average of s_k = E_k+1 - E_k for all k is p(s) =π s/2e^-π^2 s^2/4. To see how to unfold a spectrum see Chapter 6 of <cit.> or for example <cit.>. It is important to note that Wigner's Surmise is an approximation <cit.> of the actual distribution, originally derived in <cit.>. This was further simplified in terms of Painlevé transcendentals <cit.>. In contrast to level-repulsion, if one considers the level-spacing of i.i.d. random variables, not only does one not see repulsion, but rather one sees attraction <cit.>, which has been used as a marker for non-chaotic systems <cit.>. In particular after unfolding the spacing of such systems, the distribution is Poisson p(s) = e^-s. The presence of level-repulsion and GOE spectral statistics is a hallmark test of Quantum chaos, while Poisson statistics are associated with integrable or non-chaotic models. A key consequence of the presence of level-repulsion is that the value of the probability density at zero is zero, meaning that we can assume with high probability that we will not find degeneracies in the quantum chaotic spectrum. This observation is useful, for example, when considering dephasing arguments, which has recently been particularly popular in the quantum equilibration community <cit.>. If we consider the time-evolution of many dynamical functions under unitary dynamics, time-dependent terms in the series will often appear as the following: z e^-i(E_m-E_n)t, where z is a complex number and t is time. Terms such as these survive the infinite time average if and only if E_m = E_n. In the case of quantum chaotic Hamiltonians it is a safe assumption that any surviving term would imply that m=n, since we do not expect degeneracy due to the presence of level-repulsion. The cases where E_m = E_n and m ≠ n are referred to as resonances. However, in general dynamical functions can be more complex with terms such as z e^-i(E_m_1-E_n_1+ E_m_2-E_n_2+...)t. Such terms can, for example, appear in out of time ordered correlators or other higher order correlation functions <cit.>. To discuss the terms that survive the infinite time average in equation <ref> we introduce the qth order no-resonance condition. Let H be a Hamiltonian with spectrum H= ∑_j E_j |E_j⟩⟨E_j|, and let Λ_q,Λ'_q be two arbitrary sets of q energy levels { E_j}. H satisfies the q no-resonance condition if for all Λ_q,Λ'_q, the equality ∑_j ∈Λ_q E_j = ∑_j ∈Λ'_q E_j implies that Λ_q=Λ'_q. By definition <ref> the set of terms that satisfy the q no-resonance condition are the minimum set of terms that survive the infinite time average as in equation <ref>. Terms that fall outside of definition <ref> are referred to as q-resonances. Typically in the literature it is suggested that quantum chaotic Hamiltonians satisfy definition <ref> <cit.>. This greatly simplifies arguments involving infinite time averages in quantum chaotic models. Despite this condition being somewhat common in the literature, studies only test this condition for the q=1 case where one finds level-repulsion governed by the Wigner-Dyson distribution <cit.>. 
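Definition <ref> can be made concrete by brute force on small spectra. The sketch below enumerates all sums of q distinct energy levels and flags (near-)resonances up to a numerical tolerance; the cost grows combinatorially, so it is only meant to illustrate the definition, and a random toy spectrum stands in for an actual Hamiltonian.

```python
import numpy as np
from itertools import combinations

def count_q_resonances(energies, q, tol=1e-10):
    """Count violations of the q no-resonance condition: pairs of *different* index sets
    of size q whose energy sums agree within tol. Sorting the sums means any (near-)equal
    pair appears as consecutive entries."""
    sums = sorted((sum(energies[i] for i in idx), idx)
                  for idx in combinations(range(len(energies)), q))
    violations = 0
    for (s1, idx1), (s2, idx2) in zip(sums, sums[1:]):
        if abs(s2 - s1) < tol and set(idx1) != set(idx2):
            violations += 1
    return violations

rng = np.random.default_rng(1)
spectrum = np.sort(rng.normal(size=12))   # toy spectrum standing in for a real Hamiltonian
for q in (1, 2, 3):
    print(q, count_q_resonances(spectrum, q))
```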
As for the q=2 case, an explicit formula is known for the density of states <cit.>, but as far the authors can tell nothing is known about the level-spacing distribution. However, as we will see, the numerical simulations performed in this paper strongly suggest that for the GOE the q=2 level-spacing distribution is Poisson. In the appendix we numerically demonstrate that q=3,4 also appear Poisson and have level-attraction. We then conjecture that all level spacing distributions for q≥2 have level-attraction and appear Poissonian. § SPECTRAL STATISTICS FOR A QUANTUM CHAOTIC HAMILTONIAN In this section we first investigate what the spectral statistics look like for a specific quantum chaotic model. In particular we study a Heisenberg type model with nearest and next nearest neighbour interactions. H = ∑_j=1^L J_1 ( S_j^+ S_j+1^- + h.c.) + γ_1 S_j^Z S_j+1^Z + J_2 ( S_j^+ S_j+2^- + h.c.)+ γ_2 S_j^Z S_j+2^Z, where (J_1,γ_1,J_2,γ_2) = (-1,1,-0.2,0.5) gives us a non-integrable model. This model has a free limit for (J_1,0,0,0) and an interacting integrable limit for (J_1,γ_1,0,0). Recently this model was confirmed to obey the eigenstate thermalization hypothesis <cit.>. We perform full spectrum exact diagonalization in the maximally symmetric sector of this model. In particular, this matrix conserves the total magnetization m_z = ∑_j S_j^Z, and is translation invariant. We choose to work in the sector such that ⟨ m_z ⟩ = 0 with quasi-momenta k = 0. This allows us to further diagonalize the model with the spatial reflection symmetry P and the spin inversion symmetry Z. In this section we will focus on the spectral statistics for the cases q=1, as a benchmark, and q = 2, the first non-resonance condition that is unexplored in the literature. As we will show in the appendix, the behavior for q > 2 is qualitatively similar to q=2. First, let us establish that our model satisfies the usual tests for quantum chaos in the q=1 case. Perhaps the most common test is to investigate the level spacing distribution s_j = E_j+1-E_j. The act of unfolding allows us to have a universal scale for the comparison of spectra of different Hamiltonians. The distribution of s_j for a quantum chaotic model should be a Wigner surmise. To unfold the spectrum we use Gaussian broadening. Namely we map our energies E_k to ϵ_k in the following way <cit.>, ϵ_k = N(E_k), N(E) = ∫_-∞^E ∑_k 1/σ_k √(2 π) e^-(e-E_k)^2/2 σ_k^2 de, where we use the same convention as in <cit.> and take σ_k = 0.608 αΔ_k, where Δ = (E_k+α-E_k-α)/2α and we find that α = 20 is quite suitable for our spectrum. Fig. <ref> demonstrates that our model for q=1 has level-repulsion and appears to have a level spacing distribution well approximated by the Wigner surmise. While this result shows us that our spectrum strongly resembles the predictions of RMT, the unfolding procedure is usually chosen to find such agreement, therefore it is desirable to perform a test that does not need unfolding. Such a test is given by investigating the distribution of ratios between successive gaps <cit.>. We introduce the ratios r_j = min{s_j, s_j+1}/max{s_j, s_j+1}, which tells us that r_j ∈ [0,1]. We emphasize that the s_j we use here don't need to be unfolded gaps. This test can be done with the model's physical spectrum. For the GOE in <cit.> it was analytically shown that the distribution of the r_j for 3× 3 matrices is given by p(r) = 27/4r+r^2/( 1+r+r^2)^5/2. 
If our energy levels were instead independent, randomly distributed variables, we would get level-attraction, p(r) = 2/(1+r)^2. We see in Fig. <ref> (b) that our result experiences level-repulsion, agreeing with the distribution in equation <ref>. Next we consider the case for q=2. The spectrum we are now interested in is equivalent to the spectrum of the Hamiltonian, Ĥ_2 = Ĥ⊗𝕀 + 𝕀⊗Ĥ, which has the spectrum Λ_k,l = E_k+E_l. This construction introduces an unwanted symmetry in the spectrum of Ĥ_2, namely that Λ_k,l = Λ_l,k, that is, the spectrum is invariant under permutations of the individual energies' indices. For q=2 this might be understood as a spatial reflection symmetry for a larger two-component non-interacting system. Addressing this symmetry is simple. We only consider unique pairs of (k,l), namely, we take l>k, where we also ignore the portion of the spectrum where k=l. Ignoring k=l does not appear to significantly alter the results but allows us to eliminate trivial multiples of the q=1 spectrum. In fact, the contribution of the k=l portion of the spectrum is vanishingly small compared to the total size of our spectrum. We further introduce a new index that orders the spectrum α = 1,2… such that Λ_α< Λ_α+1. With this new spectrum we can analyze the level spacing and ratio distribution. Fig. <ref> indicates that the spectrum of Ĥ_2 experiences level-attraction. This is contrary to the q=1 case, which has level-repulsion. Importantly this indicates that the spectrum of Ĥ_2 behaves like an integrable model, and has gaps clustered around s=0. While this does not guarantee violations of the q=2 no-resonance condition, it does make violations more likely. Likewise, we expect a large number of pseudo-violations such that s_j = Λ_j+1 - Λ_j ≈ 0, meaning that unless very large time scales (potentially non-physically large) are considered these violations would appear as resonances in the spectrum. Considering this fact, results such as <cit.> should be investigated to understand the effects of resonances. In appendix <ref> we demonstrate that the Poisson statistics and level-attraction persist for higher values of q and conjecture that level-attraction persists for all values of q>1. One further test we can perform is to compute the actual average value of r observed in the ratio distribution. The reference values are ⟨ r ⟩ = 2ln 2 - 1 ≈ 0.38629436112 for Poisson systems and ⟨ r ⟩ = 4-2√(3) ≈ 0.535898384 for the GOE. Testing this quantity allows us to clearly observe convergence to the predictions of random matrix theory as a function of system size. We see this test in Fig. <ref>. In the right panel we see the test for q=2, which reveals a strong convergence in agreement with the Poisson predictions. The data at L = 22 gives ⟨ r⟩ = 0.386294325894, which confirms seven decimal places of convergence. Therefore, from the perspective of short range correlations in the spectrum we conclude that Ĥ_2 obeys Poisson statistics, and importantly, that the q=2 case experiences level-attraction. In appendix <ref>, we demonstrate that this level-attraction persists for higher values of q and speculate that for all q>1 the spectrum must experience level-attraction. In appendix <ref> we repeat our numerical studies but for random matrices, showing that our results from a quantum chaotic Hamiltonian agree with the results of RMT. Importantly, our tests here are local tests on the spectrum.
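The symmetry-resolved q=2 ratio test described above is straightforward to reproduce for any spectrum. The sketch below builds Λ_k,l = E_k + E_l with l > k, computes the ratios of consecutive gaps, and compares ⟨r⟩ with the Poisson and GOE reference values; a GOE-like random matrix is used as a stand-in for the symmetry-resolved spin-chain spectrum, in the spirit of the random-matrix appendix.

```python
import numpy as np

def mean_gap_ratio(levels):
    """Mean ratio <r> of consecutive level spacings (no unfolding needed)."""
    s = np.diff(np.sort(np.asarray(levels)))
    r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
    return float(r.mean())

def q2_spectrum(energies):
    """Symmetry-resolved q=2 spectrum: Lambda_{k,l} = E_k + E_l with l > k."""
    E = np.asarray(energies)
    k, l = np.triu_indices(len(E), k=1)
    return np.sort(E[k] + E[l])

# GOE-like random matrix standing in for the symmetry-resolved many-body spectrum.
rng = np.random.default_rng(0)
A = rng.normal(size=(800, 800))
E = np.linalg.eigvalsh(A + A.T)

print("q=1 <r> =", mean_gap_ratio(E))                 # should be close to 4 - 2*sqrt(3)
print("q=2 <r> =", mean_gap_ratio(q2_spectrum(E)))    # should be close to 2*ln(2) - 1
print("Poisson:", 2 * np.log(2) - 1, " GOE:", 4 - 2 * np.sqrt(3))
```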
It is an open question if the symmetry resolved Hamiltonian Ĥ_2 will still obey Poisson statistics for more complex tests such as investigating the spectral form factor <cit.>. We leave this question to future work. We emphasize that the presence of level-attraction does not imply violations of the q>1 no-resonance condition. It does, however, imply the gaps in the spectrum of Ĥ_2 cluster close to zero. If we investigate the probability of finding a gap within the range 0<s<ϵ, where ϵ is small, we have for the GOE, ∫_0^ϵπ s/2e^-π^2 s^2/4 ds = 1-e^--π^2 ϵ^2/4/π≈πϵ^2/4 - π^3 ϵ^4/32…, so we see the probability is proportional to ϵ^2 for small gaps. On the contrary for the Poisson distribution one intuitively yields something much larger, ∫_0^ϵ e^-s ds = sinhϵ - coshϵ +1 ≈ϵ - ϵ^2/2…, giving us only linear scaling for small gaps. While both probabilities are of course small, the GOE is significantly smaller, giving one a significantly stronger case to assume definition <ref> is satisfied in your chaotic model. In the case of Poisson statistics, one might expect to find one or many gaps that are essentially zero due to level-attraction. Infinite time averages are theoretical tools for which we average over times significantly longer than the Heisenberg time τ_H ∼ e^S, where S is the thermodynamic entropy at the appropriate energy E <cit.>. The presence of essentially zero gaps will lead to terms e^i(E_k-E_k-1)t which are stationary on time scales proportional to τ_H. Despite the presence of such violators, we expect the set of problematic gaps to be small relative to the total Hilbert space dimension. Since it is likely that some violations or cases that are indistinguishable from violations of definition <ref> are inevitable, especially for cases using q>1, it is instructive to revisit past results keeping in mind a small number of violations will most likely be present. Below we discuss modifying key results in the field of quantum equilibration theory to accommodate the presence of violations of definition <ref>. § EQUILIBRATION AND RECURRENCE §.§ Physical models In this section we tackle the problem of equilibration in light of our investigation of the higher order no-resonance conditions and the presence of level-attraction. First, let us review a basic setup. Consider a time independent system with the Hamiltonian Ĥ where we label the energy eigen basis as Ĥ|E_k⟩ = E_k |E_k⟩. For simplicity, we take the spectrum of Ĥ to be discrete and finite. We will initialize our system in some pure state |ψ (t=0)⟩ = ∑_m c_m |E_k⟩. To track equilibration, we study properties of the expectation value of an observable Â. This observable is general, but we demand that the largest single value ||A|| is independent of system size or saturates to a finite value in the thermodynamic limit. In what follows we will assume our spectrum has level-repulsion, so that we may safely assume, E_m = E_l m = l . If our observable equilibrates, its finite time value ⟨Â(t) ⟩ = ⟨ψ(t)|  | ψ(t) ⟩ must relax to its infinite time average value i.e. A = lim_T →∞1/T∫_0^T⟨Â(t) ⟩ dt =lim_T →∞1/T∫_0^T ∑_m,nc̅_m c_n A_m,n e^i(E_m-E_n)t dt = ∑_m |c_m|^2 A_m,m. A̅ is usually written in terms of the diagonal ensemble ω = ∑_m |c_m|^2 |E_m⟩⟨ E_m| as A̅ = ( ωÂ). A typical quantity to study in quantum equilibration would be the variance of the expectation value around A̅. This was studied and bounded in <cit.> assuming that the q=2 no-resonance condition was satisfied. The variance is written as μ_2 = lim_T →∞1/T∫_0^T ( ⟨Â(t) ⟩ - A̅)^2 dt. 
It was famously found in <cit.> that this variance can be bounded by the purity of the diagonal ensemble μ_2 ≤ ||A||^2 ( ω^2 ) . Note equation <ref> holds as a consequence of the q=2 no-resonance condition holding. The purity of the diagonal ensemble usually decays exponentially fast with respect to the system size (see for example Fig. 2 in <cit.>). If one assumes higher order q no-resonance conditions, it was recently found that, for higher moments, μ_q = lim_T →∞1/T∫_0^T ( ⟨Â(t) ⟩ - A̅)^qdt, a similar bound can be found <cit.>, |μ_q| ≤(q ||A|| √(( ω^2 )))^q. In light of section <ref> and the presence of level-attraction for higher order q, these results should be updated to reflect the high probability of there being a violation of the q no-resonance condition. Suppose we have a model that has violations of the q no-resonance condition. Then the moments μ_q can be bounded as |μ_q| ≤ ||A||^q ( q^q + 𝒩_q,L/2q)√(( ω^2 ))^q, where 𝒩_q,L is the maximum number of times one E_m appears in violations of the q no-resonance condition for a given system size L. We call the E_m's that appear in more than one violation of the resonance condition exceptional violators. Terms that contribute to μ_q are sums of energies that are equal. Let Λ_q and Λ_q' be sets of indices corresponding to particular energies, ∑_m∈Λ_q E_m = ∑_m∈Λ_q' E_m. The no-resonance condition picks out the trivial set of energies that satisfy this equality, which is when Λ_q = Λ_q'. These contributions were bounded in <cit.>. We collect the remaining violations in a set 𝒮 and write, |μ_q| ≤(q ||A|| √(( ω^2 )))^q +| ∑_Λ_q ∈𝒮∏_j=1^q c̅_m_j c_n_j A_m_j,n_j|, where we have identified Λ_q ∈𝒮 = { m_j,n_j }. The second term can be bounded as follows. |∑_Λ_q ∈𝒮∏_j=1^q c̅_m_j c_n_j A_m_j,m_j| ≤ ||A||^q ∑_Λ_q ∈𝒮∏_j=1^q |c_m_j|| c_n_j|. Since all |c_m_j| are positive, we may use the inequality of arithmetic and geometric means, giving ≤||A||^q/2q∑_Λ_q ∈𝒮∑_j=1^q ( |c_m_j|^2q + |c_n_j|^2q). We know that ( ω^q ) = ∑_m |c_m|^2q. Assuming an individual |c_m_j|^2q contributes at most 𝒩_q,L times, we have that ≤||A||^q 𝒩_q,L/2q( ω^q ). We lastly recall that ( ω^q ) ≤( ω^2 )^q/2, which completes the proof. Accommodating the presence of degenerate gaps for the q=2 case has been considered before in <cit.>. Our bound reads, |μ_2| ≤ ||A||^2 ( 1+𝒩_2,L/4) ( ω^2 ). Instead, one can likewise write <cit.> as |μ_2| ≤ N(ϵ) ||A||^2( ω^2 ), where N(ϵ) is the maximum number of energy gaps in any interval for ϵ > 0, i.e. N(ϵ) = max_E | { (k,l) | E_k - E_l ∈ [E,E+ϵ) }|. One can recover the maximum degeneracy of the gaps by considering lim_ϵ→ 0^+ N(ϵ). In the limit of non-degenerate gaps these bounds are identical, and only differ by a constant factor for a small number of degeneracies in the gaps. Our result might in theory give better constant factors than the result in <cit.>, however N(ϵ) is likely a more intuitive quantity and easier to work with numerically. We next wish to understand the properties of 𝒩_q,L, which in practice is challenging to study numerically. The worst scaling it could have is the total number of violations, i.e. 0≤𝒩_q,L≤ |𝒮|. As we have noted earlier, the presence of level-attraction does not imply |𝒮| >0. An easy property to understand however is that if 𝒩_q,L≥ 2 this implies at the very least that 𝒩_q+1,L≥ 1. To see this consider q = 2 for an exceptional violator E_m that appears at least twice. We might have E_m as an exceptional violator as, E_m + E_n = E_p +E_l, E_m + E_k = E_r + E_h. 
This implies for q = 3, a violation of the no-resonance condition is E_p + E_l + E_k = E_r + E_h +E_n. Despite two exceptional violations for q = 2 implying at least one for q = 3, this does not imply 𝒩_q,L is decreasing in q. To get a handle on the size of 𝒩_q,L we can attempt to quantify the expected or average behavior of the quantity. First, let us assume we randomly generated the set S. We will assume that the indices which appear are uniformly generated, so each element of S can be understood to be a tuple of 2q indices, (m_1, … m_2q). These indices are not necessarily independent. For example, they cannot be equal to each other under our assumptions. Despite this, in the large L limit this dependence cannot affect the results due to the smallness of q and the corresponding exponential nature of the number of possible indices 2^L. We can therefore focus on the first index of each tuple m_1. Our goal will be to predict the average number of times m_1 ends up being the same index. It can at most appear |S| times, and thus we wish to compute ⟨𝒩_q,L⟩= ∑_n=1^|S| n p(n), where p(n) is the probability of the same index appearing n times. The total number of configurations possible for the first index of each tuple in S is 2^|S|L, and therefore we must simply count the number of configurations where n copies of the same m_1 appear. This is given by the binomial factor \binom{|S|}{n} 2^L(|S|-n), which gives the following formula for our expected value ⟨𝒩_q,L⟩= ∑_n=0^|S| n \binom{|S|}{n} 2^-Ln = |S|/2^L (2^-L+1)^|S|-1. We now have some special limiting cases to consider. Suppose that |S| ∝ c 2^L for some constant c. Then the expected value of ⟨𝒩_q,L⟩ is c e^c as L goes to infinity. However, if |S| has sub-exponential growth, for example if it scales as L, then the expected value goes to zero for large system size as 𝒪(L/2^L)= 𝒪(|S|/2^L). Therefore we expect that in most cases, even with modest violations of the no-resonance condition, lim_L →∞𝒩_q,L is finite and quite small. §.§ A Random Matrix Theory Approach In this section we will show how one could compute μ_q for the GUE and GOE with an unfolded spectrum in the large N limit. We can rewrite equation <ref> for finite T as μ_q(T) = ∑_i_1, j_1,...,i_q, j_q( ∏_k=1^qc_i_kc_j_k⟨ i_k|A|j_k⟩)1/T∫_0^T e^i∑_k=1^q(λ_i_k-λ_j_k)t dt, where the eigenvalues are unfolded and drawn from either an N× N GUE or GOE distributed matrix. We define its moments as the corresponding ensemble expectation values. Define the n-level spectral form factor as 𝒦_2n^β(t) = 1/N^2n⟨∑_i_1,j_1,...,i_n,j_n=1^N e^i t∑_k=1^n(λ_i_k-λ_j_k)⟩_β, where the subscript β =1 or 2 denotes the GOE and GUE expectation values, respectively. Then we may express the expectation value of μ_q in terms of the q-level spectral form factor ⟨μ_q(T) ⟩_β = lim _N →∞∑_i_1, j_1,...,i_q, j_q( ∏_k=1^qc_i_kc_j_k⟨ i_k|A|j_k⟩)1/T∫_0^T⟨ e^i∑_k=1^q(λ_i_k-λ_j_k)t⟩_β dt = ( 1/T∫_0^T𝒦_2q^β(t)dt)∑_i_1≠ j_1,...,i_q≠ j_q( ∏_k=1^qc_i_kc_j_k⟨ i_k|A|j_k⟩) = ( 1/T∫_0^T𝒦_2q^β(t)dt) ( tr(A(ρ -ω)) )^q, where ρ is the density matrix of the initial state and ω the diagonal ensemble. It is also worth noting that this equation is so general that it applies to any random matrix ensemble. Usually the GUE and GOE are of interest, but progress has been made studying the spectral form factor for other matrix ensembles. For example see <cit.>. The q-level spectral form factor can be computed explicitly, but it is a computationally heavy task. For example see <cit.>, where it is computed but for ensembles that are not unfolded.
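When no closed form is available, the 2-level spectral form factor can still be estimated by straightforward Monte Carlo sampling over the ensemble. The sketch below does this for the GOE without unfolding (matching the simpler, non-unfolded setting just mentioned); the matrix normalization is an assumption chosen so that the spectrum follows the semicircle law on [-2, 2].

```python
import numpy as np

def sff_2level_goe(ts, N=200, samples=200, rng=None):
    """Monte Carlo estimate of the 2-level spectral form factor
    K_2(t) = <|sum_i exp(i t lambda_i)|^2> / N^2 for the GOE (no unfolding)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    K = np.zeros(len(ts))
    for _ in range(samples):
        A = rng.normal(size=(N, N))
        lam = np.linalg.eigvalsh((A + A.T) / np.sqrt(2 * N))   # GOE, semicircle on [-2, 2]
        phases = np.exp(1j * np.outer(ts, lam)).sum(axis=1)    # sum_i exp(i t lambda_i)
        K += np.abs(phases) ** 2
    return K / (samples * N**2)

ts = np.linspace(0.1, 60.0, 120)
K = sff_2level_goe(ts)
# K(t) displays the familiar dip-ramp-plateau shape and saturates near 1/N at late times.
```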
In particular, for the GOE and GUE, the 2-level spectral form factor of the unfolded spectrum has a well-known explicit formula in the large N limit <cit.>. This leads to the following result. For any fixed T greater than zero, for both the GOE and GUE the expectation value of μ_2(T) goes to zero as 1/N^2 in the large N limit. Furthermore, if T goes to infinity at the same rate as N (i.e. N=T), then μ_2(T) goes to zero as 1/T. From <cit.>, we know that for large N the spectral form factors can be approximated by 𝒦_2^1(t) ≈ 4t/(π N^2) + (2t/(π N^2))ln(1 + 4t/(π N)) if 0≤ t ≤π N/2, and 𝒦_2^1(t) ≈ 2/N + (2t/(π N^2))ln((4t/(π N) +1)/(4t/(π N) -1)) if t≥π N/2, while 𝒦_2^2(t) ≈ 2t/(π N^2) if 0≤ t ≤π N/2 and 𝒦_2^2(t) ≈ 1/N if t≥π N/2. Clearly, the first part of each piecewise function dominates for large N, which completes the first claim. Next, set T=N. Taking the time averages of the above quantities we get 1/T∫_0^T𝒦_2^1(t)dt ≈1/T(3/2 π-π/16ln( 1+4/π) +1/πln( 1+4/π) +3 π/32+1/4) and 1/T∫_0^T𝒦_2^2(t)dt ≈1/(π T). This proves the second claim. As we demonstrate in appendix <ref>, the spectrum of the random matrix Hamiltonian likewise experiences level-attraction for q≥ 2. However, despite the presence of level-attraction, the above RMT result indicates that we should still expect μ_q → 0, indicating equilibration on average of our observable. § CONCLUSION In this work we have explored spectral statistics of chaotic Hamiltonians, namely the statistics of sums of energies. We found that despite the Hamiltonians being chaotic, sums of energies displayed Poisson statistics instead of Wigner-Dyson statistics. This was demonstrated numerically for both a chaotic spin Hamiltonian and the GOE. The presence of level-attraction suggests that accounting for potential degeneracies or “resonances" in infinite time averages of some dynamical quantities is necessary. We applied this observation to the theory of equilibration, where we generalized known bounds to accommodate degeneracies. Assuming the number of degeneracies is not exponentially large in system size, we demonstrated that the bounds can be easily generalized to accommodate the presence of resonances. We further used techniques from RMT to prove that, for the GOE, moments of equilibration go to zero in the thermodynamic limit. § ACKNOWLEDGEMENTS J.R. would like to thank Bruno Bertini, Marcos Rigol and Alvaro Alhambra for fruitful conversations. J.R. would like to extend special thanks in particular to Bruno, who gave valuable feedback at various stages of the project. J.R. acknowledges the support of the Royal Society through the University Research Fellowship No. 201101. N.J.P. acknowledges support from the Natural Sciences and Engineering Research Council of Canada (NSERC). § RMT PREDICTIONS FOR Q=2,3 In this section we demonstrate that random matrix models have level-attraction for q = 2,3. We accomplish this by simply applying the ratio test outlined in the main text. First let us define a random matrix Hamiltonian, Ĥ = A + A^T, where A is a matrix filled with random numbers generated from a normal distribution with zero mean and unit variance. We label the side length of A as N. Similar to the physical Hamiltonian, we can study the q = 2 case by first constructing the Hamiltonian, Ĥ_2 = Ĥ⊗𝕀 + 𝕀⊗Ĥ, with the spectrum Λ_k,l. Again this spectrum is symmetric under permutations of the indices, so we resolve this symmetry and only treat eigenvalues with unique (k,l) such that l > k.
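A minimal ratio-test script of this kind is sketched below in Python. It builds a GOE-like matrix Ĥ = A + A^T, forms the symmetry-resolved q = 2 spectrum E_k + E_l with l > k directly from the eigenvalues (equivalent to diagonalizing Ĥ_2), and compares the mean consecutive-gap ratio with the standard Poisson and GOE reference values (≈ 0.386 and ≈ 0.536). The matrix size and seed are arbitrary choices made for this illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_gap_ratio(levels):
    """Mean of r_n = min(s_n, s_{n+1}) / max(s_n, s_{n+1}) over consecutive gaps."""
    e = np.sort(np.asarray(levels))
    s = np.diff(e)
    s = s[s > 1e-12]                      # drop numerically degenerate gaps
    return float(np.mean(np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])))

N = 400
A = rng.normal(size=(N, N))
H = A + A.T                               # random matrix Hamiltonian as defined above
E = np.linalg.eigvalsh(H)

# q = 1: the original spectrum, expected to follow Wigner-Dyson (GOE) statistics.
r_q1 = mean_gap_ratio(E)

# q = 2: symmetry-resolved sums Lambda_{k,l} = E_k + E_l with l > k.
k, l = np.triu_indices(N, k=1)
r_q2 = mean_gap_ratio(E[k] + E[l])

print(f"<r> for q=1 spectrum: {r_q1:.3f}   (GOE reference ~ 0.536)")
print(f"<r> for q=2 spectrum: {r_q2:.3f}   (Poisson reference ~ 0.386)")
```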
We investigate the spectral properties of this Hamiltonian after resolving our symmetry in figure <ref> in the left panel. Here we clearly see agreement with a Poisson distribution. The random matrix model experiences level-attraction. The construction of a new Hamiltonian for q = 3 is similar. We have Ĥ_3 = Ĥ⊗𝕀⊗𝕀 + 𝕀⊗Ĥ⊗𝕀 + 𝕀⊗𝕀⊗Ĥ. This gives us a new spectrum of Λ_k,l,q = E_k + E_l + E_q, where this new spectrum is also invariant under permutations of its indices. We resolve this symmetry by considering terms such that q>l>k. The result of the ratio test on this new spectrum is given in the right panel of Fig. <ref>, indicating again level-attraction and agreement with Poisson statistics. This is similarly found for higher values of q, which leads us to conjecture that this will be true for all q≥2. § PHYSICAL HAMILTONIAN Q = 3,4 SPECTRAL STATISTICS In this section we provide numerical evidence for level-attraction for q = 3,4 in the physical Hamiltonian. We repeat the q=3 case as was covered in the RMT appendix, and also investigate the q=4 statistics. Both cases will be covered with the ratio test, and we will use the same physical model as the main text, where we resolve all relevant symmetries. For the q = 4 case, we must work with the Hamiltonian, Ĥ_4 = Ĥ⊗𝕀⊗𝕀⊗𝕀 + 𝕀⊗Ĥ⊗𝕀⊗𝕀 + 𝕀⊗𝕀⊗Ĥ⊗𝕀 + 𝕀⊗𝕀⊗𝕀⊗Ĥ. This gives us a new spectrum, again which is invariant under index permutations. We can resolve this symmetry with an identical strategy to the q=2,3 cases, and study the corresponding symmetry-resolved spectrum. The results of this for q=3,4 in the physical Hamiltonian are given in Fig. <ref>. These results again indicate that the spectrum has level-attraction and obeys Poisson statistics. The left panel in Fig. <ref> also serves as evidence that the statistics of the Hamiltonian agrees with RMT as seen in the right panel of Fig. <ref>.
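Returning to the finite-T averages of the 2-level spectral form factor used in the GOE/GUE result of the main text, the 1/T behaviour is straightforward to confirm numerically from the quoted large-N approximations. The short sketch below is an illustration only and not part of the original analysis; the grouping of the piecewise formulas follows the expressions as reconstructed above. It averages the approximations on a grid over [0, T] with T = N and prints T times the average, which should approach an N-independent constant (equal to 1/π for the GUE).

```python
import numpy as np

def k2_gue(t, N):
    """Large-N approximation of the unfolded GUE 2-level form factor."""
    return np.where(t <= np.pi * N / 2, 2 * t / (np.pi * N**2), 1.0 / N)

def k2_goe(t, N):
    """Large-N approximation of the unfolded GOE 2-level form factor."""
    ramp = lambda t: 4*t/(np.pi*N**2) + 2*t/(np.pi*N**2) * np.log(1 + 4*t/(np.pi*N))
    plateau = lambda t: 2.0/N + 2*t/(np.pi*N**2) * np.log((4*t/(np.pi*N) + 1) / (4*t/(np.pi*N) - 1))
    return np.piecewise(t, [t <= np.pi * N / 2, t > np.pi * N / 2], [ramp, plateau])

for N in (200, 400, 800):
    T = N                                          # T grows at the same rate as N
    t = np.linspace(1e-9, T, 200_001)
    avg_gue = np.mean(k2_gue(t, N))                # approximates (1/T) * integral over [0, T]
    avg_goe = np.mean(k2_goe(t, N))
    print(N, T * avg_gue, 1 / np.pi, T * avg_goe)  # T*avg_gue -> 1/pi; T*avg_goe -> a constant
```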
http://arxiv.org/abs/2307.05722v1
20230710112941
Exploring Large Language Model for Graph Data Understanding in Online Job Recommendations
[ "Likang Wu", "Zhaopeng Qiu", "Zhi Zheng", "Hengshu Zhu", "Enhong Chen" ]
cs.AI
[ "cs.AI", "cs.CL", "cs.IR" ]
Exploring Large Language Model for Graph Data Understanding in Online Job Recommendations Likang Wu, Zhaopeng Qiu, Zhi Zheng, Hengshu Zhu, Enhong Chen ========================================================================================================================================================================== Large Language Models (LLMs) have revolutionized natural language processing tasks, demonstrating their exceptional capabilities in various domains. However, their potential for behavior graph understanding in job recommendations remains largely unexplored. This paper focuses on unveiling the capability of large language models in understanding behavior graphs and leveraging this understanding to enhance recommendations in online recruitment, including the promotion of out-of-distribution (OOD) applications. We present a novel framework that harnesses the rich contextual information and semantic representations provided by large language models to analyze behavior graphs and uncover underlying patterns and relationships. Specifically, we propose a meta-path prompt constructor that, for the first time, enables an LLM recommender to understand behavior graphs, and we design a corresponding path augmentation module to alleviate the prompt bias introduced by path-based sequence input. By leveraging this capability, our framework enables personalized and accurate job recommendations for individual users. We evaluate the effectiveness of our approach on a comprehensive dataset and demonstrate its ability to improve the relevance and quality of recommendations. This research not only sheds light on the untapped potential of large language models but also provides valuable insights for developing advanced recommendation systems in the recruitment market. The findings contribute to the growing field of natural language processing and offer practical implications for enhancing job search experiences. § INTRODUCTION Recommendation in online recruitment aims at suggesting relevant job opportunities to job seekers based on their preferences and qualifications, improving the chances of matching the right employment. With the exponential growth of online recruitment platforms and the need for efficient and personalized job search experiences, the development of effective job recommendation systems has become crucial. In online recruitment systems, job postings and resumes are written in natural language. Traditional approaches have treated job-resume matching as a supervised text-matching problem using paired data for training <cit.>. However, online recruitment platforms often suffer from sparse interaction data, with job postings attracting only a few candidates on average <cit.>. To address this, recent studies <cit.> have explored the use of behavior graphs to capture high-order interactions and alleviate the sparse interaction issue. These behavior graphs leverage message passing to enhance the understanding of user preferences. Different from many general recommendation tasks, it is easy to see that textual understanding forms the backbone of job recommendation, while behavior modeling contributes to the personalized module. In our work, we aim to break through the accuracy bottleneck of job recommenders by promoting the semantic richness of textual representation.
Inspired by several recent successful recommendations based on text pre-training <cit.>, we first introduce the large language model (LLM) as the job recommendation framework that directly generates targets to achieve this goal. There are many benefits and is also very natural to do this. For instance, out-of-distribution items usually appear in recruitment markets since new job demands are constantly emerging, such as prompt engineers for generative models. The powerful semantic mining ability and massive external knowledge of LLM enhance the generation and associative power of recommender, which is able to generate reasonable recommendation results for the hard OOD items. However, the existing learning schema of LLM recommender cannot understand the non-textual behavior graph which weakens the personalized recommendation ability for different job seekers. To address this challenge, we propose a meta-path prompt constructor to encode the interaction information of graph into the natural language prompt. Specifically, in such a heterogeneous behavior graph, each meta-path composed of various types of nodes and edges can be transferred into a description naturally since each type indicates a specific and meaningful interaction, e.g., interview, conversation, etc. Along this line, for each job seeker, LLM captures the high-order interaction feature to augment her personality with the meta-path prompt. Based on the above analysis, we explore the inclusion of graph data understanding in large language model-based recommendations for the first time. An efficient large language model named GLRec (Graph-understanding LLM Recommender) is proposed to optimize the recommended quality of job recommendation, which is fine-tuned with LoRa <cit.> in our constructed instruction dataset for aligning the gap between pre-trained knowledge and actual recruitment domain. Especially, our exploration presents two valuable and important findings that largely influence the graph understanding strategy of LLM: (i). Different paths would present different weights for the model decision. (ii). The position bias of the order of path prompts brings unstable answers. For this issue, we carefully design path shuffling, adaptive path selector, and their hybrid path augmentation mechanism to alleviate the negative impact brings by different path prompts. Through extensive experiments on real-world recruitment datasets, we observe a significant performance gain through the development of LLM and its graph learning strategy. The main contributions could be summarized as follows: * To our best knowledge, we are the first to implement the fine-tuned large language model as job recommender, which promotes matching accuracy via the semantic richness and massive knowledge of LLM. * We propose the meta-path prompt constructor that leverages LLM recommender to understand behavior graphs for the first time and design a corresponding path augmentation module to alleviate the prompt bias. * We conduct sufficient experiments on real-world recruitment datasets, and the experimental results and visualization cases show the superiority of our model. § RELATED WORK By combing through the research idea, our work is mainly related to two research areas: job recommendation and methods of LLM for recommendation. We will introduce the mainstream work of these two research directions in detail, and point out the shortcomings of existing methods to draw the motivation for our proposed framework. 
§.§ Job Recommendation Job Recommendation, especially job-resume matching is a necessary task in recruitment data mining, and it has been extensively studied in the literature <cit.>. Early methods approached this problem as a recommendation task <cit.>, relying on collaborative filtering assumptions. However, recent research has focused more on text-matching technology, aiming to improve the representation of job and resume documents [26]. Various techniques have been proposed to encode job and resume information. For example,  <cit.> utilized CNN for encoding, while <cit.> leveraged RNN and BiLSTM to capture sequential information.  <cit.> introduced a profiling memory module to learn latent preference representation by interacting with both job and resume sides. Additionally,  <cit.> explored the effectiveness of adversarial training for job-resume matching. In addition to the aforementioned research, there are also works that consider multi-granularity interactions. The ranking-based loss function can be used to capture multi-level interactions as supervision signals <cit.>. <cit.> propose a bilateral multi-behavior sequence model to describe users' dynamic comprehensive preferences. These approaches highlight the importance of considering various interaction patterns and incorporating additional user information to improve the quality of job recommendations. However, online recruitment platforms frequently encounter challenges due to sparse interaction data, resulting in job postings attracting only a limited number of candidates on average <cit.>. Recent studies <cit.> have investigated the utilization of behavior graphs to capture high-order interactions and mitigate the problem of sparse interactions. These behavior graphs employ message-passing techniques to enrich the understanding of personalized user preferences. §.§ Large Language Models for Recommendation LLMs offer the potential to extract high-quality representations of textual features and leverage extensive external knowledge to enhance recommendation systems.  <cit.> conducted a systematic review and analysis of existing LLM-based recommendation systems. Existing work can be divided into two categories: discriminative models and generative models. Most discriminative models align the representations of pre-trained models like BERT with domain-specific data through fine-tuning. For example, <cit.> proposed pre-training and fine-tuning-based approach to learn users' representation, which leveraged content-rich domains to complement those users' features with insufficient behavior data. Additionally, some research explores training strategies like prompt tuning.  <cit.> leveraged BERT's Masked Language Modeling (MLM) head to uncover its understanding of item genres using cloze-style prompts. Prompt4NR <cit.> pioneered the application of the prompt learning paradigm for news recommendation. Generative models usually translate recommendation tasks as natural language tasks, and then apply techniques such as in-context learning <cit.>, prompt tuning <cit.>, and instruction tuning <cit.> to adapt LLMs to directly generate the recommendation results. Compared to discriminative models, generative models have better natural language generation capabilities. In the job-resume matching area, there is a generative model which develops LLM to generate potential JDs for more explainable and suitable recommendations <cit.>. 
Although LLM recommenders achieve successful applications through their ability of knowledge association, the lack of graph data understanding ability reduces personalized adaption. In our work, we aim to address this crucial challenge in the online recruitment scenario. § METHODOLOGY In this section, we first illustrate our research problem formally and present related notations. Then the technical detail of GLRec would be introduced progressively. The overall framework is shown in Figure <ref>. §.§ Preliminary §.§.§ Problem Formulation Consider a set of candidates C = {c_1, c_2, …, c_n_1} and a set of jobs 𝒥 = {j_1, j_2, …, j_n_2}, where n_1 and n_2 represent the total number of candidates and jobs, respectively. Each candidate and job are associated with textual documents that describe their resumes and job requirements. They are also linked to a collection of directed interaction records (such as interviewing and discussing) within the recruitment platform. These interactions are formally represented as 𝒜c_i = {c_i → j' | c_i ∈ C, j' ∈𝒥} and 𝒜j_k = {j_k → c' | j_k ∈𝒥, c' ∈ C}, indicating the directed interactions or links initiated by candidate c_i or employer j_k (referred to as a job). We use i and k as indices for candidates and jobs, respectively. Our objective is to predict the compatibility between a job posting and a candidate. §.§.§ Generative Large Language Models Generative LLMs are powerful language models that can generate coherent and contextually relevant text. These models, such as GPT-3/4, are trained on vast amounts of text data and can generate human-like text based on a given prompt or input. Fine-tuning is a common adaption strategy to align the target of pre-trained model and domain-specific applications, such as two popular paradigms of prompt tuning, and instruction tuning. For all these tuning methods, they have an equal final objective loss of autoregressive training as follows: ℒ_f = max _Θ∑_(x, y) ∈𝒯∑_t=1^|y|log(𝒫_Θ(y_t| x, y_<t)), Taking instruction tuning as an example, which designs and constructs instruction data to restrict the output scope and format. x and y represent the “Instruction Input” and “Instruction Output” in the self-instruct data, respectively, e.g., Instruction Input: “Do you like this item?”, Instruction Output: “Yes.”. And y_t is the t-th token of the y, y_<t represents the tokens before y_t, Θ is the original parameters of LLM, and 𝒯 is the training set. §.§.§ Task-specific Instruction In our work, we design two job recommendation tasks to test the LLM recommender following existing related work <cit.>, i.e., point-wise and pair-wise job matching. Here we introduce our designed template for the sample in our dataset, where information related to privacy and business has been filtered. Assume there is a job seeker called candidate whose Candidate Profile Prompt and recommended JD Prompt are defined as: Candidate Profile Prompt: Age: 25, Education: Bachelor's degree, Graduation School: XXX University, Major: Computer Applied Science, Work Experience: 2 years. JD Prompt: Position Title: Full Stack Engineer, Educational Requirement: Bachelor's degree, Work Experience: 1-3 years, Skill Requirements: HTML/JAVA/Spring Boot/SQL. For the point-wise task, we let the LLM recommender learn to predict the satisfaction of a candidate with a recommended job. The instruction is designed as: Point-wise Instruction: You are a recommender, determining whether a candidate would be satisfied with the recommended job position. Please answer with “Yes." 
or “No.". For the pair-wise task, we let the LLM recommender learn to justify the preference of a candidate for a recommended job pair. Given two jobs' JD Prompt “A" and “B", the instruction is designed as: Pair-wise Instruction: You are a recommender, determining which position will match the candidate. Please answer with “[A]." or “[B].". With the above designed prompts and instruction text, the LLM is able to adapt to a domain recommendation situation. Note that, to ensure the stability of training, we add the JD prompt to the back of ground truth to increase the predicted length. To further fuse interaction knowledges, in the next section, we will illustrate the understanding part of graph data for the large language model: behavior meta-path prompt generation. §.§ Behavior Meta-path Prompt Generation To inject large language models with the ability to comprehend interactive relationships in graph data, we propose a meta-path-based prompt constructor to obtain prompt inputs that represent local subgraphs. Before delving into the details of our approach, it is necessary to provide a formal introduction to heterogeneous graph and meta-path. Heterogeneous Graph. A heterogeneous graph, denoted as 𝒢=(V, E), consists of an object set V and a link set E. A heterogeneous graph is also associated with a node type mapping function ϕ: V →𝒱 and a link type mapping function ψ: E →ℰ. 𝒱 and ℰ denote the sets of predefined object types and link types, where |𝒱|+|ℰ|>2. Meta-path. A meta-path P is defined as a path in the form of 𝒱_1 𝒱_2 ⋯𝒱_l+1 (abbreviated as 𝒱_1 𝒱_2 ⋯𝒱_l+1), which describes a composite relation ℰ_1 ∘ℰ_2 ∘⋯∘ℰ_l between objects 𝒱_1 and 𝒱_l+1, where ∘ denotes the composition operator on relations. Heterogeneous graphs are more diverse and complex in terms of their semantics compared to homogeneous graphs. Meta-paths are commonly used techniques to mine and represent the interaction semantics within them. In the context of online recruitment, the interactions between job seekers and job positions, which involve different types of behaviors, form a behavior graph. This behavior graph is a typical heterogeneous graph, where different node types include Candidate, JD, and different edge types include messaging, interviewing, matching, and more. Due to the unique and defined semantics of each type of edge in the behavior graph, it is natural to consider transferring the graph data format meta-path to a natural language description which is acceptable for the large language model. We only need to predefine the prompt template according to the appeared edges in a path and then fill in the template with the resume or job description information. For instance, given a typical meta-path c_1 j_1 c_2. The prompt template is constructed as: Meta-path Prompt: c_1 interviewed for position j_1. This position discussed with a job seeker c_2. The node information, i.e., the description of candidates or JD, then will be filled in the meta-path prompt template to generate the final prompt data in our dataset. The real case can be referred to in Figure <ref>. In addition, to avoid too similar meta-paths leading to redundancy, we define a simple similarity metric as follows, 𝒮_i,j = |P_i ∩ P_j |/|P_i ∪ P_j|,     P_i, P_j ∈Φ_P, where Φ_P denotes the set of sampled meta-paths for a candidate. P_i, P_j indicates two meta-paths in this set. |P_i ∩ P_j| is the number of tokens that exist simultaneously in two paths, and P_i ∪ P_j is the union of them. 
We ensure that 𝒮_i, j≤γ between the final selected M meta-paths and 0 ≤γ≤ 1 is a hyperparameter. §.§.§ Path Debiasing and Soft Selection Different from the traditional network embedding, sequence-based meta-path prompts would lead to two challenges for LLM to understand the candidates' behavior sub-graph. Influence of Path Weight. Different meta-paths would present different weights for the model decision. Position Bias of Path Prompt. The position bias of the order of path prompts brings unstable answers. These two challenges appeared when recognizing the pre-trained large language model as a recommender, which hinders the effective modeling of semantic relationships in the graph by LLM recommendation models. To provide a more intuitive explanation, we extracted a real-world case from the log of a popular recruitment platform and visualized them in Figure <ref>. Specifically, for a job seeker in the IT industry, given his Candidate Profile Prompt, Meta-path Prompt 1, and Meta-path Prompt 2, we further feed the LLM with a Task-specific Instruction belonging to point-wise recommendation. The LLM recommender is expected to output the decision of “Yes” or “No” to present the preference of the candidate. Challenge 1 corresponds to Case 1 and Case 2 in this figure. We can find that the same profile and task description with different behavior meta-paths forces LLM to make different predictions. Obviously, the diversity of technology stacks in Path 1 reveals the candidate's preference for full-stack development, and compared to Path 2, the background of path-related job seeker is more close to our candidate. Therefore, for this candidate, Path 1 is evidently more important for the final decision. For Challenge 2, if we construct the input sequence as Case 3, i.e., the order is meta-path prompt 1 → meta-path prompt 2, the LLM outputs the wrong answer “No”. But with a reverse path prompt order, the LLM is able to provide an accurate prediction. Similar to the widely known position bias of candidate items <cit.>, the position of context prompt clearly misleads the model to generate unstable outputs. To address the negative impact of these two challenges on the recommendation results, we carefully design an augmentation module specifically for the meta-path prompt, which consists of three concise but effective strategies. The first strategy is Shuffle Mechanism. When preparing domain data for the model's supervised fine-tuning (SFT), for each sample that contains multiple paths, we randomly shuffle the meta-path prompts in the sample m times. This data augmentation technique allows the model to learn semantic invariance patterns from different combinations of paths, leading to more stable results. It enhances the robustness of the model without introducing redundant information. The second strategy is Path Soft Selector. In this work, we regard the path sampling process in Behavior Meta-path Prompt Generation as a hard selection to heuristic selects semantically rich paths. The Path Soft Selector is used to further adaptively assign a learned weight distribution to the constructed meta-path prompts. Firstly, for a given meta-path prompt ℳ_i , i ∈{1, 2, ..., M} (M denotes the number of path), we obtain the LLM word embedding e_t of each token t ∈ℳ_i. So, the meta-path embedding H_i of ℳ_i can be obtained via a mean pooling as follows, H_i = 1/|ℳ_i|∑_t ∈ℳ_i e_t,     i ∈{1, 2, ..., M}. 
Then we propose a soft selector to calculate the weight for each meta-path embedding as: α_i = softmax (W_a H_i) = exp(W_a H_i)/∑_j=1^Mexp(W_a H_j), where W_a ∈ℛ^1 × d_e is a trainable parameter, and d_e denotes the dimension of E_i. To avoid the training collapse caused by changed value scale, we utilize a controller parameter λ∈ (0, 0.5] to update word embeddings in Eq. (<ref>). ê_t = e_t + λ·α_i e_t,    t ∈ℳ_i, Compared with most existing tuned or non-tuned LLM models, our prompt augmentation mechanism considers phrase-based attention to distinguish different paths. Actually, this simple solution can be transferred to other similar situations, such as weighed sentence embeddings. What's more, the third strategy is the Hybrid Mechanism which implements Shuffle Mechanism and Path Soft Selector simultaneously. This hybrid module is expected to address the both two challenges. We will evaluate these three strategies in the experiment section. §.§ LLM Instruction Tuning and Recommendation In this subsection, we will introduce the instruction tuning and recommendation process, which aims to align the used LLM with the recommendation task effectively and efficiently. For instruction tuning, we follow the general supervised fine-tuning method to minimize the autoregressive loss calculated by ground truth and corresponding LLM output. In our work, we mask the loss position of the prompt part. Specific prompt format, task-specific instruction, and ground truth have been introduced in the Methodology section. However, direct fine-tuning of the entire model can be computationally intensive and time-consuming. To address this, we propose a lightweight fine-tuning strategy using LoRA, which involves freezing the pre-trained model parameters and introducing trainable rank decomposition matrices into each layer of the Transformer architecture. This approach facilitates lightweight fine-tuning while reducing GPU memory consumption. And the final learning objective can be computed as follows: ℒ_f = max _Θ_L∑_(x, y) ∈𝒯∑_t = 1^|y|log(P_Θ+Θ_L(y_t| e_x, y_<t)) where Θ_L is the LoRA parameters and we only update LoRA parameters during the training process. Note that, different from existing fine-tuning frameworks for recommendation systems, we replace their token input x by the embedding e_x in Eq. (<ref>), since we update the prompt token embedding in the soft selector. As for the recommendation process, since the trained model has learned the output format of our defined ground truth after several SFT alignment steps. So our designed answer parsing is a simple way. We catch the softmax probability of label generation (the token used to denote label, such as “Yes./No.” or “[A]/[B]” in our work ) in the position of model's output corresponding to that in the ground truth. Along this line, the final prediction probability is calculated. § EXPERIMENTS To evaluate the motivation of our model, we conduct experiments to answer the following research questions: * RQ1: How much improvement can be achieved in the field of job recommendation by using recommendation systems based on generative large language models? * RQ2: How does the inclusion of behavior graph understanding affect the effectiveness of GLRec? * RQ3: How well does the meta-path augmentation module optimize the influence of path selection on decision-making and the bias introduced by prompts? §.§ Experimental Settings §.§.§ Datasets. 
We conduct experiments on the dataset Recr which is collected from a real-world and large online recruitment platform in China to assess recommendation methods. The dataset was constructed from the online logs and contained two kinds of behavior: Match and Interaction, corresponding to the matching set and interaction set mentioned in Problem Formulation. Besides, each candidate (and job) is associated with a descriptive text (i.e., resume or job description). The overall statistics are shown in Table <ref>. From the statistical data, it can be seen that job recommendation is a sparsely interactive scenario. The segmentation ratio of the training set and testing set is 5:1. Note that all sensitive or private information has been filtered out from the data. §.§.§ Baseline. To provide a comprehensive evaluation of our GLRec, we compare it against both LLM-based and traditional recommendation methods: * RobertaRec <cit.>: Candidate resume and JD text are encoded into fixed-length vectors using RoBERTa encoder and then used to calculate similarity scores, enabling personalized recommendations. * HGT <cit.>: Heterogeneous Graph Transformer is a powerful graph learning model which propagates the embeddings (initialized by RoBERTa) of nodes on behavior graph to capture high-order interactions. * TALLrec <cit.>: An advanced fine-tuned LLM recommender that uses instruction tuning on self-instruct data with users' historical interactions. The original backbone of its pre-trained model is LLaMA, and we change it by BELLE as the same as ours for the Chinese corpus. §.§.§ Evaluation Metric. We evaluate the two tasks using the conventional evaluation metric for explicit recommendation: Area Under the Receiver Operating Characteristic (AUC), as our two tasks can be transferred to binary classification problems and the metric captures the similarity between our setting and predicting user interest in a target item. We calculate the AUC score using the Scikit-learn package. §.§.§ Implementation Details. In this paper, we utilize BELLE-LLaMA-7B <cit.> as the pre-trained LLM backbone due to its expanded Chinese vocabulary. The instruction-tuning and model inference, using LoRa, are conducted on 4 Tesla A100 80G GPUs. Our approach incorporates the meta-path prompt and user-specific task instructions as model inputs for personalized recommendations. In our experiments, we investigate the impact of different numbers of paths, specifically [0, 1, 2, 3], for GLRec. Further details regarding the path prompt and instructions can be found in the Methodology section. Additionally, both RobertaRec and HGT have a token embedding dimension of 768, and HGT utilizes mean pooling to obtain the initial node embedding. For all methods, we optimize model parameters using the Adam <cit.> optimizer with a default learning rate of 1e-4, minimizing the MSE loss as the optimization objective. §.§ Performance Comparison In this section, we conduct performance comparison experiments on Recr to answer RQ1. As mentioned in the task definition in Section Methodology, the point-wise and pair-wise settings are implemented for evaluation. We also explore the influence of the OOD situation on different models. The experimental split settings of Random, OOD_position, and OOD_JD are introduced below: * Random: We randomly split the training and testing dataset based on the interaction record of each user. * OOD_position: The intersection on JD's “job position” feature between the training set and the testing set is empty. 
* OOD_JD: The intersection on JD items between the training set and the testing set is empty. Our experimental results are reported in Table <ref>. Overall, our proposed GLRec model achieves the best performance among all baselines. There are distinctive score gaps between GLRec and all baselines according to the improvement in Table <ref>. It demonstrates the superiority and adaptability of the large-scale model framework that incorporates relationship understanding and extensive semantic knowledge in the job recommendation scenario. What's even more exciting is that GLRec demonstrates impressive performance on OOD tasks. While its performance may decline slightly compared to the random setting, our model achieves a significant breakthrough compared to other models, which essentially result in near-random guessing. This phenomenon illustrates the necessity of utilizing knowledge association for model generalization. Go deeper into the part of baselines, the graph-based HGT outperforms the conventional dual-tower matching model (RobertaRec) in the context of job recommendation, which further proves the significance of learning relationships. What's more, we find that most models perform better on the pair-wise task than that of point-wise task. That is to say, directly determining whether an item is suitable is more challenging than comparing its priority with another item. §.§ The Impact of Meta-path Number In this experiment, we investigate the impact of meta-path number on the effectiveness of GLRec. Here we evaluate the point-wise performance on Random setting using the AUC metric for different numbers of meta-paths, ranging from 0 to 3. We also input the meta-path prompt (removing extra instruction text for feature conciseness) into RobertaRec for comparison. From the line graph of Figure <ref>, we can observe the following trends: * For GLRec, the results consistently increase as the number of meta-paths increases. This indicates that the inclusion of behavior graph understanding significantly improves the recommendation effectiveness of GLRec. * One notable observation is the significant improvement in GLRec's performance when transitioning from 0 meta-paths to 1 meta-path, and achieve the peak with only 2 meta-paths. The core increases from 0.71 to 0.88, indicating a substantial boost in recommendation effectiveness. This improvement suggests that the chain-of-thought ability of the LLM, inspired by in-context learning, plays a crucial role in GLRec's performance. * For RobertaRec, which does not incorporate behavior graph understanding, the values remain relatively stable across different meta-path numbers. The reason is that discriminative bert-based model lacks the ability to effectively understand prompts like generative LLMs. The results indicate that the inclusion of behavior graph understanding through meta-path prompt input has a significant positive impact on the effectiveness of GLRec. By leveraging the rich information in behavior graphs, GLRec gains a deeper understanding of user-item interactions, leading to improved recommendation performance, which provides the sufficient evidence for RQ2. §.§ The Impact of Bias of Meta-path Prompt Due to the sequential nature of language model input, the construction of multi-path prompt sequences results in a human-induced position bias, or order bias, which disrupts the final decision-making of LLM model. Additionally, this input pattern does not allow the model to learn the importance of semantic information in different paths. 
Therefore, we design a path shuffle mechanism, a path soft selector, and a hybrid mechanism combining both, to enhance the model's understanding of path information and mitigate this bias. The experimental results are reported in Figure <ref>. Here the metric is AUC and the task is the point-wise setting. According to Figure <ref>, all three strategies surpass the original input without path prompt augmentation in both sub-experiments, which demonstrates the necessity of path debiasing. Although the shuffle mechanism and the soft selector have their own advantages and disadvantages in the two different path-scale experiments, both improve the quality of the results. Moreover, the hybrid module combining both brings more stable results, indicating that it is indeed necessary for the model to account for the position of the input meta-paths as well as the influence of different path prompts on decision-making, in order to cope with actual recommendation scenarios. In principle, in other similar scenarios, such as when the LLM input consists of multiple sentence prompts without a prior order, our proposed shuffle mechanism and soft selector can both help enhance the robustness of model training. We will continue to explore this property in our future work. § CONCLUSION In conclusion, this paper proposed GLRec, the first job recommendation model that combines large language models (LLMs) with behavior graph understanding. By leveraging the semantic richness and massive knowledge of LLMs, GLRec improved the quality of job recommendations compared to traditional approaches. The meta-path prompt constructor encoded the behavior graph's interaction information into natural language prompts, enhancing personalized recommendations. Experimental results validated the effectiveness of GLRec, showcasing its superiority on real-world recruitment datasets. This research contributes to the advancement of LLM-based job recommendation and opens up new possibilities of graph data understanding for LLMs in personalized recommendations. However, there are still some areas of our work that need further optimization, such as larger-scale experimental validation and finer-grained module testing.
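To make the path augmentation module described above more concrete, a minimal sketch of the shuffle mechanism and the path soft selector is given below. It mean-pools the token embeddings of each meta-path prompt, computes the weights α_i = softmax(W_a H_i), and rescales the corresponding token embeddings by a factor (1 + λ α_i). The tensor shapes, the value of λ, and the toy inputs are illustrative assumptions; this is not the released GLRec code.

```python
import random
import torch

def shuffle_augment(path_prompts, m=3, seed=0):
    """Shuffle mechanism: m random orderings of the same set of path prompts."""
    rng = random.Random(seed)
    return [" ".join(rng.sample(path_prompts, len(path_prompts))) for _ in range(m)]

def path_soft_selector(token_emb, path_slices, W_a, lam=0.3):
    """
    token_emb   : (seq_len, d) word embeddings of the full prompt sequence
    path_slices : list of index tensors, one per meta-path prompt M_i
    W_a         : (1, d) trainable projection producing the attention scores
    lam         : controller parameter, 0 < lam <= 0.5
    """
    # H_i = mean of the token embeddings belonging to meta-path prompt M_i
    H = torch.stack([token_emb[idx].mean(dim=0) for idx in path_slices])      # (M, d)
    # alpha_i = softmax(W_a H_i) over the M paths
    alpha = torch.softmax(H @ W_a.T, dim=0).squeeze(-1)                       # (M,)
    out = token_emb.clone()
    for a_i, idx in zip(alpha, path_slices):
        out[idx] = out[idx] * (1.0 + lam * a_i)   # e_t <- e_t + lam * alpha_i * e_t
    return out

# Toy usage: two meta-path prompts occupying tokens 0-4 and 5-9, embedding dim 8.
emb = torch.randn(10, 8)
slices = [torch.arange(0, 5), torch.arange(5, 10)]
W_a = torch.randn(1, 8, requires_grad=True)
print(path_soft_selector(emb, slices, W_a).shape)
print(shuffle_augment(["<path prompt 1>", "<path prompt 2>"]))
```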
http://arxiv.org/abs/2307.04323v1
20230710033248
Optimal $(2,δ)$ Locally Repairable Codes via Punctured Simplex Codes
[ "Dong Wang", "Weijun Fang", "Sihuang Hu" ]
cs.IT
[ "cs.IT", "math.IT" ]
Optimal (2,δ) Locally Repairable Codes via Punctured Simplex Codes Dong Wang12, Weijun Fang124, and Sihuang Hu124 1 Key Laboratory of Cryptologic Technology and Information Security, Ministry of Education, Shandong University, Qingdao, 266237, China, [email protected] 2 School of Cyber Science and Technology, Shandong University, Qingdao, 266237, China, {fwj, husihuang}@sdu.edu.cn 4 Quancheng Laboratory, Jinan 250103, China ================================================================================================================================================================================================================================================================================================================================================================================================================== This research is supported in part by National Key Research and Development Program of China under Grant Nos. 2021YFA1001000 and 2022YFA1004900, the National Natural Science Foundation of China under Grant No. 62201322, the Natural Science Foundation of Shandong under Grant No. ZR2022QA031. (Corresponding Author: Weijun Fang) Locally repairable codes (LRCs) have attracted a lot of attention due to their applications in distributed storage systems. In this paper, we provide new constructions of optimal (2, δ)-LRCs. Firstly, by the techniques of finite geometry, we present a sufficient condition to guarantee a punctured simplex code to be a (2, δ)-LRC. Secondly, by using characteristic sums over finite fields and Krawtchouk polynomials, we construct several families of LRCs with new parameters. All of our new LRCs are optimal with respect to the generalized Cadambe-Mazumdar bound. § INTRODUCTION In order to ensure the reliability of nodes in large-scale distributed storage systems, the concept of locally repairable codes was first proposed in <cit.>. Let [n]={1,2,...,n}, for a linear code C of length n over the finite field , a code symbol c_i of C has locality r if there exists a subset R_i⊆ [n] such that i∈ R_i,|R_i|≤ r+1 and c_i is a linear combination of {c_j}_j∈ R_i\{i} over . If each symbol of a codeword in C has locality r, then C is called a locally repairable code with locality r or an r-LRC. However, when multiple node failures happens in a distributed storage system, the r-LRCs can not recover failed nodes successfully. To address this problem, Prakash et al. <cit.> extended the concept of r-LRCs to (r,δ)-LRCs which can tolerate any δ-1 erasures. A code symbol c_i of C has locality (r,δ) if there exists a subset R_i⊆ [n] such that i∈ R_i,|R_i|≤ r+δ-1 and d(C|_R_i)≥δ where C|_R_i is the punctured code on the set [n]\ R_i. The code C is called an (r,δ)-LRC if all code symbols have locality (r,δ). Obviously when δ=2,(r,δ)-LRCs reduce to r-LRCs. §.§ Known Results about (r,δ)-LRCs In <cit.>, analogous to the classical Singleton bound for general codes, the following Singleton-type bound for an (r,δ)-LRC with parameters [n,k,d] is given as d≤ n-k+1-(⌈ k/r⌉ -1)(δ -1). If an (r,δ)-LRC achieves the Singleton-type bound (singletonBound) with equality, then the code is called a Singleton-optimal (r,δ)-LRC. Due to its interesting algebraic structures and practical applications in distributed storage systems, several constructions of Singleton-optimal (r,δ)-LRCs have been proposed in <cit.>. Note that the Singleton-type bound is independent of the field size. 
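For readers who wish to experiment with these parameters, a small helper (an illustrative sketch, not part of the paper) that evaluates the Singleton-type bound is given below.

```python
from math import ceil

def singleton_type_bound(n, k, r, delta):
    """Upper bound d <= n - k + 1 - (ceil(k/r) - 1) * (delta - 1) for an (r, delta)-LRC."""
    return n - k + 1 - (ceil(k / r) - 1) * (delta - 1)

# Example: a [15, 3] code with (r, delta) = (2, 3), as in the 4-ary example later in the paper.
print(singleton_type_bound(15, 3, 2, 3))   # prints 11
```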
In <cit.>, Cadambe and Mazumdar derived the first field-dependent bound for q-ary r-LRCs with parameters [n,k,d], k≤min_s∈ℤ_+{sr+k_opt^(q)(n-s(r+1),d)}, where k_opt^(q)(n,d) is the maximum dimension of a q-ary linear code of length n and minimum distance d. The generalized Cadambe-Mazumdar bound was considered in <cit.>, which stated that for a q-ary (r,δ)-LRC with parameters [n,k,d], k≤min_s∈ℤ_+{sr+k_opt^(q)(n-s(r+δ-1),d)}. We call a code achieving the generalized C-M bound (CMbound_rdeltaLRC) with equality as a k-optimal (r,δ)-LRC. In <cit.>, the authors proved that the simplex code is a k-optimal 2-LRC. By deleting some columns from the generator matrix of the simplex code, several new families of k-optimal LRCs with localities 2 or 3 were proposed in <cit.> and <cit.>. In <cit.>, Luo et al. presented several binary k-optimal 2-LRCs by deleting or adding some columns from a binary simplex code and used character sums to determine their parameters. Motivated by works of <cit.>, Luo et al. constructed a family of p-ary linear codes and demonstrated that they are k-optimal 2-LRCs in some cases. Tan et al.<cit.> determined the locality of some known linear codes and showed that many of these codes are k-optimal. §.§ Our Contributions and Techniques In this paper, we focus on new constructions of (2, δ)-LRCs. We follow the construction of linear codes presented in <cit.>. This construction has been applied to secret sharing schemes or LRCs by many researchers <cit.>. It is more intuitive to describe the properties of (2,δ)-LRCs in the language of finite geometry, this converts the analysis of locality into how many lines pass through a point in projective geometry. From the finite geometry point of view, we give a simple but useful sufficient condition to guarantee a linear code to be a (2,δ)-LRC (see Theorem suffi_condi_lrc). We generalize some results proposed by Luo et al.<cit.> (see Theorems thm_generaliz_luo, thm_generaliz_luo_loc and loc_gen_luo) and Silberstein et al.<cit.> (see Theorem gen_wt2). In particular, we extend the p-ary linear codes presented in <cit.> to the q-ary linear codes, where p is a prime and q is the power of p, and determine their locality in some cases. Motivated by Silberstein's work on r-LRCs, we utilize Krawtchouk polynomials to determine the parameters of some punctured simplex codes. Specifically speaking, if the punctured columns from the generator matrix of a simplex code have certain weight, then determining the minimum distance of the punctured simplex code is equivalent to determining the minimum value of Krawtchouk polynomials. Then we construct two infinite families of k-optimal (2,q)-LRCs. Our constructions are generalizations of the results of <cit.>. Moreover, all our new LRCs are k-optimal with respect to the generalized C-M bound. The rest of this paper is organized as follows. In Section II, we recall a general construction of linear codes given by Ding et al.<cit.>, and some basic notation and results on finite geometry and Krawtchouk polynomials. In Section III, we consider (2,δ)-LRCs and present three infinite families of k-optimal (2,δ)-LRCs. Section IV concludes the paper. § PRELIMINARIES §.§ A General Construction of Linear Codes In this subsection, we describe a general construction of linear code which was given by Ding et al.<cit.>. Let m be a positive integer, q a power of some prime p, 𝔽 _q the finite field containing q elements and ^m the vector space over of dimension m. 
For any vector x= (x_1,x_2,⋯,x_m)∈^m, the Hamming weight of x is given as wt(x)=|{1≤ i≤ m:x_i≠ 0}|. We let tr_q^m/q(·) be the trace function from 𝔽_q^m to and tr(·) the absolute trace function from to . Ding et al.<cit.> established a general construction of linear codes, which says that if D={d_1,...,d_n} is a nonempty subset of 𝔽 _q^m, a q-ary linear code of length n is constructed by C_D={c_x=(tr_q^m/q(xd_1),⋯,tr_q^m/q(xd_n)):x∈𝔽 _q^m}. If D={ d_1, d_2, ⋯, d_n} is a nonempty subset of 𝔽^m_q, then the above construction (<ref>) can be modified to C_D={c_x=(x· d_1,...,x· d_n):x∈^m}, where x· d_i is the Euclidean inner product of x and d_i. Using character sums over finite fields, we can compute the parameters of those constructed codes. Assume that ω_p is the primitive p^th root of unity in the complex number field ℂ, then for a∈, the additive character χ_a from to is defined as χ_a(c)=ω_p^tr(ac), for all c∈. If a=0, then ∑_c∈χ_a(c)=q; otherwise ∑_c∈χ_a(c)=0 (<cit.>). The following two bounds are useful in subsequent sections. Let C be a q-ary [n,k,d] linear code, then n≥∑_i=0^k-1⌈d/q^i⌉. A linear code achieving the Griesmer bound with equality is called a Griesmer code. Let C be a q-ary code with M codewords, length n and minimum distance d. If qd>(q-1)n, then M≤qd/qd-(q-1)n. §.§ Finite Geometry The projective space PG(m-1, q) over 𝔽_q is the geometry whose points, lines, planes, ⋯ , hyperplanes are the subspaces of 𝔽^m_q of dimension 1, 2, 3, ⋯ , m-1. So, we also use a nonzero vector g ∈𝔽^m_q to denote the point in PG(m-1, q). Two nonzero vectors g_1 and g_2 are the same point in PG(m-1, q) if and only if g_1=λ g_2 for some λ∈𝔽^*_q. Note that when we replace g_i by λ g_i for some λ∈𝔽^*_q, the parameters of the code given by Eq. (<ref>) do not change. So we rewrite the code construction given in Eq. (<ref>) via the language of projective geometry as follows. Suppose D={ d_1, d_2, ⋯, d_n} is a nonempty subset of PG(m-1,q), then a q-ary linear code of length n is constructed by C_D={c_x=(x· d_1,...,x· d_n):x∈^m}. In this paper, we will use Eq. (<ref>) to construct optimal LRCs. Note that when D=PG(m-1,q), C_D is the famous simplex code. Thus in this sense, for general nonempty subset D, the code C_D is the punctured code of the simplex code. We let the points in PG(m-1,q) be the vectors in ^m that the first nonzero coordinate is 1 for simplicity. If A is a nonempty subset of [m], we let P_[m]=PG(m-1,q) and P_A be the subset of PG(m-1,q) that the coordinates outside of A are 0. It is easy to see that |P_A|=q^|A|-1/q-1,⋃_α∈^*α P_A=L_A^* where α P_A={αa:a∈ P_A} and L_A={(a_1,...,a_m)∈𝔽 _q^m:a_i=0 if i∉ A}. For any two subsets A_1,A_2 of [m], the intersection of P_A_1 and P_A_2 is equal to P_A_1∩ A_2, where P_∅ =∅ . §.§ Krawtchouk Polynomials In this subsection, we briefly review some basic results of Krawtchouk polynomials. Given positive integers n,q, and suppose 0 ≤ k ≤ n, the Krawtchouk polynomial of degree k is defined as<cit.> K_k(x;n,q)=K_k(x)=∑_j=0^k(-1)^jxjn-xk-j(q-1)^k-j. The following lemma is a slight modification of <cit.>. Let a and s be positive integers and x be a vector of length m over with wt(x)=a. Then we have ∑_y∈^m,wt(y)=sω _p^tr(x·y)=K_s(a;m,q). § (2,Δ)-LRCS FROM PUNCTURED SIMPLEX CODES In this section, we will provide several constructions of LRCs via punctured simplex codes. Firstly, we give a simple lemma which will be used to determine the locality of linear codes. Let δ≥ 2 be an integer, g_1,⋯,g_δ+1 be δ+1 distinct collinear points in PG(m-1,q). 
Let C be the linear code with the generator matrix G=[g_1 ... g_δ+1], then C is a q-ary [δ+1, 2, δ]-MDS code. Since any two of g_1,⋯,g_δ+1 are linearly independent and any three of g_1,⋯,g_δ+1 are linearly dependent, we have rank(G)=2. Thus (C)=2 and (C^⊥)=δ-1. On the other hand, G is the parity-check matrix of C^⊥, thus d(C^⊥) ≥ 3. By the Singleton bound, d(C^⊥) ≤δ+1-(δ-1)+1=3, hence C^⊥ is a [δ+1,δ-1,3]-MDS code. So C is a [δ+1,2,δ]-MDS code. In the following, we give a sufficient condition which guarantees that a punctured simplex code is a (2,δ)-LRC. Suppose 2≤δ≤ q and D is a subset of PG(m-1, q). If |D|≤q^m-1-1/q-1(q+1-δ)-1, then the code C_D^c given in Eq. (<ref>) is a q-ary (2,δ)-LRC, where D^c=PG(m-1,q)∖ D. For any point g∈ D^c, there are q^m-1-1/q-1 lines in PG(m-1,q) containing g, and each line has q+1 points. Since |D|≤q^m-1-1/q-1(q+1-δ)-1, by the Pigeonhole Principle, there exists at least one line L containing g, such that there are δ+1 points g_1= g, g_2, ⋯, g_δ+1 of L belonging to the subset D^c. By Lemma <ref>, d((C_D^c)_|E)=δ, where E={ g_1, g_2, ⋯, g_δ+1}. Hence the code C_D^c has (2, δ)-locality. When D=∅, then C_D^c is the q-ary simplex code. From Theorem <ref>, we know that the q-ary simplex codes have locality (2,q). In particular, to ensure the code C_D^c to be a 2-LRC, it only needs to satisfy that |D| ≤ q^m-1-2. Thus our method is simpler than that in <cit.>. Hyun et al.<cit.> constructed infinite families of binary Griesmer codes punctured by unions of projective spaces, and Luo et al.<cit.> obtained similar results of linear codes over 𝔽_p. In the following, we extend their results to general q-ary codes. Let m,t>1 be positive integers. Assume that A_1,...,A_t are nonempty subsets of [m] satisfying A_i∩ A_j=∅ for any i≠ j∈[t]. Let D=∪ _i=1^tP_A_i and D^c=P_[m]\ D, then the code C_D^c defined by Eq. (<ref>) is a q-ary linear code with parameters [q^m-1/q-1-∑_i=1^tq^|A_i|-t/q-1,m,q^m-1-∑_i=1^tq^|A_i|-1]. Furthermore, assume that |A_1|=...=|A_i_1|=s_1,|A_i_1+1|=...=|A_i_2|=s_2, ...,|A_i_u-1+1|=...=|A_i_u|=s_u where s_1<s_2<...<s_u. If max{i_1,i_2-i_1,...,i_u-i_u-1}≤ q-1, then C_D^c is a Griesmer code. Note that P_A_i∩ P_A_j=P_A_i∩ A_j=∅ for any i≠ j∈[t], so we have |D|=∑_i=1^t|P_A_i|=∑_i=1^tq^|A_i|-t/q-1, thus the length of C_D^c is q^m-1/q-1-∑_i=1^tq^|A_i|-t/q-1. Let x=(x_1,...,x_m) be any nonzero vector of ^m, then wt(c_x) =|D^c|-|{d∈ D^c|x· d=0}| =|D^c|-∑_d∈ D^c1/q∑_y∈ω_p^tr(yx· d) =q-1/q|D^c|-1/q∑_d∈ D^c∑_y∈^*ω_p^tr(yx· d) =q-1/q|D^c|-1/q∑_d∈^m^*ω_p^tr(x· d)+1/q∑_d∈ D∑_y∈^*ω_p^tr(yx· d). Note that ∑_d∈^m^*ω_p^tr(x· d) =∑_d_1∈⋯∑_d_m∈ω_p^tr(x_1d_1)⋯ω_p^tr(x_md_m)-1 =∏_i=1^m(∑_d_i∈ω_p^tr(x_id_i))-1=-1, where d=(d_1,...,d_m) and ∑_d∈ D∑_y∈^*ω_p^tr(yx· d) =∑_i=1^t∑_d∈ P_A_i∑_y∈^*ω_p^tr(x·(yd)) =∑_i=1^t∑_d∈^|A_i|^*ω_p^tr(x_A_i·d). As ∑_d∈^|A_i|^*ω_p^tr(x_A_i·d)= q^|A_i|-1,x_A_i=0 -1,x_A_i≠0 for 1≤ i≤ t, then the minimum weight is min_x∈^m^*wt(c_x)=q-1/q|D^c|+1/q-t/q =q^m-1-∑_i=1^tq^|A_i|-1. It is easy to prove that q^m-∑_i=1^tq^|A_i|>0 since ∑_i=1^t |A_i|≤ m. Thus wt(c_x)=0 if and only if x= 0, hence the dimension is m. Suppose ∑_i=1^tq^|A_i|-1=∑_i=g^hb_iq^i, where 0≤ b_i≤ q-1,i=g,...,h. Then ∑_i=0^m-1⌈q^m-1-∑_j=1^tq^|A_j|-1/q^i⌉=∑_i=0^m-1⌈q^m-1-∑_j=g^hb_jq^j/q^i⌉ =∑_i=0^m-1q^m-1-i-∑_i=0^g∑_j=g^hb_jq^j-i-∑_i=g+1^h∑_j=i^hb_iq^j-i =q^m-1/q-1-∑_i=1^tq^|A_i|-∑_i=g^hb_i/q-1. As max{i_1,i_2-i_1,...,i_u-i_u-1}≤ q-1,∑_i=g^hb_i=i_1+i_2-i_1+...+i_u-i_u-1=i_u=t. The length of C_D^c is q^m-1/q-1-|D|=q^m-1/q-1-∑_i=1^tq^|A_i|-∑_i=g^hb_i/q-1, hence the code C_D^c is a Griesmer code. 
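Theorem statements of this kind are easy to verify by brute force for small parameters. The sketch below is illustrative only; the choices q = 3, m = 4 and A_1 = {1}, A_2 = {2, 3} (written 0-indexed in the code) are ours. It enumerates the projective points, punctures the sets P_{A_i}, computes all codeword weights of C_{D^c}, and compares the length and minimum distance with the formulas above and with the Griesmer bound.

```python
import itertools
from math import ceil
import numpy as np

q, m = 3, 4                       # small prime-field example (assumed parameters)
A = [{0}, {1, 2}]                 # disjoint coordinate sets A_1, A_2 (0-indexed)

def projective_points(m, q):
    """Representatives of PG(m-1, q): first nonzero coordinate equal to 1."""
    pts = []
    for v in itertools.product(range(q), repeat=m):
        v = np.array(v)
        nz = np.flatnonzero(v)
        if nz.size and v[nz[0]] == 1:
            pts.append(v)
    return pts

def in_P_A(v, A_i):
    """v lies in P_{A_i} iff all coordinates outside A_i vanish."""
    return all(v[i] == 0 for i in range(len(v)) if i not in A_i)

points = projective_points(m, q)
D_c = [v for v in points if not any(in_P_A(v, A_i) for A_i in A)]
G = np.array(D_c).T               # generator matrix of C_{D^c}, columns indexed by D^c

weights = []
for x in itertools.product(range(q), repeat=m):
    x = np.array(x)
    if x.any():
        weights.append(np.count_nonzero((G.T @ x) % q))

n, k, d = G.shape[1], m, min(weights)
d_theory = q**(m - 1) - sum(q**(len(A_i) - 1) for A_i in A)
griesmer_length = sum(ceil(d / q**i) for i in range(k))
print(n, d, d_theory, griesmer_length)   # expect d == d_theory and n == griesmer_length
```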
We now investigate the locality of the codes given in Theorem thm_generaliz_luo. Keep the notation as in Theorem thm_generaliz_luo. If t=2 and |A_i|≤ m-2 for all i∈[t], then the code C_D^c has locality (2,q); if t≥ 3 and m≥ 4, then the code C_D^c has locality (2,q); if m>t=2,q>2 and |A_1|=m-1, then the code C_D^c has locality (2,q-1). Case 1: t=2, |A_i|≤ m-2,i=1,2. If m≥ 4, then |D|=q^|A_1|+q^|A_2|-2/q-1≤q^2+q^m-2-2/q-1≤q^m-1-q/q-1; if m=3, then |D|=q^|A_1|+q^|A_2|-2/q-1=2≤q^2-q/q-1. By Theorem <ref>, we can deduce that the code C_D^c has locality (2,q). Case 2: m>3 and t>2. Note that |D|=∑_i=1^tq^|A_i|-t/q-1≤q^m-t+1+(t-1)q-t/q-1=q^m-t+1+t(q-1)-q/q-1≤q^m-t+1+m(q-1)-q/q-1≤q^m-2+q^m-2(q-1)-q/q-1=q^m-1-q/q-1. By Theorem <ref>, we can deduce that the code C_D^c has locality (2,q). Case 3: m>t=2, |A_1|=m-1,|A_2|=1,q>2. Note that |D|=q^m-1+q-2/q-1≤2q^m-1-2/q-1-1. By Theorem <ref>, we can deduce that the code C_D^c has locality (2,q-1). Keep the notation as in Theorems thm_generaliz_luo and thm_generaliz_luo_loc, the code C_D^c is k-optimal LRC with respect to the bound (<ref>). Case 1: t=2, |A_i|≤ m-2,i=1,2. We let n'=q^m-1/q-1-q^|A_1|+q^|A_2|-2/q-1-q-1,d=q^m-1-q^|A_1|-1-q^|A_2|-1, according to the Plotkin bound, we have k_opt^(q)(n',d) ≤⌊log_qq(q^m-1-q^|A_1|-1-q^|A_2|-1)/q^2-2⌋ ≤⌊log_q(q^m-1-q^|A_1|-1-q^|A_2|-1)⌋ ≤ m-2. Utilizing the bound (CMbound_rdeltaLRC) with s=1, we obtain that k≤ 2+k_opt^(q)(n',d)≤ m. Therefore, the code C_D^c is k-optimal. Case 2: m>3 and t>2. The Griesmer code C_D^c has parameters [n=q^m-∑_i=1^tq^|A_i|+t-1/q-1,k=m,d=q^m-1-∑_i=1^tq^|A_i|-1]. When q>2, we have ⌈q^m-1-∑_i=1^tq^|A_i|-1/q^m-2⌉≥⌈ q-t-1+q^m-t/q^m-2⌉≥⌈ q-m-1+q^m-3/q^m-2⌉=q and ⌈q^m-1-∑_i=1^tq^|A_i|-1/q^m-1⌉=1. So we have n-1=∑_i=0^m-2⌈d/q^i⌉,n-q-1≥∑_i=0^m-3⌈d/q^i⌉. Using the Griesmer bound, we obtain that k_opt^(q)(n-1-q,d)≤ m-2, thus k≤ 2+k_opt^(q)(n-1-q,d)≤ m and the code C_D^c is k-optimal. When q=2, since A_1,...,A_t are mutually disjoint and max{i_1,i_2-i_1,...,i_u-i_u-1}≤ 1, we obtain that ⌈2^m-1-∑_i=1^t2^|A_i|-1/2^m-2⌉≥ 2. According to the similar arguments, k≤ 2+k_opt^(q)(n-1-q,d)≤ m and the code C_D^c is k-optimal. Case 3: m>t=2, |A_1|=m-1,|A_2|=1,q>2. We let n'=q^m-1/q-1-q^m-1+q-2/q-1-q,d=q^m-1-q^m-2-1, according to the Plotkin bound, we have k_opt^(q)(n',d) ≤⌊log_qq(q^m-1-q^m-2-1)/q^2-q-1⌋ ≤⌊log_q(q^m-1-q^m-2-1)⌋ ≤ m-2. Utilizing the bound (CMbound_rdeltaLRC) with s=1, we obtain that k≤ 2+k_opt^(q)(n',d)≤ m. Therefore, the code C_D^c is k-optimal. Let q=4,m=3 and A_1={1},A_2={2,3}, then C_D^c defined in Theorem thm_generaliz_luo is a 4-ary Griesmer code [15,3,11] with a generator matrix G=(G_1 G_2), where G_1=[ 1 1 1 1 1 1 1; 1 α α+1 0 1 α α+1; 0 0 0 1 1 1 1 ],G_2=[ 1 1 1 1 1 1 1 1; 0 1 α α+1 0 1 α α+1; α α α α α+1 α+1 α+1 α+1 ] and α is a primitive element in 𝔽_4. Then C_D^c is a (2, 3)-LRC. For instance, one can see that the columns (1,1,0)^T,(1,0,1)^T,(1,α,α+1)^T,(1,α+1,α)^T of G generate a [4,2,3]-code. Hence the first symbol of C_D^c has locality (2,3). Note that k^(4)_opt(11,11)=1, thus C_D^c attains the generalized C-M bound (<ref>). In the following, we consider another family of punctured simplex codes, which is motivated by<cit.>. Let A⊆ [m],|A|=s≥ 3,D={d∈ P_A: wt(d)=2},D^c=P_[m]\ D, then the code C_D^c defined in Eq. (<ref>) is a q-ary k-optimal (2,q)-LRC with parameters [n,k,d]=[q^m-1/q-1-(q-1)s2,m,q^m-1-⌊(2(s-1)(q-1)+q)^2/8q⌋] providing that s2(q-1)≤q^m-1-q/q-1 and 0<qd/qd-(q-1)(n-q-1)<q^m-1. 
There are (q-1)^2s2 vectors in L_A with Hamming weight 2, so |D|=(q-1)s2 since wt(a)=wt(λa) if and only if λ∈^*. ∑_d∈ D∑_y∈^*ω_p^tr(yx· d) =∑_d∈^s,wt(d)=2ω_p^tr(x_A·d) =K_2(a;s,q) =q^2/2a^2-(2(q-1)qs+q(2-q)/2)a +s2(q-1)^2. Thus the minimum weight of C_D^c corresponding to x is min_x∈^m^*wt(c_x) =⌈ q^m-1-(q-1)^2/qs2-4s(q-1)+(q-2)^2/8q⌉ =q^m-1-⌊(q-1)^2/qs2 +4s(q-1)+(q-2)^2/8q⌋ =q^m-1-⌊(2(s-1)(q-1)+q)^2/8q⌋ according to the Eq. (calcu_min_weight). According to Theorem suffi_condi_lrc, the code has locality (2,q). Using the Plotkin bound, we obtain that k_opt^(q)(n-q-1,d) ≤⌊log_qqd/qd-(q-1)(n-q-1)⌋ ≤ m-2. Thus the code C_D^c is a k-optimal (2,q)-LRC according to the generalized C-M bound (CMbound_rdeltaLRC). By the techniques of graph theory, the authors in <cit.> have obtained these codes as 2-LRCs. And they only proved that these codes are k-optimal 2-LRCs for s=3,m≥ 3,2≤ q≤ 14. However, in Theorem thm_weight2, we prove that all the codes in <cit.> are actually k-optimal (2,q)-LRCs. Let A_1,A_2,...,A_t⊆ [m],|A_i|=s_i≥ 3 for i∈[t],D_i={d∈ P_A_i|wt(x)=2},D=⋃_i∈[t] D_i,D^c=P_[m]\ D. If |A_i∩ A_j|≤ 1 for all i≠ j∈[t], and (q-1)∑_i=1^ts_i2≤q^m-1-q/q-1, then the code C_D^c defined in (gen_linear_constr) is a q-ary k-optimal (2,q)-LRC with parameters [n,k,d]=[q^m-1/q-1-(q-1)∑_i=1^ts_i2,m,q^m-1-Δ] providing that 0<qd/qd-(q-1)(n-q-1)<q^m-1 where Δ=⌊∑_i=1^t (2(s_i-1)(q-1)+q)^2/8q⌋. For all i≠ j∈[t], D_i∩ D_j={d∈ P_A_i∩ A_j|wt(d)=2}=∅ since |A_i∩ A_j|≤ 1. |D|=∑_i=1^t|D_i|=(q-1)∑_i=1^ts_i2. ∑_d∈ D∑_y∈^*ω_p^tr(yd· x) =∑_i=1^t∑_d∈ D_i∑_y∈^*ω_p^tr(yd· x) =∑_i=1^t∑_d∈^s_i,wt(d)=2ω_p^tr(x_A_i·d) =K_2(w_1;s_1,q)+...+K_2(w_t;s_t,q) where w_i=wt(x_A_i). Thus the weight of a codeword corresponding to some nonzero x is wt(c_x) =(q^m-1/q-1-(q-1)∑_i=1^ts_i2)q-1/q +1/q+1/q∑_i=1^tK_2(w_i;s_i,q) =q^m-1-(q-1)^2/q∑_i=1^ts_i2+1/q∑_i=1^tK_2(w_i;s_i,q). Note that the axis of symmetry of K_2(w_i;s_i,q) is 2(q-1)s_i+2-q/2q=1/q(1-s_i)+s_i-1/2≥s_i/2>1 for all prime power q, thus w_i≥1 when K_2(w_i;s_i,q) get the minimum value. Meanwhile, |A_i∩ A_j|≤ 1 for all i≠ j∈[t], so K_2(w_i;s_i,q) get the minimum value simultaneously for all i∈[t]. The minimum weight of c_x is min_x∈^m^*wt(c_x) =⌈ q^m-1-(q-1)^2/q∑_i=1^ts_i2. . -∑_i=1^t4s_i(q-1)+(q-2)^2/8q⌉ =q^m-1-⌊∑_i=1^t(2(s_i-1)(q-1)+q)^2/8q⌋. According to Theorem suffi_condi_lrc, the code has locality (2,q). Using the Plotkin bound, we obtain that k_opt^(q)(n-q-1,d) ≤⌊log_qqd/qd-(q-1)(n-q-1)⌋ ≤ m-2. The code C_D^c is a k-optimal (2,q)-LRC according to (CMbound_rdeltaLRC). Let q=2,m=5,A_1={1,2,3} and A_2={3,4,5}. By the SageMath software, the binary code C_D^c defined in Theorem gen_wt2 has parameters [25,5,12] and the generator matrix is G=(G_1 G_2) where G_1=[ 1 0 0 1 0 1 0 1 1 0 1 0; 0 1 0 1 0 0 1 1 0 1 1 0; 0 0 1 1 0 0 0 0 1 1 1 0; 0 0 0 0 1 1 1 1 1 1 1 0; 0 0 0 0 0 0 0 0 0 0 0 1 ],G_2=[ 1 0 1 1 0 1 0 0 1 0 1 0 1; 0 1 1 0 1 1 0 1 1 0 0 1 1; 0 0 0 1 1 1 0 0 0 1 1 1 1; 0 0 0 0 0 0 1 1 1 1 1 1 1; 1 1 1 1 1 1 1 1 1 1 1 1 1 ]. By the Plotkin bound, k^(2)_opt(22,12) ≤⌊log_2 12 ⌋=3. Hence C_D^c is a k-optimal 2-LRC achieving the C-M bound. § CONCLUDING REMARKS In this paper, we have investigated new constructions of optimal (2, δ)-LRCs via punctured simplex codes. By using the language of finite geometry, we propose a simple but useful condition to ensure that a linear code has (2,δ)-locality. According to some characteristic sums and Krawtchouk polynomials, we obtain several infinite families of q-ary (2,δ)-LRCs. All these codes are optimal with respect to the generalized C-M bound. 
We not only generalize some previous results on 2-LRCs to (2, δ)-LRCs, but also construct new optimal (2, δ)-LRCs that are not optimal in the sense of 2-LRCs. It would be interesting to find more optimal (2, δ)-LRCs and to generalize these results to (r, δ)-LRCs with r ≥ 3 in the future.
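As a quick, independent sanity check of the binary example above (q=2, m=5, A_1={1,2,3}, A_2={3,4,5}), the construction can also be reproduced in a few lines of plain Python/NumPy rather than SageMath. The sketch below deletes the weight-2 points supported inside A_1 or A_2 from the binary simplex code and recovers the parameters [25,5,12] by brute force; the column ordering differs from the generator matrix printed above, but the resulting code is the same up to coordinate permutation.

```python
import numpy as np
from itertools import product

m = 5
A1, A2 = {1, 2, 3}, {3, 4, 5}          # coordinate sets from the example (1-indexed)

# For q = 2 the projective points P_[m] are simply the nonzero vectors of F_2^m.
cols = [np.array(v) for v in product([0, 1], repeat=m) if any(v)]

def removed(c):
    """c belongs to D: a weight-2 point whose support lies inside A_1 or A_2."""
    supp = {i + 1 for i in range(m) if c[i]}
    return len(supp) == 2 and (supp <= A1 or supp <= A2)

G = np.array([c for c in cols if not removed(c)]).T   # generator matrix of C_{D^c}
k, n = G.shape

# Minimum distance by enumerating all 2^k - 1 nonzero messages (cheap for k = 5).
d = min(int((np.array(x) @ G % 2).sum())
        for x in product([0, 1], repeat=k) if any(x))

print(n, k, d)   # expected: 25 5 12, matching the [25, 5, 12] code in the text
```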
http://arxiv.org/abs/2307.05470v1
20230708213703
A Robust and Efficient Optimization Model for Electric Vehicle Charging Stations in Developing Countries under Electricity Uncertainty
[ "Mansur Arief", "Yan Akhra", "Iwan Vanany" ]
math.OC
[ "math.OC", "econ.GN", "q-fin.EC", "stat.AP" ]
Mansur M. Arief^1 (corresponding author, [email protected]), Yan Akhra^2, and Iwan Vanany^2
^1 Department of Aeronautics and Astronautics Engineering, Stanford University, 450 Serra Mall, Stanford, CA 94305, USA
^2 Department of Industrial and Systems Engineering, Institut Teknologi Sepuluh Nopember, Sukolilo, Surabaya 60111, East Java, Indonesia

The rising demand for electric vehicles (EVs) worldwide necessitates the development of robust and accessible charging infrastructure, particularly in developing countries where electricity disruptions pose a significant challenge. Earlier charging infrastructure optimization studies do not rigorously address such service disruption characteristics, resulting in suboptimal infrastructure designs. To address this issue, we propose an efficient simulation-based optimization model that estimates candidate stations' service reliability and incorporates it into the objective function and constraints. We employ the control variates (CV) variance reduction technique to enhance simulation efficiency. Our model provides a highly robust solution that buffers against uncertain electricity disruptions, even when candidate station service reliability is subject to underestimation or overestimation. Using a dataset from Surabaya, Indonesia, our numerical experiment demonstrates that the proposed model achieves a 13% higher average objective value compared to the non-robust solution. Furthermore, the CV technique successfully reduces the simulation sample size by up to a factor of 10 compared to Monte Carlo, allowing the model to be solved efficiently using a standard MIP solver. Our study provides a robust and efficient solution for designing EV charging infrastructure that can thrive even in developing countries with uncertain electricity disruptions.

* Proposed a simulation-based optimization model to design optimal EV charging station infrastructure that can withstand uncertain power supply in developing countries.
* Used the control variates (CV) variance reduction technique to enhance simulation efficiency and provide a highly robust solution that buffers against uncertain electricity disruptions.
* Numerical experiments using data from Surabaya, Indonesia showed that the proposed model achieves a 13% higher average objective value compared to the non-robust solution.
* The enhanced simulation efficiency through CV reduces the required sample size by a factor of 10 compared to Monte Carlo simulations.
* The proposed model showcases the potential to provide a robust solution to the challenges associated with EV charging infrastructure under random electricity disruptions in developing countries.

Keywords: electric vehicle; charging station; developing country; uncertainty; variance reduction

§ INTRODUCTION The growing global demand for electric vehicles (EVs) has brought to the forefront the need for reliable and easily accessible EV charging infrastructure. According to a report by the International Energy Agency, worldwide EV demand has grown exponentially in recent years as numerous governments set ambitious goals for electrifying their transportation systems. In 2010, there were only approximately 17,000 EVs on the world's roads. In 2019, for instance, China led the global EV market with more than 1 million EVs sold that year (more than 50% of global EV demand), followed by Europe as a whole with 561,000 cars sold and the USA with 327,000 cars sold.
This trend is projected to persist in the upcoming years <cit.>. Developing countries are also striving to promote EV adoption, coupled with greener electricity <cit.> to expedite the achievement of their sustainability goals. For example, Indonesia has set an ambitious target of having 20% of all automobile sales be electric by 2025, with a long-term goal of achieving fully electrified transportation by 2050 <cit.>. However, developing countries like Indonesia face significant infrastructure constraints that must be addressed to achieve these goals. The availability of EV charging infrastructure is a crucial issue that must be addressed to support the widespread adoption of EVs. In Indonesia, there were only 240 public EV charging points across the country as of 2021 <cit.>. However, an estimated 31,000 EV charging stations are required throughout the country to support sustainable electrification of vehicles in the country <cit.>. This lacking infrastructure issue is not unique to Indonesia and is faced by many other developing countries to support the growth of EV adoption. Tackling this challenge by designing a convenient and reliable EV charging network is, however, a very complex task. To ensure a convenient location, it is essential to consider factors such as population density or potential EV demand distribution <cit.>. However, in major cities in developing countries, finding suitable land for charging stations may be challenging due to limited space availability. Furthermore, in developing countries, service uncertainty, including electricity, is one of the most significant issues. Implementing smart charging strategies <cit.> becomes hardly feasible due to electricity supply uncertainty. Outages and other electricity disruptions often occur, posing a significant problem for users who demand reliable service. To address this challenge, our study proposes a robust solution for designing EV charging infrastructure that accounts for the challenge of electricity disruptions in developing countries. We introduce a simulation-based optimization model that estimates the service reliability of candidate charging stations and incorporates this information into the objective function and constraints. This approach offers a versatile solution by utilizing simulation approaches compared to previous works that assume available disruption probability models. Additionally, we employ a variance reduction technique called control variates (CV) to enhance simulation efficiency, reducing the required sample size by up to 10 times compared to naive Monte Carlo (MC) simulations. This results in an efficient mixed-integer programming (MIP) model that solves for optimal solutions that strike the balanced objective between minimizing the total cost of operating and investing in the charging infrastructure and providing high-quality service to the public. Fig. <ref> illustrates the comparison between the traditional modeling approach without variance reduction vs. the proposed framework that utilizes the variance reduction technique to achieve a tighter confidence interval (hence much more precise output) with less computational burden. Our work contributes in three key ways. Firstly, we propose a model that specifically addresses the critical issue of electricity disruption in EV charging station planning, particularly in developing countries. 
Secondly, we integrate the estimation of disruption probabilities into our model, providing a more data-driven approach compared to previous works that assumed disruption probability models were available a priori. Finally, our study demonstrates the robustness of the proposed model in solving EV charging infrastructure problems by comparing its performance to a non-robust model, even when disruption probabilities are slightly under- or over-estimated. Our numerical experiment, based on an EV dataset from Surabaya, Indonesia, shows that our model achieves a 13% higher average objective value compared to the non-robust solution, highlighting its superior performance in helping build sustainable and thriving EV ecosystems in both developed and developing countries in the years to come. The rest of this paper is structured as follows. In Section <ref>, we provide a concise overview of the literature related to the optimization of EV charging infrastructure. We then present the proposed model formulation in Section <ref>, together with the approach incorporating the CV technique to estimate the service reliability (i.e., the complement of the disruption probability). In Section <ref>, we describe the experiment settings and discuss the main findings in Section <ref>. Finally, we conclude our work in Section <ref>. § LITERATURE REVIEW In this section, we briefly review earlier works directly related to the planning of EV charging infrastructure and relevant case studies that motivate our approach. Examining these earlier works offers insight into the evolution of methodologies, leading to the proposed work, which uniquely introduces a combination of stochastic modeling and variance reduction techniques. The summary is provided in Table <ref>. The planning of EV charging infrastructure can be viewed as a facility location problem, which aims to minimize an objective function subject to constraints related to the desired performance of the network facilities. Early studies, including those by <cit.> and <cit.>, adopted deterministic models focusing on minimizing charging stations and development costs, respectively. <cit.> sought to maximize service demand, whereas <cit.> aimed to minimize infrastructure and access costs. Similar objectives were pursued by <cit.>, <cit.>, and <cit.>, with deterministic models being the common methodology. Several other studies, like those conducted by <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>, continued the trend of deterministic models, exploring various aspects of EV charging station optimization. Other researchers, including <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>, focused on minimizing the number of charging stations or the operating cost, or on maximizing the EV flow coverage. Another line of work integrates charging infrastructure into the smart-grid design <cit.> or with other renewable energy sources such as solar cells <cit.>. While this approach provides an integrated solution to renewable energy issues and amplifies the positive impact of EVs on the environment, it may not be practical for urban areas in developing countries. A comprehensive review of charging infrastructure designs is presented by <cit.>, emphasizing the need for increasingly detailed modeling that accounts for randomness and variability. However, there is a lack of rigorous real-world case studies that emphasize uncertainty quantification in the modeling framework. Several case studies have been conducted in both developed and developing countries.
For example, <cit.> studied the problem of slow-charging technology in Lisbon, where vehicles are often parked overnight. In contrast, <cit.> considered both fast- and slow-charging technologies, focusing on robustly covering all demands and avoiding partial fulfillment in the city of Toronto. Another case study was conducted by <cit.> using a GIS-based model in Ankara and adopting a fuzzy approach. A city-scale simulation was developed for Singapore by <cit.>, focusing on the trade-off between cost minimization and customer accessibility maximization. Lastly, <cit.> proposed a set covering model for EV charging stations in Surabaya but ignored electricity disruption and only provided redundant demand coverage to provide a buffer against uncertainty, resulting in an overly simplified model and sub-optimal solutions. In light of these studies, it is clear that the EV facility location problem is a complex and multifaceted issue that requires a tailored approach for different regions and contexts. Developing countries, in particular, may face unique challenges, such as power electricity disruptions, that must be considered in the planning and design of EV facilities. Such disruptions and uncertainty are addressed only in a handful of studies. For instance, <cit.> uses a multi-criteria decision-making approach aiming to strike a balanced solution against flooding disruption that maximizes the charging convenience, minimizes the impact of flood hazards, and minimizes the impact of existing charging stations using TOPSIS. <cit.> integrates the electric bus charging stations with photovoltaic and energy storage systems using a two-stage stochastic programming model, enabling them to incorporate the uncertainty of PV power outputs. <cit.> optimizes the size of the energy storage system considering the annualized cost, penalty cost for buying power during peak hours, and penalty cost for resilience violations. Other works that consider stochastic modeling include <cit.>, which directly use either structure of the stochastic models or simulations to represent elements of uncertainty into their optimization models. The caveat is that the resulting model can be extremely hard to solve, especially when a solution with high confidence is desired. The proposed work extends the use of stochastic modeling and introduces control variates <cit.>, a variance reduction technique that can speed up a simulation-based optimization model, to the field. We propose an approach that addresses the challenges of the need to account for electricity disruptions via simulation and controlling the resulting objective value uncertainties by adjusting the simulation sample size. Simulation modeling enables the modeler to adjust the degree of modeling fidelity, depending on the prior knowledge available, and can be easily verified by estimating the probability of electricity disruptions and comparing it with available historical data. The resulting simulation-based robust model can be accelerated using variance reduction techniques (i.e., control variates), and it offers a more accurate and practical approach for planning and designing EV charging infrastructure that considers uncertainty and disruptions. The integration of stochastic modeling and control variates sets this work apart from previous research, potentially paving the way for more efficient and effective EV charging station location optimization solutions. 
§ MODEL FORMULATION In this section, we describe our modeling components, including the decision variables, objective function, constraint set, model benchmarks (robust and non-robust models), and the CV method we employ to improve simulation efficiency.

§.§ Decision Variables We consider a set of demand nodes I and supply nodes J, representing sub-district centers and candidate charging station locations in the region under study. We also consider a set of vehicle types K, representing the different vehicle modalities that residents use for commuting (here, we consider two modalities: electric motorcycles and electric cars). The average time to travel from node i ∈ I to node j ∈ J is denoted by d_ij. A threshold parameter d_max is introduced as an upper bound for this travel time, serving as a proxy to study the robustness of the solution w.r.t. consumers' time-to-travel for charging. The decision variables include binary variables x_j, indicating whether charging station candidate j is selected, and y_ij, indicating whether demand node i is assigned to be served by charging station j. In addition, we use integer decision variables v_ij^k and u_j, denoting the number of electric vehicles of type k from node i charged at node j and the number of charging connectors installed at node j, respectively.

x_j = 1 if station j ∈ J is selected, 0 otherwise;
y_ij = 1 if node i ∈ I is assigned to node j ∈ J, 0 otherwise;
v_ij^k ∈{0, 1, ⋯}, ∀ i ∈ I, j ∈ J, k ∈ K;
u_j ∈{0, 1, ⋯}, ∀ j ∈ J.

Each opened station j incurs a daily cost h_j and can only accommodate q_j charging connectors due to limited space. Each charging connector incurs a daily operational cost g and has a limited daily charging capacity c_j. A vehicle of type k takes e_k kWh of energy and t_k time to charge using fast-charging technology. We use the electricity price, denoted by r, to convert the energy used into monetary value.

§.§ Objective Function The objective is to maximize daily profits under random disruption events at each station, i.e., the revenue from all undisrupted stations minus operational and investment costs. We add a penalty term for any customer demand left unmet due to disruptions, which also allows us to study appropriate incentive mechanisms for achieving more robust solutions in the ablation study. To this end, we consider each charging station j ∈ J to have a reliability p_j = ℙ(Z_j ≤ z_j) = 𝔼[𝕀(Z_j ≤ z_j)]. The disruption events are simulated using the random variable Z = [Z_j]_∀ j ∈ J∼ q. Z_j represents the underlying state triggering an electricity disruption at station j whenever it exceeds some threshold z_j. In practice, electricity disruption events may occur due to extreme weather, spiking demand, or fallen trees <cit.> (in which case Z_j might represent wind speed, cumulative region-wide demand, or the weight of fallen tree branches hitting electrical equipment, respectively, and z_j is the corresponding threshold the equipment can withstand). <cit.> presents a review of how EV charging infrastructures strain electricity grids, which, in turn, exacerbates the likelihood of electricity outages, especially in developing countries. With this consideration, if we have prior information about p_j, ∀ j ∈ J, the objective function can be formulated as max ∑_i ∈ I∑_j ∈ J∑_k ∈ Kr e_k p_j v_ij^k_revenue - s d_ij (1-p_j) v_ij^k_penalty - ∑_j ∈ J(g u_j + h_j x_j)_total cost.
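To make the structure of this objective concrete, the following is a minimal sketch that evaluates the reliability-weighted profit for a fixed candidate solution. The function name, the penalty symbol s_pen, and all numbers (demands, travel times, prices, reliabilities) are hypothetical placeholders for illustration, not values from the Surabaya dataset; the function simply mirrors the revenue, penalty, and cost terms above.

```python
import numpy as np

def expected_daily_profit(v, u, x, p, r, e, s_pen, dist, g, h):
    """Reliability-weighted revenue minus unmet-demand penalty minus fixed costs.

    v[i, j, k] : vehicles of type k from node i charged at station j
    u[j], x[j] : connectors installed / station-open indicator
    p[j]       : station reliability P(Z_j <= z_j)
    dist[i, j] : travel time d_ij; s_pen is the unmet-demand penalty rate
    """
    revenue = np.einsum('k,j,ijk->', r * e, p, v)          # sum r*e_k*p_j*v_ijk
    penalty = np.einsum('ij,j,ijk->', s_pen * dist, 1.0 - p, v)
    cost = np.sum(g * u + h * x)
    return revenue - penalty - cost

# Tiny hypothetical instance: 2 demand nodes, 2 stations, 1 vehicle type.
v = np.array([[[30], [0]], [[0], [20]]])                   # demand assignment
profit = expected_daily_profit(
    v, u=np.array([2, 1]), x=np.array([1, 1]), p=np.array([0.95, 0.90]),
    r=1.5, e=np.array([40.0]), s_pen=0.5,
    dist=np.array([[10.0, 25.0], [20.0, 8.0]]),
    g=5.0, h=np.array([100.0, 80.0]))
print(profit)
```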
On the other hand, if p_j is not available, then we can use simulation to estimate the following objective: max ∑_i ∈ I∑_j ∈ J∑_k ∈ K r e_k v_ij^k 𝔼[𝕀(Z_j≤ z_j) ]_revenue - s d_ij v_ij^k 𝔼[𝕀(Z_j > z_j) ]_penalty - ∑_j ∈ J(g u_j + h_j x_j)_total cost, where 𝕀(Z_j≤ z_j) is a binary indicator of whether a disruption occurs: 𝕀(Z_j≤ z_j) = 1 if Z_j≤ z_j, and 0 otherwise.

Monte Carlo (MC) simulation is one of the most practical methods to achieve this. MC uses n i.i.d. copies of the random variable to estimate the expectation. For each j ∈ J, we first generate Z_j1, Z_j2, ⋯, Z_jn. We then check whether the disruption event is triggered at the l-th sample and record the binary indicators I_jl = 𝕀 (Z_jl≤ z_j). Then, we use the binary indicators in our final (robust) objective function: max ∑_i ∈ I∑_j ∈ J∑_k ∈ K∑_l=1^n 1/n( r e_k v_ij^k I_jl_revenue - s d_ij v_ij^k (1-I_jl)_penalty) - ∑_j ∈ J (g u_j + h_j x_j)_total cost.

We call our model the Robust Model in the experiments, to contrast with the original (Non-Robust) model proposed by <cit.>, which is obtained by setting I_jl = 1 for all j ∈ J, l ∈{1, 2, ⋯, n} in (<ref>) during optimization. The solutions of both models are evaluated under random disruption events generated using a different random seed.

§.§ Constraints The maximization of the objective function in (<ref>) is subject to the following set of constraints:
s.t. ∑_k ∈ K v_ij^k ≤ y_ij M, ∀ i ∈ I, j ∈ J,
d_ij y_ij≤ d_max, ∀ i ∈ I, j ∈ J,
∑_j ∈ J v_ij^k = w_i^k, ∀ i ∈ I, k ∈ K,
∑_i ∈ I∑_k ∈ K t_k v_ij^k ≤ c_j u_j, ∀ j ∈ J,
u_j ≤ x_j q_j, ∀ j ∈ J,
∑_i ∈ I y_ij≤ x_j M, ∀ j ∈ J,
∑_j ∈ J y_ij≥ 1, ∀ i ∈ I,
∑_j ∈ J x_j ≤ N,
∑_j ∈ J∑_l=1^n 1/n y_ij I_jl≥p̅, ∀ i ∈ I,
∑_j ∈ J∑_l=1^n 1/n v_ij^k I_jl≥∑_j ∈ J v_ij^kp̅, ∀ i ∈ I, k ∈ K.

In the above formulation, constraint (<ref>) ensures that charging stations can only charge vehicles assigned to them. Constraint (<ref>) ensures that the travel time for consumers does not exceed the set threshold d_max. Constraint (<ref>) ensures all charging demands are fulfilled, where w_i^k denotes the number of vehicles of type k to charge at demand point i. Constraint (<ref>) ensures that the charging capacity required to fulfill each station's assigned demand does not exceed the installed capacity. Constraint (<ref>) restricts the number of charging connectors installed at each station. Constraint (<ref>) ensures that demands are assigned only to opened stations. Constraint (<ref>) guarantees that at least one station covers each demand. Constraint (<ref>) limits the maximum number of stations to open. Finally, constraints (<ref>)-(<ref>) ensure that the probability that at least one of the charging stations serving a given demand is not under an electricity outage is greater than or equal to p̅, assuming that outages at different stations are independent.

§.§ Robust vs. Non-Robust Model The consideration of p_j in our formulation is part of our attempt to boost the robustness of the original model and to address the unique challenges and characteristics of urban areas in developing countries. The Non-Robust Model ignores the disruption probability, resulting in a more simplified model. Our formulation is general, in the sense that we can recover the earlier model by setting I_jl = 1 for all j ∈ J, l ∈{1, 2, ⋯, n}. This earlier model ignores disruption uncertainty and often results in an overly cost-optimized solution that can suffer serious performance degradation when disruptions occur.
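For readers who want to prototype the Robust Model described above, the sketch below shows one possible way to assemble the sampled-indicator objective together with a representative subset of the constraints (demand fulfillment, connector capacity, connector limit, and station budget) using the open-source PuLP interface to a standard MIP solver. The instance data, the Gaussian disruption model, and the omission of the assignment, travel-time, and reliability-coverage constraints are all simplifications for illustration; this is not the exact implementation or dataset used in the paper.

```python
import numpy as np
import pulp

rng = np.random.default_rng(1)
I, J, K, n = range(3), range(2), range(1), 200           # tiny hypothetical instance
w = {(i, k): 20 for i in I for k in K}                    # charging demand w_i^k
r, e, s_pen, g, h = 1.5, [40.0], 0.5, 5.0, [100.0, 80.0]
t, c, q_conn, N = [45.0], 1440.0, 8, 2
d = rng.uniform(5, 30, size=(len(I), len(J)))             # travel times d_ij
I_sim = (rng.normal(100, 15, size=(len(J), n)) <= 140).astype(float)  # indicators I_jl
p_hat = I_sim.mean(axis=1)                                # sample-average reliability

prob = pulp.LpProblem("robust_ev_siting", pulp.LpMaximize)
x = pulp.LpVariable.dicts("x", J, cat="Binary")
u = pulp.LpVariable.dicts("u", J, lowBound=0, cat="Integer")
v = pulp.LpVariable.dicts("v", [(i, j, k) for i in I for j in J for k in K],
                          lowBound=0, cat="Integer")

# Objective: sampled reliability weights the revenue and penalty terms.
revenue_minus_penalty = pulp.lpSum(
    (r * e[k] * p_hat[j] - s_pen * d[i, j] * (1 - p_hat[j])) * v[i, j, k]
    for i in I for j in J for k in K)
fixed_cost = pulp.lpSum(g * u[j] + h[j] * x[j] for j in J)
prob += revenue_minus_penalty - fixed_cost

for i in I:
    for k in K:
        prob += pulp.lpSum(v[i, j, k] for j in J) == w[i, k]       # meet all demand
for j in J:
    prob += pulp.lpSum(t[k] * v[i, j, k] for i in I for k in K) <= c * u[j]
    prob += u[j] <= q_conn * x[j]                                   # connector limit
prob += pulp.lpSum(x[j] for j in J) <= N                            # station budget

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[prob.status], pulp.value(prob.objective))
```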
Fig <ref> (left) shows a non-robust solution where only two stations are selected to cover 30+ demand nodes in the city of Surabaya. In this solution, many demand nodes are only covered by one station (no redundancy), and thus, when an electricity disruption hits the charging station, the charging demands will not be met and the residents are served very poorly. Our proposed robust model aims to incorporate the disruption uncertainty and optimizes the location and capacity of EV charging stations while balancing the trade-offs between consumer service level and economic profits. This incorporation maintains a linear objective function and linearized constraints, which still yields an MIP model that can solve efficiently using standard solvers. §.§ Improving the Efficiency of Disruption Probability Estimation While the proposed objective function in (<ref>) is still linear, the sample size n required to achieve high statistical confidence might blow up as the disruption probabilities 1 - p_j, ∀ j ∈ J become lower (e.g., as the utilities in developing countries mature). Note that our objective essentially estimates p_j by generating enough values Z_j1, Z_j2, ⋯, Z_jn, and compute p̂_j = 1/n∑_l=1^n 𝕀(Z_jl≤ z_j) which can be shown to be unbiased and converges to p_j. Under the assumption that Z = [Z_j]_∀ j ∈ J∼ q are independently and identically distributed, and z_j, ∀ j ∈ J are fixed threshold values, estimator p̂_j is an unbiased and consistent estimator of p_j. The proof is straightforward but is provided here for completeness. Unbiasedness: 𝔼[p̂_j] = 𝔼[ 1/n∑_l=1^n 𝕀(Z_jl≤ z_j) ] = 1/n∑_l=1^n 𝔼[ 𝕀(Z_jl≤ z_j) ] = 1/n∑_l=1^n p_j = p_j where the first equality follows from the definition of p̂_j, the second equality follows from the linearity of the expectation operator to the sum of indicator functions, and the third line follows from the fact that Z_jl are independently and identically distributed, and the third equality follows from the definition of p_j. Consistency: We know that by the law of large numbers, for any ϵ > 0, lim_n →∞ℙ(|p̂_j - p_j| ≥ϵ) = 0. Hence, p̂_j converges in probability to p_j, and thus it is a consistent estimator of p_j. Supposed that we already have an estimate p̂_j, ∀ j ∈ J. We can now plug the estimate into our optimization problem, giving max ∑_i ∈ I∑_j ∈ J∑_k ∈ Kr e_k p̂_j v_ij^k_revenue - s d_ij (1-p̂_j) v_ij^k_penalty - ∑_j ∈ J(g u_j + h_j x_j)_total cost s.t.  Constraint (<ref>)-(<ref>) ∑_j ∈ J y_ijp̂_j ≥p̅, ∀ i ∈ I ∑_j ∈ J v_ij^k p̂_j ≥∑_j ∈ J v_ij^kp̅, ∀ i ∈ I, k ∈ K . Note that this formulation using p̂_j, ∀ j ∈ J is equivalent to the robust model using indicator variables I_jl, ∀ j ∈ J, l ∈{1, 2, ⋯, n} earlier that uses the objective function (<ref>). §.§.§ Estimating p̂_j to Sufficient Accuracy While p̂_j is unbiased and consistent, the sample size to ensure a precise estimate can be arbitrarily large, especially when we want a higher accuracy (e.g. when the disruption rate 1-p_j is tiny, such as in developed countries where utility service has high reliability). Suppose we want an δ-accuracy and 1-α confidence level to estimate p_j = 0.9999. Then, we can use Hoeffding's inequality to determine the sample size. According to Hoeffding's inequality, for any δ > 0, the probability that the estimate deviates from the true value by more than δ is bounded by ℙ(|p̂_j - p_j| > δ) ≤ 2e^-2nδ^2, where n is the sample size. Hence, if we want to ensure 1-α confidence level, we set 2e^-2nδ^2 = α, and solve for n n = 1/2δ^2ln(2/α). 
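A minimal sketch of the MC estimator p̂_j and the Hoeffding-based sample size is given below. The Gaussian parameters (μ_j, σ_j, z_j) and the accuracy targets are hypothetical placeholders chosen only to illustrate the formulas above.

```python
import numpy as np

def hoeffding_sample_size(delta, alpha):
    """Smallest n guaranteeing P(|p_hat - p| > delta) <= alpha via Hoeffding."""
    return int(np.ceil(np.log(2 / alpha) / (2 * delta**2)))

rng = np.random.default_rng(0)
mu_j, sigma_j, z_j = 100.0, 15.0, 140.0            # hypothetical station-j parameters

n = hoeffding_sample_size(delta=0.01, alpha=0.05)  # ~18,445 samples for 1% accuracy
Z = rng.normal(mu_j, sigma_j, size=n)              # simulated states Z_j1, ..., Z_jn
p_hat = np.mean(Z <= z_j)                          # unbiased MC estimate of p_j
print(n, p_hat)
```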
For instance, if we want an accuracy of δ = 0.0001 and a confidence level of 1-α = 0.95, then the required sample size is n = 1/2(0.0001)^2ln(2/0.05) ≈ 114,763, which is quite huge. Figure <ref> shows the sample size (in a log_10 scale) for various α and δ values. Note, however, that this is an upper bound and in practice, this sample size is not always necessary. If we have N := |J| stations and each p_j has to be estimated using n≈ 114,763 samples, then we will need N × 114,763 samples to estimate the samples prior to solving the optimization problem, which can be overly burdensome if each simulation runs considers complex systems. Thus, we seek ways to improve efficiency and reduce the variance of the estimator. §.§.§ Improving Efficiency via Control Variates One way to improve the estimation efficiency and thus reduce the sample size is through the use of control variates (CV) <cit.>. CV involves introducing a new variable that is correlated with the random variable of interest and can be easily estimated. The CV is then used to adjust the estimate of the random variable to improve its efficiency by reducing the variance of the estimator using the cheaper-to-compute random variable. In our case, we can use CV to estimate p_j = ℙ(Z_j ≤ z_j). Let g(Z_j) be a function of Z_j that is easy to compute. Specifically, if we consider Gaussian q = N(μ, σ) and Z_j ∼ q, we can use g(z) = Φ(z) the CDF of the standard normal distribution as the CV to compute g(Z_j). The CV estimator for p_j is computed as p̂_j = 1/n∑_l=1^n 𝕀(Z_jl≤ z_j) + π_j ( 𝕀 (X_jl≤z̅_j)-g(z̅_j) ) where Z_jl is the l-th sample from the distribution q, X_jl's are standard normal random variables correlated with Z_jl, and z̅_j are the scaled version of z_j chosen to threshold X_jl. Finally, π_j is chosen to minimize the variance π_j = - Cov( ∑_l=1^n 𝕀(Z_jl≤ z_j), ∑_l=1^n 𝕀(X_jl≤z̅_j) )/Var(∑_l=1^n 𝕀(X_jl≤z̅_j)). We can show that the CV estimator is unbiased and achieves variance reductions in the following remarks. The reduction in variance, subsequently, allows us to reduce the sample size to achieve the same level of δ and α. The CV estimator (<ref>) is unbiased for p_j. The proof is straightforward, showing 𝔼[p̂_j] = p_j. 𝔼[p̂_j] = 1/n∑_l=1^n𝔼[𝕀(Z_jl≤ z_j)] +π_j (1/n∑_l=1^n𝔼[ 𝕀(X_jl≤z̅_j)]-g(z̅_j) ) = 1/n∑_l=1^np_j + π_j (1/n∑_l=1^n g(z̅_j) ) - π_j g(z̅_j) = p_j. Assuming we can generate highly correlated random variables Z_jl and X_jl simultaneously and choose the optimal π_j (<ref>), the CV estimator (<ref>) attains a variance reduction. Note that the variance without using CV is Var(p̂_j) = 1/n^2Var(∑_l=1^n𝕀(Z_jl≤ z_j)). With CV, the variance of the estimator is Var(p̂_j) = 1/n^2( Var(∑_l=1^n𝕀(Z_jl≤ z_j)) +2π_j Cov(∑_l=1^n𝕀(Z_jl≤ z_j),∑_l=1^n𝕀(X_jl≤z̅_j) ) +π_j^2 Var(∑_l=1^n𝕀(X_jl≤z̅_j)) ) . Plugging in the optimal π_j for our problem and simplifying, we have Var(p̂_j) = 1/n^2Var(∑_l=1^n𝕀(Z_jl≤ z_j)) - Cov^2(∑_l=1^n 𝕀(Z_jl≤ z_j), ∑_l=1^n 𝕀(X_jl≤z̅_j) )/n^2 Var(∑_l=1^n 𝕀(X_jl≤z̅_j)). We can see that the second term in RHS is non-positive, which means that the variance is reduced the most if 𝕀(Z_jl≤ z_j) and 𝕀(X_jl≤z̅_j) are highly correlated (either positively or negatively), which intuitively means X_jl provides some information about Z_jl. It is important to note, however, that in practice, we often use sample covariances and sample variances to compute π_j, so the CV estimator might not achieve this theoretical variance reduction. 
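The CV estimator can be sketched as follows. Here the auxiliary standard normal X_jl is constructed to be correlated with Z_jl through a shared component with a hypothetical correlation ρ, the control mean g(z̄_j) is known in closed form, and π_j is computed from sample covariances as noted above; all numerical values are illustrative only.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
mu_j, sigma_j, z_j, n, rho = 100.0, 15.0, 140.0, 10_000, 0.8   # hypothetical values

# Auxiliary standard normal X_jl correlated with the station state Z_jl.
X = rng.standard_normal(n)
Z = mu_j + sigma_j * (rho * X + np.sqrt(1 - rho**2) * rng.standard_normal(n))

Y = (Z <= z_j).astype(float)               # indicator of interest, E[Y] = p_j
z_bar = (z_j - mu_j) / sigma_j             # scaled threshold for the control
Yc = (X <= z_bar).astype(float)            # control indicator with known mean
g = norm.cdf(z_bar)                        # g(z_bar) = E[Yc]

pi_j = -np.cov(Y, Yc)[0, 1] / np.var(Yc, ddof=1)   # variance-minimizing coefficient
p_mc = Y.mean()                                     # plain MC estimate
p_cv = Y.mean() + pi_j * (Yc.mean() - g)            # control-variates estimate
print(p_mc, p_cv)
```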
§ NUMERICAL EXPERIMENTS In this study, we examine the EV and electricity data obtained from Surabaya, Indonesia. The EV dataset includes 11 candidate charging stations, 31 sub-regions of the city representing demand nodes, and two vehicle types, namely motorcycles (k=1) and cars (k=2). Figure <ref> illustrates the locations of the candidate charging stations (red nodes) and demand nodes (blue nodes), where the size of the blue nodes denotes the size of the demand at each location. This charging demand, i.e. the number of EVs of type k at each demand node i, is represented by w_i^k. The average travel time from demand node i to charging station j using vehicle k, d_ij^k, is amassed from Google Maps. The full capacity for each charging connector is considered as c_j=1440 minutes/day for all j ∈ J with 24/7 operational hours and the number of connectors installed in station j ∈ J is limited to q_j=8 for all j ∈ J, due to land availability in the candidate locations. We estimate the disruption probability by simulating random electricity demands Z = [Z_j]_∀ j ∈ J where Z_j ∼ q_j. We obtain this masked data from the local electricity company, which performed data masking and rescaling for privacy and security reasons. The masked mean and standard deviation of q_j along with demand threshold z_j are summarized in Table <ref>. The simulation uses this probability model to generate random demands and an electricity disruption event is triggered for the whole day at station j when Z_j ≥ z_j. Hence, we have station reliability p_j = ℙ(Z_j ≤ z_j), ∀ j ∈ J. The other experiment parameters are summarized in Table <ref>. We then build our model by running n simulation replications and computing the mean of the objective function values. The result is summarized in Fig. <ref> and Fig. <ref> for n up to 10,000. The selected stations and demand assignments for each model solution are shown in Fig. <ref> (left: Non-Robust Model, right: Robust Model) and Fig. <ref> (left: Misspecified Model #1, right: Misspecified Model #2). The Misspecified Model #1 is built assuming 0.95p_j while the Misspecified Model #2 assumes 1.05p_j for all j ∈ J, highlighting underestimation and overestimation of service reliability respectively. The CV estimator is constructed using standard normal random variables X_jl with z̅_j properly scaled. This gives a highly correlated random variables 𝕀(X_jl≤z̅_j) to 𝕀(Z_jl≤ z_j). We show the estimated station reliability (p_j) using MC and CV in Fig. <ref> and its standard error in Fig. <ref> to highlight the superior estimation efficiency using the CV estimator. § DISCUSSION AND FINDINGS In this section, we discuss our findings regarding the robustness of the optimal solutions against disruptions even when the probability is misspecified and the enhanced disruption simulation efficiency that allows robust decision-making for our problem against disruption uncertainties. We also highlight the limitation of the model and our outlook for future research. §.§ Robustness of the Optimal Solutions Figure <ref> summarizes the objective function values obtained by benchmarking the Robust Model, Non-Robust Model, Misspecified Model #1 (underestimated station reliability), and Misspecified Model #2 (overestimated station reliability). The optimal solution of the Robust Model (represented by orange and brown lines) outperforms the other models. Conversely, the solution of the Non-Robust Model (represented by blue and purple lines) yields the lowest objective value. 
The Non-Robust Model prioritizes minimizing operational and investment costs, resulting in only two charging stations being opened. This leads to lower revenue and higher penalties, particularly during disruptions. In contrast, the Robust Model balances operational and investment costs with potential revenue losses and penalties incurred during disruptions. As a result, the Robust Model opens three charging stations, distributing the large charging stations across the geography of the city, resulting in an 18% higher total cost than the Non-Robust Model solution. However, it provides better protection against revenue loss and penalties incurred during disruptions. We also suggest that these charging stations implement a smart energy management policy <cit.> for added robustness. This added robustness leads to a 10% higher revenue and 60% lower penalty when disruptions occur, yielding an approximately 13% higher overall objective. Figure <ref> shows that the Robust Model's balanced solution covers more demand points with two charging stations, resulting in a better revenue and penalty trade-off than the Non-Robust Model. The Robust Model with misspecified station reliability still provides some level of robustness, as evidenced by the objective values of both the underestimation and overestimation scenarios. These models' solutions have objective values lower than the Robust Model solution but higher than the Non-Robust Model solution. Thus, while accurately estimating station reliability is beneficial, the model can still tolerate imperfections. When utilizing the Robust Model with underestimated station reliability, the solution tends to be more conservative and provides a higher level of buffer against disruptions. This results in a solution with four charging stations, with over 90% of demand points covered by two or more charging stations. On the other hand, overestimating station reliability leads to a solution with only three charging stations, resulting in a lower cost and an objective value very close to the Robust Model. Figure <ref> illustrates the charging station placement for both the underestimated and overestimated scenarios. §.§ Improved Simulation Efficiency using CV Estimator We now discuss how we incorporate the simulation into our robust model. The main challenges center around incorporating electricity station reliability p_j, ∀ j ∈ J (and thus corresponding disruption probability 1-p_j, ∀ j ∈ J ), which might require a huge sample size to achieve desired precision level (thus increasing the computational burden of computing the objective function (either (<ref>) or (<ref>)) and the reliability constraints (either (<ref>)-(<ref>) or (<ref>)-(<ref>)). While both MC and CV estimators of the objective values are unbiased and converge to the same value for each model, the proposed CV estimation approach appears to effectively reduce the estimation variance, thus yielding tighter confidence intervals in Fig. <ref> (brown, silver, pink, and purple lines vs. orange, red, green, and blue lines). Furthermore, Fig. <ref> highlight that all CV estimators attain about 10× smaller standard errors compared to their MC counterparts. This means that CV improves the simulation efficiency and reduces the sample size required to attain the same precision up to a factor of 10 vs. naive MC simulation approach, without accuracy loss. 
The dominant efficiency performance of the CV-based estimation technique that reduces the sample size requirement while maintaining accuracy allows us to incorporate the estimated station reliability into the objective function and reliability constraints. This results in the proposed Robust Model that can be solved without increasing the computational cost significantly. The high efficiency of the CV over MC in estimating the reliability probabilities (even to values close to 1.00) is emphasized in Fig. <ref>, in which all CV estimates attain much tighter confidence intervals regardless of the target probability. In this estimation, again, CV estimators attain 10× smaller standard error for the same sample size used by MC estimators. This highlights the applicability of our robust modeling method to deal with problems where electricity disruptions are extremely rare and need to be estimated to an ultra-level precision. §.§ Limitation of the Current Work Although our CV-assisted robust model provides optimal solutions that strike a balance between minimal cost and buffering against electricity disruptions, we acknowledge that scaling it to larger problems, such as a larger charging station candidate set and more fine-grained demand points, heavily relies on the efficiency of the MIP solver. Moreover, we acknowledge that the electricity pricing rate used in this study is simplified, whereas more recent dynamic electricity pricing schemes are available and more realistic, though highly nonlinear. Incorporating such schemes could improve the accuracy of our revenue model, but it may not be feasible with our current solver. Additionally, the CV estimation approach used in this study is based on some prior knowledge about the probability model of the random variable triggering the disruption events. In practice, such knowledge may not be easy to obtain. However, we recognize that machine learning models can be leveraged to extract features from historical datasets and estimate disruption events. We can also leverage machine learning techniques to estimate the battery capacity of the EVs <cit.> to better predict the charging time for each arriving demand to extend our model to incorporate nonlinear dynamics and more realistic operations in our future work. § CONCLUSION In this study, we propose a simulation-based optimization model to address the critical issue of designing robust planning for EV charging stations in developing countries, where electricity disruptions may frequently occur and impact customer satisfaction. Our model considers service reliability as a key factor and incorporates it into the objective function and constraints using the control variates (CV) variance reduction technique to improve simulation efficiency. Our numerical experiment, based on a dataset from Surabaya, Indonesia, demonstrates the superior performance of our robust model solution compared to its non-robust counterpart, even in cases of underestimated or overestimated service reliability. While our proposed model shows promise, we acknowledge its reliance on an efficient MIP solver and its use of a simplified electricity pricing rate. Furthermore, our CV estimator is based on prior knowledge of the probability model, which may not be available in practice. As such, we seek to extend our model to cover nonlinear MIP and learning-based disruption estimation in future work. 
Nonetheless, our model's ability to reduce the required sample size by up to 10× compared to Monte Carlo simulations highlights its potential to provide a robust solution to the challenges associated with EV charging infrastructure under random electricity disruptions.
http://arxiv.org/abs/2307.04851v1
20230710184640
Physics-Based Modeling and Validation of 2D Schottky Barrier Field-Effect Transistors
[ "Ashwin Tunga", "Zijing Zhao", "Ankit Shukla", "Wenjuan Zhu", "Shaloo Rakheja" ]
physics.app-ph
[ "physics.app-ph" ]
Physics-Based Modeling and Validation of 2D Schottky Barrier Field-Effect Transistors Ashwin Tunga^a, Zijing Zhao, Ankit Shukla, Wenjuan Zhu, and Shaloo Rakheja Holonyak Micro and Nanotechnology Laboratory, University of Illinois at Urbana-Champaign, Urbana, IL USA ^[email protected] August 12, 2023 =============================================================================================================================================================================================================================================================== In this work, we describe the charge transport in two-dimensional (2D) Schottky barrier field-effect transistors (SB-FETs) based on the carrier injection at the Schottky contacts. We first develop a numerical model for thermionic and field-emission processes of carrier injection that occur at a Schottky contact. The numerical model is then simplified to yield an analytic equation for current versus voltage (I-V) in the SB-FET. The lateral electric field at the junction, controlling the carrier injection, is obtained by accurately modeling the electrostatics and the tunneling barrier width. Unlike previous SB-FET models that are valid for near-equilibrium conditions, this model is applicable for a broad bias range as it incorporates the pertinent physics of thermionic, thermionic field-emission, and field-emission processes from a 3D metal into a 2D semiconductor. The I-V model is validated against the measurement data of 2-, 3-, and 4-layer ambipolar MoTe_2 SB-FETs fabricated in our lab, as well as the published data of unipolar 2D SB-FETs using MoS_2. Finally, the model's physics is tested rigorously by comparing model-generated data against TCAD simulation data. Compact model, ambipolar transport, Schottky contact, field emission, MoTe_2, 2D electronics § INTRODUCTION Over the past few decades, the semiconductor industry has focused on dimensional scaling of silicon transistors based on Moore's law in order to improve their speed, performance, and efficiency <cit.>. However, due to the short-channel effects, such as the leakage current and static power dissipation <cit.>, in ultra-scaled transistors <cit.>, dimensional scaling has been slowing down in recent years. Several solutions to this problem have been explored, forming what is known as the “more-than-Moore” strategy <cit.>. To that end, novel materials that can mitigate short-channel effects have been investigated <cit.>. Among various novel materials, two-dimensional (2D) semiconductors have emerged as an excellent channel material for a field-effect transistor (FET) <cit.>. In a 2D semiconductor FET, mobile electrons, confined in an atomically thin channel, are strongly electrostatically coupled to the gate <cit.>. The primary advantage of 2D semiconductor FETs over ultra-thin body (UTB) transistors <cit.> is that UTB semiconductors are a result of the termination of a 3D crystal, which leads to surface roughness and considerable carrier scatterings. In contrast, 2D semiconductors are inherently atomically thin and do not have dangling bonds, and could offer higher performance at ultra-scaled process nodes. To harness the full potential of 2D materials for nanoscale CMOS, challenges related to device scaling, low resistance contacts, gate-stack design, wafer-scale integration, and process variability must be addressed. A review of opportunities and challenges of 2D semiconductors can be consulted in recent publications <cit.>. 
Transition metal dichalcogenides (TMDs) are a class of 2D materials that can be incorporated into FET device structures. TMDs have a sizeable bandgap, transitioning from indirect bandgap in bulk to direct bandgap in their monolayer limit <cit.>. Their sizable bandgap lends to their advantage over graphene for logic devices  <cit.>. TMD-based FETs are also expected to be superior to black-phosphorus-based FETs in which the on-off ratio degrades rapidly at high drain-bias <cit.>, thus limiting the prospects of black phosphorus for low-power logic operations. Among the various TMD materials, MoTe_2 is an excellent candidate for implementing logic FETs. MoTe_2 FETs with an on-off current ratio of 10^6 and ambipolar conduction have been experimentally demonstrated <cit.>. Ambipolar transistors could reduce the complexity, while also enhancing the security <cit.>, of CMOS circuits since the channel can be tuned to conduct both electrons and holes by applying an appropriate electric field. To enable circuit design and allow technology-to-circuit co-optimization, a compact device model that faithfully reproduces the device terminal behavior over a broad operating range is needed. In the case of MoTe_2 SB-FETs, a physically accurate and scalable compact model must accurately interpret the role of source and drain contacts. As a result of the lack of effective substitutional doping techniques, metal contacts are directly deposited over the MoTe_2 channel <cit.>. Thus, unlike metal-oxide-semiconductor (MOS) FETs with ohmic source and drain contacts, MoTe_2 FETs invariably have Schottky contacts, which limit the injection of mobile carriers into the channel and thus the net current flow in the transistor. The compact model presented here is based on the generalized theory of carrier emission at the metal source/drain contacts. Our model includes both thermionic emission and thermal field emission (tunneling) of carriers at the Schottky contacts. Due to their closed-form nature, the equations we arrive at are easily adaptable to a compact model, where majority of the model parameters have a well-defined physical interpretation. We show that the model captures the essential physics of 2D SB-FETs by comparing model output against numerical simulations conducted in a commercial TCAD tool. We demonstrate the model's applicability to fabricated MoTe_2 ambipolar FETs with varying channel thickness (2-layers to 4-layers) and over broad bias conditions (V_GS ∈ [-8, 8] V and V_DS ∈ [0.5, 4.5] V), spanning both hole and electron conduction regimes. The model is also successfully applied to fabricated unipolar 2D FETs based on MoS_2. § PRIOR WORK Penumatcha et al. <cit.> developed an analytical method to describe the off-state transfer characteristics of low-dimensional FETs. The authors model the transmission of carriers through a Schottky barrier using the Landauer formalism. However, the model involves numerical computation and is thus not suitable for compact modeling. Besides, the model was demonstrated only for below-threshold gate bias (V_GS) and low drain voltages (V_DS), which limits the model use for practical operating conditions. Prior works also include the 2D Pao-Sah model with drift-diffusion formalism extended to ambipolar transport <cit.>. These models do not account for Schottky contact-limited charge injection and instead focus on the channel-controlled charge transport, which is not the main transport physics here. 
The model for SB-FETs presented in <cit.>, validated only against TCAD data, is strictly derived for carrier injection into a 3D semiconductor and is thus not applicable to the 2D SB-FETs presented here. Moreover, <cit.> also neglects the thermionic field emission current, which as we discuss in Sec. <ref> is crucial for intermediate gate voltage regimes. Neglecting thermionic field emission is also expected to yield an unphysical temperature dependence of I-V curves of an SB-FET. In <cit.>, a tunneling equation, empirically derived from the 3D thermionic emission equation, along with the drift-diffusion formalism is used to obtain the drain current in a Si nanowire FET. However, because of its implicit nature, the model is not considered compact from a circuit simulation standpoint. A compact model for a double-gated reconfigurable FET is presented in <cit.> based on 3D band-to-band tunneling current in an SB-FET, which is not the relevant physics underlying the ambipolar 2D SB-FETs discussed in our work. In <cit.>, authors focus on the experimental demonstration of reconfigurable logic gates based on the SOI technology. On the modeling front, the authors use an empirical formulation, based on tan-hyperbolic functions to fit the experimental data. Other related works either focus on dual-gate nanowire geometry <cit.>, silicon-on-insulator structure <cit.>, consider only 3D channels <cit.>, or implement a numerical I-V model <cit.> for SB-FETs. The model presented in this paper is specifically developed for SB-FETs using 2D semiconductors, has a strong physical basis, is validated rigorously against numerical simulations as well as experimental data of ambipolar SB-FETs fabricated in-house and unipolar SB-FETs reported in the literature. Because of its explicit nature with few parameters, most of which have a physical origin, our model is suitable for circuit simulations. § MODEL DESCRIPTION Figure <ref> shows the cross-section of an MoTe_2 SB-FET with hexagonal BN gate dielectric and metal source and drain contacts, which create a Schottky barrier at the metal/2D channel interface. Here, a van der Waals gap is formed between the metal and the semiconductor, resulting in a tunneling barrier, which increases the net contact resistance <cit.>. Due to the atomic thickness of the 2D channel, the charge injection mechanism differs significantly from injection from a metal into bulk materials. Although Richardson-Dushman <cit.> and Fowler-Nordheim <cit.> theories of electron emission formulated for bulk materials <cit.> can fit experimental data for 2D devices, these models do not represent the essential physics of 2D SB-FETs. In the thermionic emission (TE) process, thermally excited carriers with energy greater than the potential barrier at the contacts can traverse over the barrier into the semiconducting channel, resulting in a current flow. The activation energy for TE, i.e., the barrier between the metal and the bottom of the conduction band for electrons and the top of the valence band for holes, decreases linearly with gate bias until it equals the characteristic Schottky barrier height. Due to the linear variation of the activation energy, the channel current varies exponentially with gate bias, as shown in Sec. <ref>. With increasing gate bias, the potential barrier thins, which increases the probability of carriers to tunnel through the barrier. Thus, a field-dependent tunneling current is observed in the device. 
The electric field-enhanced tunneling phenomenon is also referred to as field emission (FE). The sum of TE and FE currents gives the net drain current measured in an SB-FET, illustrated qualitatively in Fig. <ref>(left). Unlike in a unipolar device, in an ambipolar SB-FET, the TE current is marginal compared to the FE current, and the total drain current is predominantly the sum of the electron and hole FE currents. §.§ Numerical Model The current density, J_net, due to carrier transmission across an energy barrier from a metal into the channel is given as J_net = J_1 → 2 - J_2 → 1, where J_1 → 2 (J_2 → 1) is the current density due to carriers incident from region 1 (region 2) into region 2 (region 1), shown in Fig. <ref>(right). Consider J_1 → 2: J_1 → 2 = 1/𝒜∑_k q T(k_x) v_x f_1(k) (1-f_2(k)), where q is the charge of the carrier, 𝒜 is the area of the 2D crystal, k is the wavevector in reciprocal space, k_x is the x-component of k, v_x is the velocity of carrier incident at the barrier, f_i is the Fermi-Dirac distribution in region i (i=1,2 for metal, semiconductor), and T(k_x) is the transmission probability. If the carriers considered are electrons, converting the sum over k-space into an integral in energy space gives J_1 → 2 = -4q/h^2√(m_e^*/2)∫_-∞^∞ T(E_x) × ( ∫_0^∞f_1(E) (1 - f_2(E))/√(E_y) dE_y ) dE_x, where m_e^* is the effective mass of the electron, h is Planck's constant, E_x is the energy due to momentum perpendicular to the barrier or the longitudinal momentum (i.e., E_x = p_x^2/2m_e^*, where p_x is the momentum perpendicular to the barrier interface), E_y is the energy due to lateral momentum. The metal Fermi-level (E_Fm) is considered as the reference energy level. The model assumes conservation of lateral carrier momentum with effective mass approximation. The same procedure as above can be followed to obtain an expression for J_2 → 1 to get J_net,e as J_net,e = -4q/h^2√(m_e^*/2)∫_-∞^∞ T(E_x) × ( ∫_0^∞f_1(E) - f_2(E)/√(E_y) dE_y ) dE_x. The net hole current density is obtained similarly as the electron current with m_e^* replaced with the effective mass of holes, m_h^*. The total drain current density due to both carriers is simply the sum of their respective net currents. Taking into account the direction of the drain current (along the -x axis, from drain to source contact), the total current density is -J_D =J_net,e + J_net,h. While the drain current is modeled by considering the spatially localized carrier injection at the contacts, effects of gate-source voltage (V_GS) and drain-source voltage (V_DS) are incorporated via the Fermi functions, f_1 and f_2, and the transmission probability, T(E_x). For the classical TE process, the transmission probability T(E_x) = 1. Evaluating (<ref>) for electrons for T(E_x) = 1 and non-degenerate statistics and integrating over E_x ∈ [E_A, ∞) (E_A is the TE activation energy) gives (see Appendix <ref>) J_TE,e = -q √(8π k_B^3 m_e^*)/h^2 T^3/2exp(-E_A/k_B T) ×[ 1 - exp( -qV_c/k_B T) ]. The activation energy E_A reduces linearly with the gate bias until the flatband voltage when E_A = ϕ_SB, qV_c = E_Fm - E_Fs = - E_Fs is the voltage across the contact (E_Fm is used as the reference energy level), A^*_2D≡ (q√(8π k_B^3 m_e^*))/(h^2) is the effective Richardson constant for a 2D semiconductor. It is important to note the T^3/2 dependence in the pre-factor of the above equation compared to the T^2 dependence in the 3D TE model <cit.>. A similar treatment leads to the hole TE current. 
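As an illustration of the 2D thermionic emission expression above (note the T^3/2 prefactor, in contrast to the T^2 of the 3D model), the following sketch evaluates its magnitude for a few temperatures. The barrier height, contact voltage drop, and effective mass are placeholder values for illustration only; the result is a current per unit channel width, consistent with I_D = W·J.

```python
import numpy as np
from scipy import constants as c

def j_te_2d(T, E_A_eV, V_c, m_rel):
    """Magnitude of the 2D thermionic emission current density,
    A*_2D * T^(3/2) * exp(-E_A/kT) * (1 - exp(-qV_c/kT)), in A/m."""
    A_2d = c.e * np.sqrt(8 * np.pi * c.k**3 * m_rel * c.m_e) / c.h**2
    kT = c.k * T
    return A_2d * T**1.5 * np.exp(-E_A_eV * c.e / kT) * (1 - np.exp(-c.e * V_c / kT))

# Illustrative numbers only: 0.5 eV barrier, 0.1 V across the contact, m* = 0.55 m_0.
for T in (250.0, 300.0, 350.0):
    print(T, j_te_2d(T, E_A_eV=0.5, V_c=0.1, m_rel=0.55))
```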
The FE transmission probability using the Wentzel–Kramers–Brillouin (WKB) approximation for a triangular barrier <cit.> is T_e,h(E_x) = exp( -8 π√(2m^* (ϕ_SB,(e,h) - E_x)^3)/3hqF_x), where F_x is the magnitude of the electric field at the triangular barrier. For electron tunneling that dominates for V_GS>V_min, (<ref>) is integrated over E_x ∈ [-∞,ϕ_SB]. In Sec. <ref>, we show an explicit analytic tunneling equation that lends itself well to compact modelling. A simplified conduction band profile at the source contact is shown in Fig. <ref>. The electric field is given as F_x = φ_(s,d)(V_GS, V_DS)/L_B(V_GS, V_DS), where L_B is the tunneling barrier width, and φ_(s,d) is the potential drop at the respective source/drain contact. For a constant V_DS, as V_GS increases, φ_s increases and L_B decreases, resulting in a strong increase in F_x with V_GS. At yet higher V_GS, φ_s remains roughly constant but L_B continues to decrease, which reduces the rate of increase of F_x with V_GS. In our model, the effect of the channel transport is enclosed in the electric field, which ensures the self-consistency between our methodology and the emission-diffusion theory of MOSFETs presented in <cit.>. §.§ Compact Model The SB-FET compact model includes both TE and FE processes and is thus applicable over a broad bias range. The total drain current is I_D = W[(J_tun,e + J_TE,e) + (J_tun,h + J_TE,h)], where W is the device width, and J_tun (J_TE) is the tunneling (thermionic) current. The holes (electrons) are injected into the channel from the drain (source) contact. The TE process for a 2D semiconductor is described by (<ref>), while the 2D tunneling process can be described by the following set of equations (see Appendix <ref>): J_tun,(e,h) = P_f,(e,h)( J_TFE + J_FE), J_TFE = C_0 C_1 √(k_BTπ)/β( exp(βϕ_SB,(e,h)) - 1 ), J_FE = C_0 C_1 √(π E_00^3)(1 - exp(- qV_DS/E_00) ), C_0 = 4q/h^2√(m^*_(e,h)/2), C_1 = exp( - √(ϕ_SB,(e,h) - E_0)( ϕ_SB,(e,h) + E_0/2) ), E_0 = ϕ_SB,(e,h) - (ln(K_0)/α)^2/3, β = k_BT - E_00/(k_BT)(E_00), E_00 = 2/3α√(ϕ_SB,(e,h) - E_0), α = 8π√(2m^*_(e,h))/3hqF_x. J_TFE and J_FE are the thermionic field emission (TFE) and field emission (FE) components, respectively, of the tunneling current, and K_0 and P_f are constant fitting parameters. The terminal voltages modulate F_x at the Schottky contact and thus control the current through the device. To model F_x, we need to obtain the channel potential and the tunneling barrier width. The channel potential is obtained from the balance equation given as V_G(S,Deff) - V_FB,(e,h) = φ_(s,d) - Q_ch,(e,h)/C_ins, where V_FB is the flat-band voltage, Q_ch is the mobile charge in the channel, and C_ins is the insulator capacitance. Q_ch is empirically modeled as <cit.> Q_ch,e = -C_inv,e n_ek_B T/qlog( 1 + exp( q V_GS - V_T,e/n_e k_B T) ), Q_ch,h = C_inv,h n_h k_B T/qlog( 1 + exp( -q V_GDeff + V_T,h/n_h k_B T) ), where C_inv(e,h) is the inversion capacitance, V_T(e,h) is the threshold voltage, and n_(e,h) is related to the sub-threshold swing. The V_DS dependence of V_FB and V_T is given as V_FB = V_FB0 - δ_FBV_DSeff, V_T = V_T0 - δ_T √(V_DSeff), where δ_FB and δ_T are empirical parameters that are determined from calibrating the model with experimental data, as described in the companion paper. V_DSeff is the effective V_DS that drops across the channel. The effective V_DS varies linearly with V_DS at low drain bias and eventually saturates at V_DSAT. 
We define V_DSeff and V_GDeff using a saturation function as V_DSeff = V_DS/( 1 + (V_DS/V_DSAT)^ν)^1/ν, V_GDeff = V_GS - V_DSeff. Here, V_DSAT is the saturation voltage and ν is the transition-region fitting parameter. The tunneling barrier width, L_B, depends on the characteristic length, λ, and the depletion width, W_D. The tunneling process can happen either over W_D or over a few characteristic lengths (Λ = n_0 λ, n_0 ≥ 1). At low V_GS, W_D is greater than Λ, and L_B is determined by Λ. At intermediate V_GS, the depletion region thins and W_D<Λ, and the tunneling path is influenced by the depletion width. Thus, L_B is modeled as L_B = Λ W_D/(Λ + W_D), Λ = n_0 λ = n_0 √(t_cht_insϵ_ch/ϵ_ins), W_D,(e,h) = √(ϵ_chφ_(s,d)/ζ_(e,h) Q_ch,(e,h)/t_ch), where t_ch (t_ins) is the channel (insulator) thickness, ϵ_ch (ϵ_ins) is the channel (insulator) dielectric constant, and ζ is a fitting parameter that describes the charge in the depletion region as a fraction of the channel charge. Figure <ref> shows the effect of key model parameters on a typical I_D-V_GS curve.

§ MODEL VALIDATION §.§ Comparison against TCAD results The device physics of MoTe_2 SB-FETs is analyzed using the TCAD tool Sentaurus from Synopsys <cit.>. A four-layer, 2.5 μm long MoTe_2 SB-FET with a 30 nm thick BN gate dielectric was simulated. The band gap of MoTe_2 was fixed at 1.0 eV, while the hole and electron effective masses were kept equal at 0.55m_0 (m_0 is the free electron mass). Further, the source/drain contacts were modeled as Schottky contacts with a Schottky barrier height of 0.50 eV. Finally, the gate contact was treated as a Dirichlet boundary condition, and the remaining boundaries were treated as Neumann boundaries. The drift-diffusion formalism was used to model the charge transport in the channel, with the carrier mobility fixed at 50 cm^2/Vs for both electrons and holes. Injection at the source and drain contacts was modeled using thermionic emission and the non-local tunneling equations, as implemented in Sentaurus. TCAD simulation results, shown in Fig. <ref>(a), confirm that the majority of V_DS drops at the contacts. Moreover, from Fig. <ref>(b), we can infer that the charge transport is severely limited by carrier injection at the contacts and that the region near the contacts is depleted of charge carriers. The dependence of the electric field, F_x, at the source contact on V_GS is shown in Fig. <ref>(c). F_x, which is given as the ratio of the potential drop φ_s at the source to the tunneling barrier width L_B, increases linearly with V_GS in weak inversion. This is because in weak inversion φ_s varies linearly with V_GS, while L_B remains constant. At high V_GS, although L_B continues to shrink as shown in Fig. <ref>(b), φ_s saturates, which slows the rate of increase of F_x in strong inversion.

§.§ Comparison against measurement data We validate our compact model against experimental measurement data of bilayer, trilayer, and four-layer MoTe_2 SB-FETs fabricated in-house. Figure <ref> shows the optical image of a fabricated trilayer MoTe_2 device. The devices were fabricated following a bottom-up approach, where the embedded gates are formed first by metal evaporation after optical lithography patterning. The bottom BN was exfoliated and transferred onto the gates using the dry-transfer method, before the source and drain contacts were patterned. The MoTe_2 flakes were exfoliated onto a 90 nm SiO_2/Si wafer. The thicknesses were identified from the optical image contrast.
The MoTe_2 flakes are dry transferred on top of the source and drain contacts, with a top BN flake as the adhesive layer. The top BN layer also encapsulates the MoTe_2 channel. Our model contains a total of 24 parameters (11 each for electrons and holes and 2 common to both). Nine of the parameters are empirical in nature, while the remainder have a physical origin and can be deduced from straightforward experimental calibration. See Appendix <ref> for parameter extraction methodology. Figure <ref> shows an excellent match between the transfer curves obtained from our model and measurement data of the bilayer, trilayer, and four-layer MoTe_2 SB-FETs. Additionally, our model can capture the transconductance of the device measured experimentally. Table <ref> shows the extracted model parameters. The asymmetric electron and hole conduction in the fabricated devices is due to the unequal Schottky barrier heights, ϕ_SB,e≠ϕ_SB,h. The extracted values of ϕ_SB,e and ϕ_SB,h show that E_g (= ϕ_SB,e + ϕ_SB,h) increases with the increase in the channel thickness  <cit.>. We also apply our compact model to n-type MoS_2 SB-FETs reported in <cit.>. In a unipolar device, TE is observable in the measured I-V data in the sub-threshold regime. This is readily captured in our model as it is based on a generalized theory of carrier emission. Figure <ref> shows that the model faithfully captures the channel current from sub-threshold to strong inversion regimes for a 6-nm thick and 5-μm long MoS_2 FET. At very low gate voltages, the Sc contacted device is dominated by gate leakage, which is not included in our model. § CONCLUSION A compact model for ambipolar MoTe_2 SB-FETs was presented. The model relies on explicit, analytic equations to model thermionic emission and field-emission tunneling. We also presented a model for the variation of the tunneling barrier width with the terminal voltages. We conducted TCAD simulations to verify the model physics. Finally, we demonstrated the model's applicability to produce I-V data of realistic devices by comparing the model output against measurements of SB-FETs fabricated in-house as well as data available in the published literature. Because of its compact nature and few parameters, most of which have a physical significance, the model is suitable for technology-device-circuit co-design. § DERIVATION OF THERMIONIC EMISSION EQUATION To derive (<ref>), we assume T(E_x) = 1 and apply Maxwell-Boltzmann statistics <cit.> in (<ref>) along with E=E_x+E_y. J_net,e = -4q/h^2√(m_e^*/2)∫_0^∞ T(E_x) × ( ∫_0^∞f_1(E) - f_2(E)/√(E_y) dE_y ) dE_x. J_net,e = -4q/h^2√(m_e^*/2)∫_ϕ_SB^∞∫_0^∞exp(-E_x+E_y/k_BT) - exp(E_Fs - (E_x+E_y)/k_BT) 1/√(E_y) dE_y dE_x. J_net,e = -4q/h^2√(m_e^*/2)∫_ϕ_SB^∞exp(-E_x/k_BT) dE_x ×∫_0^∞(exp(-E_y/k_BT) - exp(E_Fs - E_y/k_BT) ) 1/√(E_y) dE_y. Solving the integrals and using qV_c = -E_Fs, J_TE,e = -q √(8π k_B^3 m_e^*)/h^2 T^3/2exp(-ϕ_SB/k_B T) ×[ 1 - exp( -qV_c/k_B T) ]. § DERIVATION OF ANALYTIC TUNNELING EQUATION FE can be analytically derived from (<ref>) and (<ref>) as follows. J_tun,e = C_0 ∫_-∞^ϕ_SBexp( -α(ϕ_SB - E_x)^3/2) ×( ∫_0^∞f_1(E) - f_2(E)/√(E_y) dE_y ) dE_x, where J_tun,e is the electron tunneling current, C_0 is the constant prefactor in (<ref>) and α is the constant in the exponent in (<ref>). Since E = E_x + E_y, the integral with respect to E_y is the difference of Fermi integrals of order -1/2 given as ∫_0^∞f_1(E) - f_2(E)/√(E_y) dE_y = √(k_BT) Γ(1/2) ×[ ℱ_-1/2( E_Fm - E_x/k_BT) - ℱ_-1/2( E_Fs - E_x/k_BT) ]. 
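As a cross-check of the extracted barrier heights, the Arrhenius-type analysis mentioned in the parameter-extraction appendix (Appendix <ref>) can be sketched in a few lines: to a first approximation and at fixed bias, the 2D thermionic-emission law gives a straight line for ln(I/T^(3/2)) versus 1/T with slope -ϕ_SB/k_B. The barrier height and current prefactor used below are assumed values for demonstration only.

```python
import numpy as np

k_B = 8.617e-5                       # Boltzmann constant in eV/K
phi_SB_true = 0.25                   # eV, assumed barrier height for this demo

# Synthetic currents following the 2D thermionic-emission law I ~ T^(3/2) exp(-phi_SB/(k_B*T));
# the 1e-6 prefactor is arbitrary and drops out of the slope
T = np.linspace(200.0, 350.0, 8)     # K
I = 1e-6 * T**1.5 * np.exp(-phi_SB_true / (k_B * T))

# Arrhenius-type plot: ln(I / T^(3/2)) vs 1/T has slope -phi_SB / k_B
slope, _ = np.polyfit(1.0 / T, np.log(I / T**1.5), 1)
print(f"extracted phi_SB = {-slope * k_B:.3f} eV")   # recovers ~0.250 eV
```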
The Fermi integral can be approximated as ℱ_-1/2(x) = exp(x), x ≤ 0, 2/√(π) x^1/2, x>0. Equation (<ref>) can now be converted from a double integral equation to a single integral equation in E_x as follows J_tun,e = ∫_-∞^ϕ_SB G(E_x) dE_x, where G(E_x) is the electron tunneling current density per energy level, E_x, in the conduction band. As shown in Fig. <ref>(a), the peak of G(E_x) moves closer to E_Fm as the electric field, F_x, increases. Let us define the point E_0 as E_0 = *arg max_E_x G(E_x). To obtain an analytic solution of (<ref>), ln(T(E_x)) is linearized around E_0. Let us suppose that T(E_0) = 1/K_0(F_x). Although K_0(F_x) can be approximated as a constant value, a piece-wise value of K_0 is a better approximation as shown in Fig. <ref>(b). E_0 is given as E_0 = ϕ_SB - (ln(K_0)/α)^2/3, ln(T(E_x)) = -α f(E_x) = -α[ f(E_0) + (E_x - E_0)f'(E_0) ]. The integral in (<ref>) now has an analytic solution. J_TFE = ∫_0^ϕ_SB G(E_x) dE_x = C_0 C_1 √(k_BTπ)/β( exp(βϕ_SB) - 1 ) ×(1 - exp(- qV_c/k_BT) ), C_1 = exp( - √(ϕ_SB - E_0)( ϕ_SB + E_0/2) ), β = k_BT - E_00/(k_BT)(E_00), E_00 = 2/3α√(ϕ_SB - E_0). J_FE = ∫_-∞^0 G(E_x) dE_x = C_0 C_1 √(π E_00^3)(1 - exp(- qV_c/E_00) ), Figure <ref>(b) shows the validation of the analytic equation with the numerical integral, using different values of K_0. To fit the numerical integral, another parameter is introduced for tunneling current, which gives J_tun,e = P_f,e (J_TFE + J_FE). A similar parameter, P_f,h is introduced for the hole branch. § PARAMETER EXTRACTION METHODOLOGY The input parameters of the model that are fixed include (i) device width (W) (ii) the channel thickness (t_ch), (iii) insulator thickness (t_ins), which along with the insulator dielectric constant (ϵ_ins), gives the insulator capacitance (C_ins). The approximate range of Schottky barrier heights (ϕ_SB) can be obtained by extracting the x-direction electric field at the contact (F_x) from the measurement data, for a given ϕ_SB, using (8)-(11) and verifying that the extracted F_x is reasonable. ϕ_SB can then be tuned to obtain a best fit. If the thermionic emission current is observed in the device, ϕ_SB can also be extracted using the Arrhenius plots. The minimum current points are used to determine V_FB0,(e,h) and δ_FB,(e,h). The knee point in the semi-log I_D - V_G curve determines V_T0 and δ_T, and n is related to the sharpness of the knee point. The slope of the semi-log transfer curve in the sub-threshold region are used to obtain Λ, while ζ is correlated to the on-state current of the device. The empirical parameter K_0 is used to obtain an analytic tunneling equation from the numerical model. Lower (higher) K_0 approximates the low-field (high-field) region better. K_0 = 100 reasonably approximates tunneling at both low-field and high-field region. The empirical parameter, P_f, lies in the range of [0.1, 1], and can be tuned to obtain a best fit. § ACKNOWLEDGEMENT The authors acknowledge support from SRC (Grant SRC 2021-LM-3042) and NSF (Grant ECCS 16-53241 CAR). IEEEtran
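As a closing numerical note on the analytic tunneling derivation above, the piecewise approximation of the order -1/2 Fermi integral can be checked against a direct quadrature of its standard definition. In the sketch below the substitution t = u^2 removes the integrable square-root singularity at the origin; the approximation is asymptotically exact in the non-degenerate and strongly degenerate limits and least accurate near x = 0.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def fermi_m12_exact(x):
    """Complete Fermi-Dirac integral of order -1/2, F_{-1/2}(x), by quadrature."""
    integrand = lambda u: 1.0 - np.tanh(0.5 * (u**2 - x))   # equals 2/(1+exp(u^2-x))
    val, _ = quad(integrand, 0.0, np.inf)
    return val / gamma(0.5)

def fermi_m12_approx(x):
    """Piecewise approximation used in the tunneling derivation."""
    return np.exp(x) if x <= 0 else 2.0 / np.sqrt(np.pi) * np.sqrt(x)

for x in (-4.0, -1.0, 0.0, 1.0, 4.0):
    print(f"x = {x:+.1f}   exact = {fermi_m12_exact(x):.4f}   approx = {fermi_m12_approx(x):.4f}")
```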
http://arxiv.org/abs/2307.05800v1
20230711205040
A Hierarchical Transformer Encoder to Improve Entire Neoplasm Segmentation on Whole Slide Image of Hepatocellular Carcinoma
[ "Zhuxian Guo", "Qitong Wang", "Henning Müller", "Themis Palpanas", "Nicolas Loménie", "Camille Kurtz" ]
eess.IV
[ "eess.IV", "cs.CV" ]
Improving Segmentation and Detection of Lesions in CT Scans Using Intensity Distribution Supervision [ ==================================================================================================== In digital histopathology, entire neoplasm segmentation on Whole Slide Image (WSI) of Hepatocellular Carcinoma (HCC) plays an important role, especially as a preprocessing filter to automatically exclude healthy tissue, in histological molecular correlations mining and other downstream histopathological tasks. The segmentation task remains challenging due to HCC's inherent high-heterogeneity and the lack of dependency learning in large field of view. In this article, we propose a novel deep learning architecture with a hierarchical Transformer encoder, HiTrans, to learn the global dependencies within expanded 4096×4096 WSI patches. HiTrans is designed to encode and decode the patches with larger reception fields and the learned global dependencies, compared to the state-of-the-art Fully Convolutional Neural networks (FCNN). Empirical evaluations verified that HiTrans leads to better segmentation performance by taking into account regional and global dependency information. Digital histopathology, HCC, Neoplasm segmentation, Transformer architecture, Semantic segmentation, Deep learning. § INTRODUCTION Hepatocellular Carcinoma (HCC) is a primary tumor of the liver and is now the fifth most common cancer worldwide <cit.>. HCC is a highly heterogeneous molecularly and histologically as a cancer. A series of ongoing studies have shown that the HCC phenotype appears to be closely related to particular gene mutations <cit.>. Clustering the WSI representations of certain phenotypes to particular gene mutations by mining the relationships of the representations and their corresponding transcriptomic data is clinically meaningful. Imaging-based multi-omics can help physicians understand the morphology and micro-environmental cell population changes related to certain mutations. Multiple Instance Learning (MIL) such as the framework Clustering-constrained Attention Multiple Instance Learning (CLAM) <cit.> has been applied in <cit.> to predict the activation of 6 common HCC immune gene signatures within a roughly selected neoplasm area. A robust automatic neoplasm segmentation can then act as a preprocessing filter, not only to produce the annotation in lieu of pathologists but also to exclude the potential annotation bias to specific highly predictive regions when using the annotations provided by experienced pathologists. Automatic neoplasm segmentation remains an important challenge today mainly due to: (1) the inherently high tissue heterogeneity, and (2) the lack of consideration of effectively aggregated large-field relational features. For instance, HCC is a highly heterogeneous cancer and it has distinct morphological phenotypes. The morphological appearance of different phenotypes is very different across cases, which makes the tumor segmentation model hard to generalize. The de facto WSI segmentation models are patch-based convolutional neural networks (CNN) like <cit.>, and the patch size usually ranges from 128×128 to 512×512 due to GPU memory limitations. 
Such a patch size is very small compared to the gigapixel size of the WSI, which limits the receptive field of the model and leads to a limited ability to capture large-scale aberrant tissue structures in HCC such as macro-trabeculae[Neoplastic cells of macrotrabecular-massive HCC are arranged in thick trabeculae surrounded by vascular spaces.], pseudoglandular and necrotic foci architectural patterns, etc. Previous work <cit.> has studied how the proposed multi-scale CNN models take into account the histological features at different scales, ranging from nuclear aberrations through cellular structures to the global tissue architecture, by using patches at different spatial scales as input. For the multi-scale CNN model, larger-scale patches, which are centered aligned to the smallest scale patch at full resolution (detail patch), are downsampled to similar size to the detail patch in order to fit the segmentation framework. A general image down-sampler is usually applied while the WSI complexity is quite different from other types of images. In this context, we aim to develop an entire HCC neoplasm segmentation framework by using state-of-the-art (SOTA) approaches to mimic the WSI exploration by pathologists in a hierarchical fashion and thus to serve for downstream tasks in digital histopathology. Transformers <cit.> rely on global self-attention mechanisms and have achieved excellent performance in many tasks with global dependency requirements such as sequence modeling and language modeling. They were modified for computer vision tasks, called Vision Transformers (ViT) to serve as an alternative to CNNs in feature extraction <cit.>. In digital pathology, as a follow up of CLAM, ViT was shown to be stronger in feature aggregation than simpler attention-weighted average mechanisms. They can also be stacked as a hierarchical architecture to effectively aggregate the WSI features to a slide-level representation <cit.>. Inspired by the success of the application of ViT on representation learning of gigapixel images, we propose in this article a framework, called HiTrans, with hierarchical-based Transformer encoder to enlarge the field of view and to enhance the entire HCC neoplasm segmentation. Such a contribution allows to dramatically increase the segmentation field of view to 4096×4096. The WSI patches are encoded with larger fields of view compared to conventional Fully Convolutional Neural networks (FCNN), and are decoded by taking into account regional and global dependency information. The experimental results with a large real dataset demonstrate that the proposed HiTrans framework can lead to better entire HCC neoplasm segmentation, quantitatively and qualitatively. The dataset used in this study is introduced in Sec. <ref>. Sec. <ref> presents the data preprocessing pipeline, the baseline architecture, and the proposed network training protocol. Experimental results are provided in Sec. <ref>. § DATASET The PAIP liver cancer segmentation challenge was held in 2019 (PAIP 2019) <cit.> as part of the MICCAI 2019 Grand Challenge for Pathology. The PAIP 2019 training cohort consists of 50 anonymized WSIs at the 20× magnification in ScanScop Virtual Slide (SVS) format. Each WSI was selected from the HCC resection slides from one patient, which means the 50 WSIs in the training cohort belong to 50 different individuals. The Edmonson-Steiner tumor grade distribution is 7, 23, 20 for Grade I, II, III, respectively. 
The slides were all stained with conventional hematoxylin and eosin (H&E) staining and were digitized with an Aperio AT2 whole-slide scanner. The WSI size ranges from 35855×39407 to 64768×47009. The training cohort WSIs come with two-layers of annotation for whole tumor areas and viable tumor areas. Only the first annotation layer (i.e., the whole tumor area) was used in this study. The whole tumor area means that the entire neoplasm that can be observed on the WSI, including all dispersed viable tumor cell nests, tumor necrosis and tumor capsules. § METHODS The proposed HiTrans framework (Fig. <ref>) takes 4096×4096 WSI patches as input. A hierarchical Transformer encoder add-on module is added between a ResNet <cit.> encoder backbone and a modified U-Net decoder to learn the global dependencies (red dashed box). Sec. <ref> introduces the data preprocessing pipeline. The proposed architecture details are illustrated in Sec. <ref> and the training protocol is described in Sec. <ref>. §.§ Data preprocessing Since the WSI tissue mask is not provided, we followed a conventional pipeline to patchify WSIs to create the pairs of high tissue percentage 4096×4096 patches and their corresponding neoplasm masks. The 50 WSIs were split into 30, 10, and 10 for training, validation, and test, respectively. §.§ Proposed architecture A hierarchical add-on Transformer encoder module that contains two Transformers is added between the CNN feature extractor and the decoder to learn subtle global dependencies, as shown in Fig. <ref> the red dashed box. [CNN feature extractor] The intermediate layers of a pretrained 18-layer ResNet were used as an encoder for feature extraction. The aforementioned ResNet was pretrained on 57 histopathology datasets in a self-supervised learning fashion <cit.> following the SimCLR <cit.> contrastive learning setting. The adaptive average pooling layer and the dense layer at the end of the ResNet were removed to make it as a 2D feature map extractor. Five feature maps are generated (Fig. <ref>, map1 to map5). [Transformer encoder 1] Transformer encoder 1 is a 12-layer standard Transformer encoder with six attention heads and 384-length hidden dimension. Map5 is unfolded into 64 (8×8) seamless 16×16 sub-patches, and each sub-patch contains 256 mini patch embeddings. The sub-patches are linearly transformed (Fig. <ref>, linear1) to fit the hidden dimension of Transformer encoder 1. An extra segmentation embedding SEG is added. All output SEG embeddings of each sub-patch that contain the regional features of each sub-patch are kept to form the inputs of Transformer encoder 2. All output mini patch embeddings are reshaped to a 128×128 feature map to act as a skip connection to provide finer regional features for decoding. [Transformer encoder 2] Transformer encoder 2 has 12 layers with three attention heads, and the hidden dimension is 192. It takes the output SEG embeddings from Transformer encoder 1 to learn the global dependencies among the sub-patch features, within a 4096×4096 patch. The output embeddings of the sub-patches are reshaped to a 8×8 feature map. Thanks to the self attention mechanism, each element in the feature map contains its global dependency information. Each element is then mapped and added up to its spatial corresponding elements on the 128×128 feature map from Transformer encoder 1. We added up the feature maps from the two Transformer encoders instead of doing concatenation in order to alleviate the block-biased prediction. 
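A compact PyTorch-style sketch of this two-level encoding is given below for orientation. The tensor shapes follow the description above (map5 is assumed to be a 512-channel 128×128 map for a 4096×4096 input to ResNet-18); the zero-initialized SEG token and the linear projection used to match the 192-dimensional global features to the 384-dimensional regional map before the element-wise addition are implementation assumptions not spelled out in the text.

```python
import torch
import torch.nn as nn

class HierarchicalEncoderSketch(nn.Module):
    """Sketch of the hierarchical Transformer add-on module (dims follow the text)."""
    def __init__(self, in_ch=512, d1=384, d2=192, grid=8, sub=16):
        super().__init__()
        self.grid, self.sub = grid, sub
        self.linear1 = nn.Linear(in_ch, d1)              # "linear1": fit encoder 1 hidden dim
        self.seg = nn.Parameter(torch.zeros(1, 1, d1))   # extra SEG embedding (assumed init)
        self.enc1 = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d1, nhead=6, batch_first=True), num_layers=12)
        self.linear_seg = nn.Linear(d1, d2)              # SEG outputs -> encoder 2 width
        self.enc2 = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d2, nhead=3, batch_first=True), num_layers=12)
        self.proj = nn.Linear(d2, d1)                    # assumed channel match before the sum

    def forward(self, map5):                             # map5: (B, 512, 128, 128)
        B, C, H, W = map5.shape
        g, s = self.grid, self.sub
        # unfold map5 into g*g seamless sub-patches, each a sequence of s*s mini-patch tokens
        x = map5.reshape(B, C, g, s, g, s).permute(0, 2, 4, 3, 5, 1).reshape(B * g * g, s * s, C)
        x = torch.cat([self.seg.expand(B * g * g, -1, -1), self.linear1(x)], dim=1)
        x = self.enc1(x)                                 # regional dependencies within sub-patches
        seg_out, tokens = x[:, 0], x[:, 1:]
        # 128x128 regional feature map rebuilt from the output mini-patch embeddings
        fmap = tokens.reshape(B, g, g, s, s, -1).permute(0, 5, 1, 3, 2, 4).reshape(B, -1, H, W)
        # global dependencies among the 64 sub-patch SEG embeddings
        glob = self.enc2(self.linear_seg(seg_out).reshape(B, g * g, -1))
        glob = self.proj(glob).reshape(B, g, g, -1).permute(0, 3, 1, 2)
        glob = glob.repeat_interleave(s, dim=2).repeat_interleave(s, dim=3)
        return fmap + glob                               # element-wise sum of regional and global maps

out = HierarchicalEncoderSketch()(torch.randn(1, 512, 128, 128))
print(out.shape)                                         # torch.Size([1, 384, 128, 128])
```

Summing the up-sampled global map into the regional map, rather than concatenating, keeps the channel count unchanged and, as noted above, helps alleviate block-biased predictions.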
[Global dependency learning] The feature maps from the two Transformer encoders are fused to merge regional and global features. The new feature map is then concatenated with map5 to maintain localization accuracy. By performing global dependency learning through this hierarchical Transformer encoder architecture, the WSI patches are encoded with larger fields of view than a CNN provides. The proposed architecture also supplies the decoder with regional and global dependency information for a finer segmentation. [Convolutional decoder with shortcuts] Transposed convolutional layers (Fig. <ref>, up-conv 2×2) expand the feature map size. Map1 to map4 are concatenated with the expanded feature maps of the l-1 layers and then pass through the double convolutional layers that halve the number of channels. At the end, the 2048×2048 feature maps are decoded into a 4096×4096 segmentation map by a 1×1 convolutional layer following a bilinear interpolation with corner-pixel alignment. §.§ Network training [General training protocol] The model was trained on an Nvidia A100 SXM4 80GB GPU for 100 epochs using the AdamW optimizer <cit.> with a batch size of 2 and an early stopping patience of 10. The base learning rate was set to 5e-4, with the first 10 epochs used to warm up, followed by decay along a cosine schedule down to the minimum learning rate of 1e-6. The weight decay rate was varied gradually between 1e-2 and 1e-4 following a cosine schedule. [Alternate training] An alternate training strategy was adopted to overcome the convergence difficulty in training this hierarchical semantic segmentation architecture with two stacked Transformer encoders. The ResNet feature extractor and Transformer encoders 1 and 2 were trained alternately to maximize the framework's ability. § RESULTS §.§ Evaluation metric and results The WSI segmentation for all models was performed in seamless patch-wise inference units. The average Jaccard index over the 10 WSIs in the test set is used as a quantitative score to evaluate the entire neoplasm segmentation performance. Notably, the results we present cannot be directly compared with the PAIP 2019 leaderboard results, because we focused on evaluating the model performance and avoided the use of postprocessing steps, including manually designed WSI postprocessing strategies, ensemble learning, and overlapped inference. Moreover, the models in this study are trained and tested only on the training cohort of PAIP 2019, and only the whole tumor area annotation is used for training. Compared with three SOTA semantic segmentation frameworks, U-Net <cit.>, DeepLabV3 <cit.>, and PSPNet <cit.>, and one SOTA Transformer-based framework, SegFormer <cit.>, the proposed HiTrans framework has better performance on the HCC segmentation task (Tab. <ref>, Exp. 8). Among the FCNN segmentation frameworks, DeepLabV3 with a ResNet-50 backbone has the best performance (Tab. <ref>, Exp. 3) thanks to the stronger feature extractor and the Atrous Spatial Pyramid Pooling (ASPP) modules, which probe convolutional features at multiple scales while avoiding a large increase in network size. For SegFormer, the smaller variant SegFormer-B0 (Tab. <ref>, Exp. 6) performs better than the larger variant SegFormer-B2 (Tab. <ref>, Exp. 7), possibly due to the convergence difficulty of training larger Transformer-based models.
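For reference, the slide-level score used in these comparisons can be written in a few lines; the toy masks below merely stand in for the stitched whole-slide prediction and the whole tumor area annotation.

```python
import numpy as np

def jaccard_index(pred, target):
    """Jaccard index between a binary WSI prediction and the whole-tumor annotation."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    return np.logical_and(pred, target).sum() / union if union else 1.0

# toy stand-ins for the 10 stitched test WSIs and their annotations
rng = np.random.default_rng(0)
scores = [jaccard_index(rng.random((256, 256)) > 0.5, rng.random((256, 256)) > 0.5)
          for _ in range(10)]
print(f"average Jaccard over the test set: {np.mean(scores):.3f}")
```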
§.§ Ablation study The proposed hierarchical Transformer encoder framework can learn global dependencies and bring multi-scale cues to the decoder during segmentation inference. In the ablation study, we removed the hierarchical Transformer add-on module in order to evaluate this argument, namely that knowledge of global dependencies can enhance segmentation performance. Experiments without the add-on module using 512×512 (Tab. <ref>, Exp. 1) and 4096×4096 (Tab. <ref>, Exp. 2) patches were conducted separately. In addition, an experiment using an architecture without Transformer encoder 2 was conducted (Tab. <ref>, Exp. 3). Comparing the experimental results shows that taking larger patches as input and using HiTrans to learn the global dependencies lead to better segmentation results. Compared with the architectures in which the add-on module (Fig. <ref>, b) or Transformer encoder 2 (Fig. <ref>, c) is dropped, HiTrans further improves the precision (Fig. <ref>, d) thanks to its regional and global dependency-aware architecture. § CONCLUSIONS In this article, we introduce a hierarchical Transformer-based segmentation architecture, HiTrans, for entire HCC neoplasm segmentation. HiTrans can efficiently learn the regional and global dependencies within 4096×4096 WSI patches by encoding and decoding the WSI in a hierarchical fashion. The experimental results with a large real dataset demonstrate that HiTrans can lead to quantitatively and qualitatively better entire HCC neoplasm segmentation. In future studies, we aim to develop a robust slide-wise context-aware framework by leveraging different strategies for global dependency learning, such as graph-based neural networks. We will also explore applications to other tasks. § COMPLIANCE WITH ETHICAL STANDARDS This research study was conducted retrospectively using human subject data made available in open access by the PAIP 2019 Challenge. Ethical approval was not required, as confirmed by the license attached to the open access data. § ACKNOWLEDGMENTS This work was supported by the Data Intelligence Institute of Paris (diiP), IdEx Université Paris Cité (ANR-18-IDEX-0001), and the Translational Research Program in Cancerology INCa-DGOS - PRTK-2020, and was performed using HPC resources from GENCI-IDRIS (2022-AD011012825R1) made available by GENCI. Qitong Wang is funded by the China Scholarship Council.
http://arxiv.org/abs/2307.05602v1
20230710203508
Auxiliary Physics-Informed Neural Networks for Forward, Inverse, and Coupled Radiative Transfer Problems
[ "Roberto Riganti", "Luca Dal Negro" ]
cond-mat.dis-nn
[ "cond-mat.dis-nn" ]
AIP/123-QED Department of Physics, Boston University, 590 Commonwealth Avenue, Boston, Massachusetts 02215, USA [email protected] Department of Physics, Boston University, 590 Commonwealth Avenue, Boston, Massachusetts 02215, USA Department of Electrical Computer Engineering, and Photonics Center, Boston University, 8 Saint Mary’s Street, Boston, Massachusetts 02215, USA Division of Materials Science Engineering, Boston University, 15 St. Mary’s street, Brookline, MA 02446,USA In this paper, we develop and employ auxiliary physics-informed neural networks (APINNs) to solve forward, inverse, and coupled integro-differential problems of radiative transfer theory (RTE). Specifically, by focusing on the relevant slab geometry and scattering media described by different types of phase functions, we show how the proposed APINN framework enables the efficient solution of Boltzmann-type transport equations through multi-output neural networks with multiple auxiliary variables associated to the Legendre expansion terms of the considered phase functions. Furthermore, we demonstrate the successful application of APINN to the coupled radiation-conduction problem of a participating medium and find distinctive temperature profiles beyond the Fourier thermal conduction limit. Finally, we solve the inverse problem for the Schwarzschild-Milne integral equation and retrieve the single scattering albedo based solely on the knowledge of boundary data, similar to what is often available in experimental settings. The present work significantly expands the current capabilities of physics-informed neural networks for radiative transfer problems that are relevant to the design and understanding of complex scattering media and photonic structures with applications to metamaterials, biomedical imaging, thermal transport, and semiconductor device modeling. Auxiliary Physics-Informed Neural Networks for Forward, Inverse, and Coupled Radiative Transfer Problems L. Dal Negro August 12, 2023 ======================================================================================================== § INTRODUCTION Over the past few years, there has been a growing interest in developing deep learning (DL) and artificial intelligence (AI) algorithms for electromagnetic wave engineering, metamaterials design, and radiative transport problems<cit.>. Rapidly emerging approaches include training artificial neural networks (ANNs) to solve complex inverse problems, parameter estimation in structured photonic environments, and in strongly scattering media<cit.>. Although successfully demonstrated with respect to several inverse design problems, traditional methods remain essentially data-driven techniques and require time-consuming training steps and massive datasets<cit.>. In order to improve on purely data-driven methods, it is essential to constrain and regularize them by leveraging the underlying physics of the investigated problems, thus relaxing the burden on training and data acquisition. Building on the firm foundation of the universal approximation theorem for multi-layer ANNs<cit.>, physics-informed neural networks (PINNs) have recently emerged as a powerful framework for the efficient solution of both forward and inverse problems mathematically described by partial differential equations (PDEs) of integer or fractional orders<cit.>. The approach of PINNs has been successfully applied to a number of differential problems in engineering ranging from Navier-Stokes fluid dynamics, solid mechanics, and thermal transport<cit.>. 
Moreover, PINNs have shown remarkable results and noise robustness in the solution of electromagnetic inverse problems for metamaterials design, radiative transfer, imaging, and in the parameter retrieval of resonant photonic nanostructures<cit.>. However, the solution of Boltzmann-type, integro-differential transport equations using PINNs still poses significant challenges due to the need to resort to numerical quadrature methods such as Gauss-Legendre or Gauss-Chebyshev for the approximation of the integral terms<cit.>. Such methods add computational complexity and inevitably introduce quadrature errors in the numerical solutions<cit.>. In order to eliminate such problems, a new PINN framework called auxiliary physics-informed neural networks (APINNs) was recently introduced by Yuan et al.<cit.>. This approach allows one to recast integro-differential equations into equivalent differential ones through the introduction of a network architecture containing additional auxiliary variables at its output, each corresponding to an integral term in the original, constrained by suitable relations. Therefore, the APINN formulation avoids the numerical approximation of integrals that are instead directly "guessed" by the network at a minimal cost, significantly improving both the numerical accuracy and computational efficiency compared to traditional PINNs. In this paper, we develop a general APINN framework for solving relevant forward and inverse integro-differential transport equations of radiative transfer theory, which is a domain of vital importance in science and engineering with applications to complex photonic devices, medical imaging, metamaterials, thermal transport, as well as astrophysics, climate dynamics, and nuclear engineering<cit.>. In particular, we address and demonstrate APINN formulations for the accurate solution of forward, inverse, and coupled radiation-conduction problems of radiative transport in the relevant slab geometry for different choices of scattering phase functions. Our paper is organized as follows: in Section <ref>, we will provide a brief introduction to the radiative transfer equation (RTE), along with a description of the general APINN employed throughout this paper. In Section <ref>, we discuss forward problems for different phase functions governing the scattering processes. Specifically, we present benchmarked solutions for isotropic, Rayleigh, and Henyey-Greenstein scattering phase functions that are often utilized in engineering applications <cit.>. In Section <ref>, we discuss the APINN solution of a coupled radiation-conduction problem, enabling the accurate description of radiation transfer in a partecipating medium. Lastly, in Section <ref>, we show the solution of a canonical inverse problem described by the Schwarzschild-Milne integral equation, and we show that the radiative intensity solution and the single scattering parameters are accurately retrieved solely based on intensity data at the boundaries of the slab. Our work shows that APINNs possess the flexibility, accuracy, and robustness required to become a powerful tool for inverse scattering and thermal transport modeling beyond the limitations of Fourier theory. Therefore, this work expands significantly upon the current capabilities and range of applications of PINNs methods and paves the way to the study of higher-dimensional transport problems in strongly scattering media with applications to nanophotonics, metamaterials, biomedical imaging, and optoelectronic device modeling. 
§ APINNS FOR RADIATIVE TRANSFER PROBLEMS The framework of radiative transfer theory for the study of complex scattering media was originally developed in astrophysics as a way to quantitatively describe the radiative equilibrium in interstellar clouds, planetary and stellar atmospheres<cit.>. Radiative transfer theory has found a very wide range of applications beyond astrophysics, including biomedical optics<cit.>, atmospheric science<cit.>, radiation hydrodynamics<cit.> and remote sensing<cit.>. For example, the propagation of light through fogs and clouds, white paints or paper, milky and turbid liquids, human tissue, and the brain can be adequately described by the classical theory of radiation transfer that we discuss in this paper using APINNs. The radiation transfer theory is founded upon the RTE, which is a Boltzmann-type integro-differential equation expressing the detailed energy balance for the propagation of directed energy flow, or radiance, through a multiply scattering discrete random medium. For scalar waves in three spatial dimensions the RTE can be written as follows: 1/c∂ I(r,ŝ,t)/∂ t =-ŝ·∇ I(r,ŝ,t)-(κ+σ) I(r,ŝ,t)+ σ∫_4πI(r,ŝ',t)p(ŝ',ŝ)dΩ'+S(r,ŝ,t) where κ and σ are the absorption and scattering coefficients, respectively. Here S(r,ŝ,t) denotes a generic source term and p(ŝ',ŝ) is the phase function describing the angular distribution of the scattering process. Alternatively, after introducing the optical thickness τ and the single scattering albedo ω as: τ(S) = ∫_S'=0^Sβ(S')dS' =∫_S'=0^S[κ(S')+σ(S')]dS' ω = σ/β =σ/κ+σ one can rewrite Eq. <ref> in the alternative form: 1/β c∂ I(τ,ŝ,t)/∂ t =-ŝ·∇_τ I(τ,ŝ,t)-I(τ,ŝ,t)+ ω∫_4πI(τ,ŝ',t)p(ŝ',ŝ)dΩ'+S(τ,ŝ,t) which is the RTE in its standard form. For a detailed discussion and derivation of the RTE, we refer the reader to references chandrasekhar_radiative_2016,howell_thermal_2020,modest_radiative_2021. In essence, the RTE states that a directed beam of light in a uniform random medium loses energy through divergence and extinction, including both absorption and scattering away from the beam (i.e., out-scattering contributions), and it gains energy from radiation sources, fluorescence or scattering events that redirect it towards the beam (i.e., in-scattering contributions). In the standard formulation, wave interference effects, polarization and non-linearity in the medium are neglected. Radiative transfer theories for vector waves have also been developed but are outside the scope of this work and more details on these subjects can be found in references mishchenko_multiple_2017, ishimaru_wave_1978. Even for the relevant slab geometry, the RTE introduced above is generally very difficult to solve<cit.>. Analytic solutions only exist for very simple cases while in many realistic situations, numerical methods such as Monte Carlo transport simulations are usually employed<cit.>. For this reason, the RTE is often approximated, under suitable conditions, by the simpler but less accurate diffusion equation<cit.>. In our paper, we developed APINNs to obtain the forward and inverse solution of the scalar RTE in the steady-state and for different choices of phase functions. However, the developed framework can be naturally extended to time-dependent and vector RTE problems, anisotropic phase functions, and arbitrary nonlinear responses. All the implementations of the APINN algorithms developed in this paper are obtained in the powerful TensorFlow environment<cit.>. 
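To make the change of variables above concrete, the short sketch below converts assumed absorption and scattering profiles along a path into the optical thickness τ and single scattering albedo ω used in the remainder of the paper; the profiles and units are illustrative assumptions.

```python
import numpy as np

# Illustrative (assumed) absorption and scattering coefficient profiles along the path, in 1/cm
s     = np.linspace(0.0, 2.0, 401)            # geometric path length (cm)
kappa = 0.1 * np.ones_like(s)                 # absorption coefficient kappa(s)
sigma = 0.9 + 0.2 * np.sin(2 * np.pi * s)     # scattering coefficient sigma(s)

beta  = kappa + sigma                         # extinction coefficient
tau   = np.concatenate(([0.0],                # tau(S) = integral of beta from 0 to S
        np.cumsum(0.5 * (beta[1:] + beta[:-1]) * np.diff(s))))
omega = sigma / beta                          # single scattering albedo

print(f"total optical thickness tau_0 = {tau[-1]:.3f}")
print(f"albedo range along the path: {omega.min():.3f} to {omega.max():.3f}")
```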
The general APINN network utilized to solve forward and inverse RTE problems in the slab geometry is illustrated in Fig. <ref>. We considered a fully connected neural network (FCNN) with input vector x=(τ, μ) with randomly distributed values of the optical thickness τ and μ=cosθ over a two-dimensional spatial-angular domain Ω and output that is the predicted surrogate Î(μ,τ;θ̃) of the RTE solution I(μ,τ;θ). Here, θ denotes the angle of the directed energy flow with the axis z perpendicular slab's surface and θ̃ is the vector of weights and biases of our FCNN. In addition, the FCNN outputs n auxiliary variables v_i(μ,τ;θ̃), each corresponding to an integral expansion term in the RTE. The outputs of the APINN are then used to compute, by means of automatic differentiation (AD), the derivatives of Î(μ,τ;θ̃) and v_i(μ,τ;θ̃), along with the PDE, initial conditions, and boundary conditions, depending on the nature of the problem. Each calculated value is then combined into a term of the loss function ℒ(θ̃) defined as: ℒ(θ̃) = ℒ_int(θ̃;𝒩_int)+ℒ_b(θ̃;𝒩_b) + ℒ_aux(θ̃;𝒩_aux)+λ∑_iθ̃_i^2 In the expression above, ℒ_int(θ̃;𝒩_int) =1/|𝒩_int|∑_x∈𝒩_int|| f( x;Î,∂Î/∂τ,v_0,…, v_n ) ||^2 denotes the loss term calculated in the interior domain Ω and ℒ_b(θ̃;𝒩_b) = 1/|𝒩_b|∑_x∈𝒩_b|| ℬ(Î,x) ||^2 is the loss term for the boundary conditions of the RTE where x∈∂Ω. Moreover, ℒ_aux(θ̃;𝒩_aux) = 1/|𝒩_aux|∑_x∈𝒩_aux|| f( x;∂ v_0/∂μ,…, ∂ v_n/∂μ) ||^2 denotes the loss term associated to the auxiliary conditions that define the APINN model. 𝒩_int, 𝒩_b, 𝒩_aux denote the number of residual points for each loss term, and the last term in Eq. <ref> is an L2 regularization included in our simulations to avoid overfitting during training<cit.>. Table <ref> summarizes the training and APINN network parameters for the simulations studied throughout this paper. In the forward simulations of Section <ref>, we decided to analyze RTE problems in the slab geometry with different scattering phase functions of ever-increasing terms in the Legendre series expansion, resulting in an increasing number of integrals in the RTE, while keeping the general network and training parameters the same. The Legendre series expansion of the RTE phase function will be discussed in detail in Section <ref>. We thus start from the Schwarzschild-Milne equation, whose RTE has only one integral, and its corresponding APINN requires only one auxiliary variable. Then, we study the RTE with the Rayleigh phase function, whose Legendre expansion has two non-zero terms, resulting in two auxiliary outputs in the network. Finally, we study the Henyey-Greenstein (HG) phase function, whose series expansion was truncated at the tenth term, introducing ten auxiliary variables in the APINN. This approach allowed us to present a reliable scaling analysis when APINN is employed to solve integro-differential problems with kernels whose series expansions converge at different speeds. In the next section, we start presenting our APINN results, and we begin by addressing the Schwarzschild-Milne equation in a slab. § RESULTS AND DISCUSSION §.§ Solutions of forward problems in a slab §.§.§ The Schwarzschild-Milne equation We first consider the time-independent radiative transfer problem in a slab governed by the RTE. As discussed by Howell<cit.>, this steady-state condition of the RTE is valid under the assumption that the radiation intensity is unaffected by photon time-of-flight effects, reducing Eq. 
<ref> to the form investigated here: μdI(τ,μ)/dτ + I(τ,μ)=ω/2∫_-1^1 I(τ,μ')Φ(μ,μ')dμ' When Φ(μ,μ')=1, the equation above becomes the well-known Schwarzschild-Milne integral equation describing isotropic scattering processes. The corresponding boundary conditions are<cit.>: I(0, μ) = I_0, 0<μ<1 I(τ_0,μ) = 0, -1<μ<0 In order to solve the Schwarzschild-Milne integral equation using the APINN framework, we recast it into an equivalent differential problem introducing the auxiliary variable v(μ;τ), which is constrained by the following system: μdI/dτ+I-ω/2v(1)=0 v(μ;τ)=∫_-1^μI(μ';τ)dμ' v(-1;τ)=0, dv/dμ(μ;τ)=I(τ,μ) We then train the APINN to solve the problem for different values of the albedo ω varying from 0.2 to 1.0. Table <ref> shows the speed and accuracy of our APINN implementation in solving the Milne problem. In the large scattering limit of ω≥ 0.9, APINN minimized the loss function with values that are two orders of magnitude lower and for a fraction of the time than for the equivalent geometry displayed in Ref. mishra_physics_2021, where a quadrature method was employed. Two representative APINN solutions for the spatial-angular distributions of the radiation intensity for τ_max=1.0 are displayed in Fig. <ref> (a) and (b). To benchmark our solutions using the tables calculated by Van de Hulst's in Ref. hulst_multiple_1656, we computed the zeroth moment or point-direction gain G(τ) of the radiative intensity, which is defined as<cit.>: G(τ) = ∫_-1^1 I(τ,μ) dμ Fig. <ref> (c) displays the validation data of G(τ) calculated by Van de Hulst and the solution from our network, showing an excellent agreement achieved by the APINN framework. This is further confirmed by the average relative error between the two solutions displayed in the last column of Table <ref>. Fig. <ref> (d) shows a comparison between the APINN and the standard PINN quadrature loss function to solve the same problem, as implemented in Ref. mishra_physics_2021. In this figure, we display the loss function versus the number of epochs for the three largest scattering values of ω. We can immediately notice that the quadrature solution is heavily affected in its performance by the scattering strength, and the L-BFGS-B solver terminates the training early because the loss function has already saturated to its minimum value and is not decreasing further. In contrast, the APINN's loss function monotonically decreases independently of ω. This result confirms the robustness, flexibility, and accuracy of the APINN framework in solving transport problems for strongly scattering media. In a variety of engineering applications, however, the material's response is not isotropic. Therefore, in Section <ref>, we employ the APINN framework to solve the RTE in a slab with an anisotropic Rayleigh scattering phase function. §.§.§ The Rayleigh scattering phase function The Rayleigh phase function is employed to study anisotropic light scattering processes in various fields, from optics to astronomy<cit.>. The phase function reads: p(cosθ) = 3/4(1+cosθ^2) and because the scattering from spherically symmetric particles is cylindrically symmetric with respect to the incoming direction, this symmetry holds after averaging over all possible orientations. 
Therefore, in these situations, the phase function depends on ϕ-ϕ' and one can compute this average resulting in the projected phase function<cit.>: p_0(μ,μ')=∫dϕ/2πdϕ'/2πp(μ,ϕ;μ',ϕ') Using the equality μ=cosΘ=n·n'=sinθsinθ'cos(ϕ-ϕ')+cosθcosθ' one obtains: p_0(μ,μ')=3/8(3-μ^2-μ'^2+3μ^2μ'^2) To facilitate the calculations and the auxiliary variable formulation of the APINN framework, one typically considers the expansion of the scattering phase function in Legendre polynomials: Φ(μ,μ') = ∑_ℓ=0^∞w_ℓP_ℓ(μ)P_ℓ(μ') Note that, for the Rayleigh phase function, the only nonzero w_ℓ terms are w_0=1.0 and w_2=0.1. Therefore, Eq. <ref> in a slab with Rayleigh scattering becomes μdI(τ,μ)/dτ + I(τ,μ)=ω/2∫_-1^1I(τ,μ')∑_ℓ=0^∞w_ℓP_ℓ(μ)P_ℓ(μ')dμ' and after rearranging terms and truncating the series expansion at ℓ=2 we get: μdI(τ,μ)/dτ + I(τ,μ)=ω/2 [w_0P_0(μ)∫_-1^1I(τ,μ')P_0(μ')dμ' +w_2P_2(μ)∫_-1^1I(τ,μ')P_2(μ')dμ'] Finally, we recast the problem by adding two auxiliary variables to the network with their respective constraints as follows: μdI(τ,μ)/dτ + I(τ,μ)=ω/2[w_0P_0(μ)v_0(1)+w_2P_2(μ)v_2(1)] v_0(μ;τ)=∫_-1^μI(τ,μ')P_0(μ')dμ' v_0(-1;τ)=0, dv_0/dμ(μ;τ)=I(τ,μ)P_0(μ) v_2(μ;τ)=∫_-1^μI(τ,μ')P_2(μ')dμ' v_2(-1;τ)=0, dv_2/dμ(μ;τ)=I(τ,μ)P_2(μ) Due to the lack of benchmark solutions for Rayleigh scattering in a slab, we decided to consider a physical system similar to the one studied by Mishra and Molinaro in Ref. mishra_physics_2021, namely the case where the single scattering albedo depends on the optical thickness τ of the material. In this case, Eq. <ref> becomes dI(τ,μ)/dτ + I(τ,μ)=ω(τ)/2[ w_0P_0(μ)v_0(1;τ) +w_2P_2(μ)v_2(1;τ)] To solve this problem, we train APINN with the parameters specified in Table <ref>, using 40 neurons per layer. The training for this solution took 12 minutes, and the final value of the loss function ℒ was 10^-6, demonstrating the adaptivity and flexibility of APINN in solving anisotropic scattering problems. Fig. <ref>(a) displays the APINN radiative intensity solution as a function of μ and the optical thickness. This result highlights the flexibility of APINN in finding the solution to an analytically intractable problem<cit.>. In turn, this motivates us to study the RTE with strongly anisotropic scattering properties modeled by the Henyey-Greenstein (HG) phase function. §.§.§ The Henyey-Greenstein phase function Here we consider the forward RTE problem in the slab with the Henyey-Greenstein (HG) phase function governing the scattering processes. The HG phase function finds applications in astrophysics, atmospheric optics, and biomedical imaging, and it depends on both the cosine of the incident angle and the asymmetry factor g∈[0,1] that appears in the equation below<cit.>: p(μ,g) = 1-g^2/(1-2gμ+g^2)^3/2 where μ = cosθ. In the limit of g→0, the HG phase function reduces to isotropic scattering, while in the limit of g→1, HG describes strongly anisotropic scattering events. As for the Rayleigh phase function, the HG phase function can be rewritten using the Legendre polynomials expansion in Eq. (<ref>). However, unlike the Rayleigh case, the Legendre expansion converges more slowly, and additional terms need to be included to achieve accurate numerical results: μdI(τ,μ)/dτ + I(τ,μ)=ω/2[ w_0P_0(μ)∫_-1^1I(τ,μ')P_0(μ')dμ' +w_1P_1(μ)∫_-1^1I(τ,μ')P_1(μ')dμ' +… +w_nP_n(μ)∫_-1^1I(τ,μ')P_n(μ')dμ'] where: w_ℓ = (2n+1)g^n In our numerical studies, we chose to benchmark the RTE with HG phase function, g=0.5, which allowed us to utilize Van de Hulst's tables as validation data<cit.>. 
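Before writing down the truncated APINN system, it is useful to check numerically how fast this expansion converges for g = 0.5; the sketch below evaluates the truncated series with weights w_ℓ = (2ℓ + 1)g^ℓ and compares it with the closed-form phase function.

```python
import numpy as np
from numpy.polynomial import legendre

g = 0.5
mu = np.linspace(-1.0, 1.0, 201)
p_exact = (1.0 - g**2) / (1.0 - 2.0 * g * mu + g**2) ** 1.5

for n_max in (2, 5, 10):
    w = [(2 * l + 1) * g**l for l in range(n_max + 1)]        # w_l = (2l+1) g^l
    p_trunc = legendre.legval(mu, w)                          # sum_l w_l P_l(mu)
    print(f"l <= {n_max:2d}: max |error| = {np.max(np.abs(p_exact - p_trunc)):.4f}")

# normalization check: (1/2) * integral of p over mu should be ~1
print(0.5 * np.trapz(p_exact, mu))
```

Keeping the terms ℓ = 0, …, 10 brings the worst-case truncation error well below one percent of the forward-peak value, which motivates the truncation adopted next.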
The polynomial expansion of the phase function was truncated after ten terms, introducing ten auxiliary variables and their corresponding constraint conditions in the simulation: μdI(τ,μ)/dτ + I(τ,μ)=ω/2[w_0P_0(μ)v_0(1;τ) +…+w_10P_10(μ)v_10(1;τ)] v_0(μ;τ)=∫_-1^μI(τ,μ')P_0(μ')dμ' v_0(-1;τ)=0, dv_0/dμ(μ;τ)=I(τ,μ)P_0(μ) … v_10(μ;τ)=∫_-1^μI(τ,μ')P_10(μ')dμ' v_10(-1;τ)=0, dv_10/dμ(μ;τ)=I(τ,μ)P_10(μ) Table <ref> provides a summary of the APINN training for this problem. Considering the larger number of auxiliary variables, we trained using 80 neurons per layer instead of 40. Similarly to the isotropic and Rayleigh cases, the loss function is minimized to extremely low values with a minor trade-off in speed due to the larger number of auxiliary variables in the system, as the second and third columns of Table <ref> demonstrate. The accuracy of these results, displayed in the last column of Table <ref>, confirms the versatility of the APINN framework, which excels in solving even strong anisotropic scattering problems. Fig. <ref> (c) shows a representative solution of the radiation intensity when ω=1.0, and Fig. <ref> (b) displays the benchmarked solutions for this problem by comparing the integrated radiative intensity G(τ) calculated from the APINN network with the Van de Hulst's data. These results open the doors for multiple biomedical, metamaterials, and nano-optics applications where the HG phase function is often utilized to model realistic scattering processes<cit.>. §.§ The coupled radiation-conduction problem of a participating medium We now apply our APINN method to the solution of a coupled problem in radiative transfer theory. Specifically, we consider a conducting and participating slab that couples to the radiation hitting the boundary in the steady-state. Such problems have been analyzed extensively in the literature<cit.>, but, to our knowledge, have never been solved using physics-informed neural networks. Here, we use the APINN framework to analyze this problem, where the slab's temperature profile is governed by a Poisson-like equation with a coupling term to the RTE<cit.>. We will further analyze how the conduction-radiation parameter N_CR affects the traditional Fourier temperature solution in the steady-state when significant temperature differences are imposed at the boundaries of the slab. The conduction-radiation parameter N_CR measures the ratio of conductive to radiative heat contributions in a given medium, and it is defined as<cit.>: N_CR=k β/4 k_B T^3 = k (κ + σ)/4 k_B T^3 For the simulations that follow, we chose to study coupled systems where N_CR varies from 10 (for N →∞, we get the Fourier limit) to 0.001 (for N → 0, radiative processes dominate). We consider the heat transfer problem due to conduction and radiation in a participating medium presented by Ref. moura_neto_introduction_2013 governed by the two following coupled integro-differential equations: d^2Θ/dτ^2-(1-ω)/N_CR[ Θ^4(τ)-1/2G(τ)]=0 μdI(τ,μ)/dτ + I(τ,μ)=H[Θ(τ)]+ω/2∫_-1^1 I(τ,μ')Φ(μ,μ')dμ' for 0<τ<1, -1≤μ≤1, Φ(μ,μ')=1, ω=0.9 where the temperature is being modeled by the normalized adimensional quantity Θ=T/T_1. The coupling terms are: G(τ)=∫^1_-1I(τ,μ)dμ, H[Θ(τ)]=(1-ω)Θ^4 and G(τ) is the zeroth moment of the intensity I(τ,μ). The boundary conditions are: I(0,μ)=1, μ∈(0,1], and I(1,μ)=0, μ∈[-1,0) Θ(0)=1 and Θ(1)=T_2/T_1 Since the problem involves two undetermined coupled functions, we modified the architecture of the APINN framework. The changes are illustrated in Fig. 
<ref>: the input parameters are passed to the radiative intensity network Î(τ,μ) with auxiliary variables as for the uncoupled cases discussed so far, but the spatial variable τ is also used to train simultaneously the adimensional temperature network Θ̂(τ). The coupled problem recasted in the APINN formalism reads: d^2Θ/dτ^2-(1-ω)/N_CR[ Θ^4(τ)-1/2v(1;τ)]=0 μdI(τ,μ)/dτ + I(τ,μ)=H[Θ(τ)]+ω/2v(1;τ) where we introduced the auxiliary variable v(μ;τ) and its corresponding conditions like in Eq. (<ref>): v(μ;τ)=∫_-1^μI(μ';τ)dμ', v(-1;τ)=0, dv/dμ(μ;τ)=I(τ,μ) By means of automatic differentiation, the outputs of the two networks are then used to compute the required PDE conditions, initial conditions, and boundary conditions, which are then incorporated into the coupled loss function: ℒ= ℒ_Î(τ,μ) + ℒ_Θ̂(τ) To solve this problem, we coupled the APINN network for the radiative intensity with a PINN estimating the dimensionless temperature Θ(τ), with parameters according to Table <ref>. Fig. <ref> shows the solutions for the coupled problem when two different temperature jumps are imposed at the rightmost boundary. Fig. <ref>(a) displays a ΔΘ=150K, whereas Fig. <ref>(b) a ΔΘ=270K. Moreover, we analyze the dimensionless temperature behavior when the conduction-radiation parameter N_CR decreases, as previously investigated in Refs. modest_radiative_2021, howell_thermal_2020. It is important to realize that both panels in Fig. <ref> display a beyond-Fourier behavior as N_CR decreases, demonstrating that the temperature profile is significantly affected by radiative scattering phenomena. Lastly, Table <ref> presents some relevant information regarding the APINN training. We note that, even for the coupled case, the APINN successfully minimizes both the temperature loss function ℒ_Θ̂(τ) and the radiative intensity loss function ℒ_Θ̂(τ) independently of the parameter N_CR. §.§ Inverse problem: retrieval of the albedo from the boundary data Finally, we present here the solution of an inverse problem of radiative transfer theory where we employ APINN to retrieve simultaneously the forward solution of the intensity I(τ,μ) and the single scattering albedo ω. We do not, however, introduce synthetic data everywhere in the domain, as it has been done previously in the literature<cit.>, but we limit ourselves to introducing two data points representing the integrated intensity G(τ) at the edges of the slab, simulating a lab environment with two detectors capturing integrated radiation entering and exiting the slab, respectively. The reason to present an inverse problem in such a fashion is to demonstrate the full potential and capabilities of physics-informed neural networks that, with no additional overhead and computing power, can solve a forward and parameter retrieval problem simultaneously. We thus modify the Schwarzschild-Milne equation for a slab discussed in an earlier section. In particular, Eq. (<ref>) is changed to include the unknown albedo parameter ω_θ: μdI/dτ+I-ω_θ/2v(1)=0 and the loss function in Eq. (<ref>) is modified to include the two synthetic detector data points at the boundaries of the slab: ℒ(θ,ω_θ) = ℒ_int(θ,ω_θ;𝒩_int)+ℒ_b(θ,ω_θ;𝒩_b) +ℒ_aux(θ,ω_θ;𝒩_aux) +ℒ_inv(θ,ω_θ;𝒩_inv) where ℒ_inv(θ,ω_θ;𝒩_inv)= 1/|𝒩_inv|∑_(τ,μ)∈𝒩_inv|| ∫^1_-1Î(τ,μ)dμ - G(τ)||^2= 1/2(|| ∫^1_-1Î(0,μ)dμ - G(0)||^2 + || ∫^1_-1Î(1,μ)dμ - G(1)||^2) Fig. <ref> displays the fast convergence of the retrieved APINN parameter ω_θ to the actual value ω. 
Each line corresponds to a different APINN training procedure during which the only data points added were G(0) and G(1), obtained from Van de Hulst's tables<cit.> and used to minimize ℒ_inv during the training process. Although the loss term with the two data points was not weighted differently from the interior or boundary terms, APINN achieved a precise inversion of the parameter of interest. In fact, as displayed in Table <ref>, the loss function converges independently of the albedo ω and with great precision, as shown by the relative error between the known albedo and the predicted APINN albedo ω_θ reported in the last column of Table <ref>. Therefore, APINN retrieved the correct parameter of interest ω_θ when only two data points were added during the training process. § CONCLUSIONS Throughout this paper, we have described different applications of APINN for solving the radiative transfer equation, which is a Boltzmann-type transport equation. We successfully solved forward problems in a slab with both isotropic and anisotropic scattering phase functions, irrespective of the albedo. The results presented improve upon previous attempts to use physics-informed neural networks for solving the RTE in both accuracy and speed<cit.>. Furthermore, we presented the solution of the first coupled radiation-conduction problem in a participating medium using the APINN framework and showed that the loss functions of the coupled neural networks quickly converged to a low minimum value below 10^-5. Our findings open the possibility of utilizing APINN to analyze higher-dimensional systems and discover more interesting physics, with applications to metamaterials and semiconductor device modeling. Finally, we solved an inverse problem following a setup that replicates an experimental setting with data points at the boundary of the system. It will be interesting in future studies to build on the APINN platform to address higher-dimensional coupled, inverse coupled, and strongly scattering forward systems with applications to biomedical imaging, nanophotonics, metamaterials, and thermal modeling of semiconductor devices. We acknowledge the support from the U.S. Army Research Office, RF-Center managed by Dr. J. Qiu (Grant #W911NF-22-2-0158). We thank professors Mike Kirby, Akil Narayan, and Shandian Zhe for useful discussions on this topic.
http://arxiv.org/abs/2307.05851v1
20230711235038
Transverse Single-Spin Asymmetry for Inclusive and Diffractive Electromagnetic Jets at Forward Rapidity in $p^{\uparrow}$+p Collisions at $\sqrt{s} = 200$ GeV and $510$ GeV at STAR
[ "Xilin Liang" ]
nucl-ex
[ "nucl-ex" ]
=6.05in =9.45in =-0.3in =-0.35in top=1.5cm, bottom=1.5cm, left=2.2cm, right=2.2cm, #1 #1 #1 #1 #1 #1 and #1 Submitted to #1 Abstract Presented PRESENTED AT Transverse Single-Spin Asymmetry for Inclusive and Diffractive Electromagnetic Jets at Forward Rapidity in p^↑+p Collisions at √(s) = 200 GeV and 510 GeV at STAR Xilin Liang, for the STAR Collaboration University of California, Riverside There have been numerous attempts, in the last decades, to understand the origin of the unexpectedly large transverse single-spin asymmetry (A_N) observed in inclusive hadron productions at forward rapidities in transversely polarized p^↑+p collisions at different center-of-mass energies (√(s)). The current theoretical frameworks aimed at explaining this puzzle include the twist-3 contributions in the collinear factorization framework, as well as the transverse-momentum-dependent contributions from the initial-state quark and gluon Sivers functions, and/or final-state Collins fragmentation functions. Besides, there are indications that the diffractive processes may contribute to the large A_N. We present the detailed investigations into the A_N for electromagnetic jets (EM-jets) produced in inclusive processes using the Forward Meson Spectrometer with transversely polarized p^↑+ p data at √(s) = 200 GeV collected in 2015 at STAR. We observe a negative value for the A_N of EM-jets in diffractive processes. This finding shows a different sign for A_N in inclusive processes and needs further theoretical input in order to be understood. Finally, we present the statistical projections of the A_N for inclusive and diffractive EM-jets utilizing p^↑+ p data at √(s) = 510 GeV collected in 2017 at STAR. This dataset allows for a substantial enhancement in statistical precision. DIS2023: XXX International Workshop on Deep-Inelastic Scattering and Related Subjects, Michigan State University, USA, 27-31 March 2023 < g r a p h i c s > § INTRODUCTION Transverse single-spin asymmetry, denoted by A_N, is also known as the left-right asymmetry of the particles produced with respect to the plane defined by the momentum and spin directions of the polarized beam. In recent decades, this asymmetry has been observed to be large for charged- and neutral-hadron production in polarized hadron-hadron collisions <cit.>. These observations stand in contrast to nearly zero A_N predicted by perturbative Quantum Chromodynamics in the hard scattering processes <cit.>. Two major frameworks provide potential explanations for such sizeable asymmetries. The first one introduces the transverse-momentum-dependent contributions from the initial-state quark and gluon Sivers functions and/or the final-state Collins fragmentation functions <cit.>. The Sivers effect shows that this asymmetry comes from the correlation between the proton spin and the parton's transverse momentum at the initial state <cit.>; while the Collins effect arises from the correlation between the spin of the fragmenting quark and the transverse momentum of the resulting hadron at the final state <cit.>. The second framework is based on the twist-3 contributions in the collinear factorization framework, which includes the contributions from the quark-gluon or gluon-gluon correlations and fragmentation functions <cit.>. 
Additionally, experimental measurements indicate that the significant A_N might arise from diffractive processes, according to the analyses of A_N for forward π^0 and electromagnetic jets (EM-jets) in transversely polarized proton-proton (p^↑+p) collisions at STAR <cit.>. In this proceeding, firstly, we present the preliminary results of A_N for inclusive EM-jets in p^↑+ p collisions at √(s) = 200 GeV based on the STAR 2015 dataset. These results explore the dependence of A_N on photon multiplicity, transverse momentum (p_T), and energy of the EM-jets. Furthermore, we present the preliminary result for A_N of diffractive EM-jets using p^↑+p collisions at √(s) = 200 GeV from the same dataset. Finally, we show the statistical projection plots for A_N of inclusive and diffractive EM-jets using p^↑+p collisions at √(s) = 510 GeV from STAR 2017 data. § ANALYSIS §.§ Experiment setup The measurements are conducted with the STAR experiment at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory. RHIC is the only polarized proton-proton collider in the world, which is able to provide transversely or longitudinally polarized proton-proton collisions at √(s) = 200 GeV and 500/510 GeV. The presented measurements and statistical projections are performed using high luminosity datasets with transversely polarized p^↑ + p collisions at √(s) = 200 GeV and 510 GeV, respectively. Their average beam polarizations are about 57% and 55%, and their integrated luminosities are about 52 pb^-1 and 350 pb^-1, respectively. The major detectors used for these analyses are the Forward Meson Spectrometer (FMS) and the Roman Pot (RP) detectors. The FMS serves as an electromagnetic calorimeter designed to detect photons, neutral pions, and η mesons. Located on the west side of the main STAR apparatus and about 7 meters away from the nominal interaction point, the FMS offers full azimuthal coverage and a pseudo-rapidity range of 2.6 to 4.2 <cit.>. The RP detectors are located on both sides, about 15.8 meters from the nominal interaction point along the beamline. Each side features two sets of RP detectors, separated by approximately 1.8 meters. Within each set, there is a package with 4 silicon strip detector planes (SSDs) located above and below the beamline <cit.>. §.§ Electromagnetic jet reconstruction and corrections The EM-jet is the EM component of a full jet. To reconstruct the EM-jets, first, the FMS clusters were formed by grouping adjacent towers with non-zero energies. Then, a shower shape fitting was performed for every cluster to obtain the FMS points as the photon candidates, which were used in EM-jet reconstruction for the analyses. Further information regarding the FMS photon candidates can be found in <cit.>. The anti-k_T algorithm was employed to reconstruct the EM-jets, with a resolution parameter of R = 0.7 <cit.>. The minimum p_T requirement for the EM-jets was determined by either the trigger threshold or a fixed threshold depending on the dataset being analyzed. The reconstructed EM-jet energy and p_T were first corrected by subtracting the contribution from the underlying event, which was estimated using the “off-axis” cone method <cit.>. In addition, the EM-jet kinematics were further corrected back to the “particle level” based on the simulation, in order to account for the detector response. This simulation framework was set up with PYTHIA 6 with Perugia 2012 Tune for the particle level event generation <cit.>. 
The generated events were then passed through the GEANT-based STAR detector simulation. §.§ Channels and event selection for inclusive and diffractive processes The channel through which inclusive EM-jets are studied is p^↑ + p → EM-jet + X. The presence of a rapidity gap between the RP and the FMS fulfilled the requirement for the diffractive processes. Consequently, diffractive events were identified by tagging the proton detected by the RP and identifying the EM-jets from the FMS. Two possible channels for the diffractive processes were considered: p^↑ + p → p + EM-jet + X and p^↑ + p → p + p + EM-jet + X. Both channels required exactly one proton detected in the RP on the west side. The former channel required no proton detected on the east side, while the latter required exactly one proton detected on the east side. The EM-jet reconstruction and correction procedures for inclusive and diffractive processes followed the methodology described in the previous section <ref>. Additional event selection criteria were applied to identify the diffractive events. Firstly, the number of tracks detected in the RP (RP tracks) had to match the expected number of protons for either possible channel of the diffractive processes. Moreover, these RP tracks were required to be properly reconstructed within the geometric acceptance of the RP. Then, the sum of the energy from the west-side RP track and the EM-jets, referred to as the sum energy, was not allowed to exceed a threshold. Finally, a cut based on the ADC value of the Beam-Beam Counter (BBC) <cit.> was employed: only events with BBC ADC values not exceeding the specified threshold were retained. These final two cuts significantly reduce the fraction of background events. More comprehensive information on these event selection criteria can be found in <cit.>. § RESULTS §.§ Analysis method The cross-ratio method was used to extract the A_N for both inclusive and diffractive processes, and the corresponding formulas are presented in Eqs. <ref> and <ref>. In both equations, A_raw represents the raw asymmetry obtained from the yields N^↑(↓)(ϕ) and N^↑(↓)(ϕ + π) observed at azimuthal angles ϕ and (ϕ + π) relative to the polarized beam direction for the spin up (down) state. The term P corresponds to the average polarization of the proton beam. The cosine fit of Eq. <ref> was applied to extract the A_N from the raw asymmetry. A_raw(ϕ) = [√(N^↑(ϕ) N^↓(ϕ + π)) - √(N^↓(ϕ) N^↑(ϕ + π))] / [√(N^↑(ϕ) N^↓(ϕ + π)) + √(N^↓(ϕ) N^↑(ϕ + π))] and A_raw(ϕ) = P A_N cos(ϕ). This method takes advantage of the detector's azimuthal symmetry and cancels effects from non-uniform detector efficiency and luminosity. §.§ Inclusive EM-jet A_N for p^↑+ p data at √(s) = 200 GeV Figure <ref> presents the preliminary results of the inclusive EM-jet A_N as a function of photon multiplicity, EM-jet p_T, and EM-jet energy. The A_N decreases as the photon multiplicity of the EM-jets increases. Notably, the EM-jets consisting of 1 or 2 photons exhibit the most pronounced asymmetry. The A_N for x_F < 0 (x_F is the longitudinal momentum fraction, x_F = 2p_L/√(s)) is found to be consistent with zero regardless of the photon multiplicity.
Figure: A_N of inclusive EM-jets at the FMS sorted by photon multiplicity, p_T, and energy bins. The lowermost panels display the average x_F values corresponding to each p_T bin. The black solid points represent the A_N values for x_F > 0 and the red hollow points depict the A_N values for x_F < 0.
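For illustration, the cross-ratio extraction and the cosine fit can be written in a few lines. The sketch below is not STAR analysis code: the spin-sorted yields, the 16-bin azimuthal binning, and the 57% beam polarization are made-up placeholder values used only to show how A_raw(ϕ) and A_N would be obtained from binned counts.

```python
import numpy as np
from scipy.optimize import curve_fit

def raw_asymmetry(n_up, n_dn, phi):
    """Cross-ratio raw asymmetry A_raw(phi) from yields binned in phi.

    n_up, n_dn : per-bin yields for beam-spin up / down.
    Bins are assumed to cover [0, 2*pi) uniformly with an even number of bins,
    so the bin at phi + pi is obtained by shifting half the bins.
    """
    shift = len(phi) // 2
    n_up_pi = np.roll(n_up, -shift)   # N^up(phi + pi)
    n_dn_pi = np.roll(n_dn, -shift)   # N^down(phi + pi)
    num = np.sqrt(n_up * n_dn_pi) - np.sqrt(n_dn * n_up_pi)
    den = np.sqrt(n_up * n_dn_pi) + np.sqrt(n_dn * n_up_pi)
    return num / den

def extract_an(phi, a_raw, polarization):
    """Fit A_raw(phi) = P * A_N * cos(phi); return A_N and its uncertainty."""
    model = lambda x, a_n: polarization * a_n * np.cos(x)
    popt, pcov = curve_fit(model, phi, a_raw)
    return popt[0], np.sqrt(pcov[0, 0])

# Toy example with hypothetical yields (16 azimuthal bins, P = 0.57).
phi = (np.arange(16) + 0.5) * 2 * np.pi / 16
rng = np.random.default_rng(0)
true_an, pol, base = 0.02, 0.57, 1e4
n_up = rng.poisson(base * (1 + pol * true_an * np.cos(phi)))
n_dn = rng.poisson(base * (1 - pol * true_an * np.cos(phi)))
a_n, a_n_err = extract_an(phi, raw_asymmetry(n_up, n_dn, phi), pol)
print(f"A_N = {a_n:.4f} +/- {a_n_err:.4f}")
```

Because each ratio pairs the yields at ϕ and ϕ + π across the two spin states, bin-by-bin detector efficiency and relative luminosity factors cancel, which is the advantage of the cross-ratio method noted above.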
In addition, the photon multiplicity dependent inclusive EM-jet A_N as a function of x_F are presented in Fig. <ref> (left). The inclusive EM-jet A_N exhibits an increasing trend as x_F increases, regardless of the photon multiplicity. Also, the A_N of the EM-jet consisting of 1 or 2 photons is the strongest. This finding aligns with the previous measurement at STAR, where the A_N of the isolated π^0 was observed to be higher than that of the non-isolated π^0 <cit.>. §.§ Diffractive EM-jet A_N for p^↑+ p data at √(s) = 200 GeV Figure <ref> (right) presents the preliminary result for diffractive EM-jet A_N as a function of x_F. We observe a non-zero diffractive EM-jet A_N with a significance of 3.3σ below 0 at forward rapidity. Moreover, a significant absolute A_N is observed at the high x_F region. However, the sign of the diffractive EM-jet A_N is negative, which stands in contrast to the inclusive EM-jet A_N in Fig. <ref> and <ref> (left). The A_N for x_F < 0 is found to be consistent with zero. More theoretical inputs are needed to understand the behavior observed in the diffractive results. §.§ Statistical projection for p^↑+ p data at √(s) = 510 GeV The ongoing analyses of A_N for both inclusive and diffractive processes are being conducted using data at √(s) = 510 GeV. This high luminosity dataset holds promising prospects for a more precise investigation of A_N in both inclusive and diffractive measurements. To illustrate the anticipated improvements, Fig. <ref> shows the statistical projection for the inclusive processes, while Fig. <ref> presents the statistical projection for the diffractive processes. These plots compare the data at √(s) = 200 GeV and 510 GeV. With the utilization of the √(s) = 510 GeV data, a significant improvement in the precision of A_N measurements is expected, resulting in a reduction in statistical uncertainty of about a factor of 3 for high energy and high photon multiplicity EM-jets for inclusive EM-jet A_N measurement, and more than a factor of 2 for diffractive EM-jet A_N measurement. § CONCLUSION We present the inclusive and diffractive EM-jet A_N using the FMS at STAR in p^↑ + p collisions at √(s)= 200 GeV. The A_N for inclusive EM-jets increased with x_F. Notably, the A_N with lower photon multiplicity for the inclusive processes was found to be larger. The A_N for the diffractive processes is non-zero with a significance of 3.3 σ. However, the sign of diffractive A_N is negative, which is opposite to that observed in the inclusive processes. Further theoretical inputs are needed to understand its underlying physics. Finally, with the higher luminosity data set for p^↑ + p collisions at √(s)= 510 GeV at STAR, a higher precision will be achieved for both the inclusive and diffractive EM-jet A_N. 99 LargeTSSA1D.L. Adams et al., Phys. Lett. B 261, 201(1991) LargeTSSA2B. I. Abelev et al. (STAR Collaboration), Phys. Rev. Lett. 101, 222001(2008) LargeTSSA3A. Adare et al. Phys. Rev. D 90, 012006 (2014) LargeTSSA4E.C. Aschenauer et al., arXiv:1602.03922 ZhanwenJ. Adam et al. (STAR Collaboration), Phys. Rev. D 103, 092009 (2021) pQCDG. L. Kane, J. Pumplin, and W. Repko. Phys. Rev. Lett. 41, 1689 (1978) SiversD. Sivers, Phys. Rev. D 41, 83 (1990) CollinsJ. Collins, Nucl Phys B 396 (1993) 161 Twist-3J.W. Qiu and G. Sterman, Phys. Rev. Lett. 67 2264 (1991) MigankaM.M. Mondal (STAR Collaboration) PoS (DIS2014) 216 FMSJ. Adam et al. (STAR Collaboration), Phys. Rev. D 98, 032013 (2018) RP1J. Adam et al. (STAR Collaboration), Phys. Lett. 
B 808 (2020) 135663 FASTJETM.Cacciari, G. P. Salam, and G. Soyez, Eur. Phys. J. C (2012) 72: 1896 UEB. B. Abelev et al. (ALICE Collaboration), Phys. Rev. D 91, 112012 (2015) PythiaT. Sjostrand, S. Mrenna, and P. Z. Skands, JHEP 05, 026 (2006) Tune2012Peter Z. Skands Phys. Rev. D 82, 074018 BBCC. A. Whitten Jr. (STAR Collaboration), AIP Conference Proceedings 980, 390 (2008) DIS 2022 proceeding X. Liang (STAR Collaboration) 10.5281/zenodo.7236716
http://arxiv.org/abs/2307.04222v1
20230709162845
Derandomizing Codes for the Binary Adversarial Wiretap Channel of Type II
[ "Eric Ruzomberka", "Homa Nikbakht", "Christopher G. Brinton", "David J. Love", "H. Vincent Poor" ]
cs.IT
[ "cs.IT", "math.IT" ]
Derandomizing Codes for the Binary Adversarial Wiretap Channel of Type II
This work is supported in part by the U.S. National Science Foundation under grants CNS-2128448, CNS-2212565, CNS-2225577, EEC-1941529, ITE-2226447 and by the Office of Naval Research under grant ONR N000142112472.
Eric Ruzomberka^1, Homa Nikbakht^1, Christopher G. Brinton^2, David J. Love^2 and H. Vincent Poor^1 (^1 Princeton University, ^2 Purdue University)
Abstract: We revisit the binary adversarial wiretap channel (AWTC) of type II in which an active adversary can read a fraction r and flip a fraction p of codeword bits. The semantic-secrecy capacity of the AWTC II is partially known, where the best-known lower bound is non-constructive, proven via a random coding argument that uses a large number (exponential in the blocklength n) of random bits to seed the random code. In this paper, we establish a new derandomization result in which we match the best-known lower bound of 1-H_2(p)-r, where H_2(·) is the binary entropy function, via a random code that uses a small seed of only O(n^2) bits. Our random code construction is a novel application of pseudolinear codes – a class of non-linear codes that have k-wise independent codewords when picked at random, where k is a design parameter. As the key technical tool in our analysis, we provide a soft-covering lemma in the flavor of Goldfeld, Cuff and Permuter (Trans. Inf. Theory 2016) that holds for random codes with k-wise independent codewords.
§ INTRODUCTION Consider a communication setting in which a sender Alice wishes to communicate a message to a receiver Bob by sending a sequence of bits over a noisy wiretap channel. The channel is controlled by an (active) adversary who can both read a fraction r ∈ [0,1] and flip a fraction p ∈ [0,1] of Alice's transmitted bits. In this setting, Alice's and Bob's communication goal under any adversarial strategy is two-fold: * (Reliability) Bob must decode Alice's message with small probability of error. * (Secrecy) The adversary must extract negligible information about Alice's message via its observation of Alice's sequence. Critically, we make no assumptions about the adversary's computational limitations, and thus, secrecy must be guaranteed in an information theoretic sense by “hiding” the message in the adversary's bit-limited observation. Furthermore, the adversary may choose the location of the bit reads and bit flips in an arbitrary manner using knowledge of Alice's and Bob's communication scheme. In the literature, the above setting is known as the binary adversarial wiretap channel of type II (denoted as (p,r)-AWTC II) <cit.>. Much is known about the fundamental limits of communication over the (p,r)-AWTC II. Roughly defined, the secrecy capacity of the (p,r)-AWTC II is the largest rate at which Alice and Bob can communicate while meeting the above goals under a given secrecy measure. The measure we focus on is semantic secrecy (SS) <cit.>, which is widely recognized as the cryptographic gold standard for evaluating secrecy <cit.>.
The SS capacity, denoted C(p,r), is partially known where the best-known lower bound <cit.> and upper bound <cit.> are max{1-H_2(p) - r,0 }≤ C(p,r) ≤ 1-H_2(p) - r - min_x ∈ [0,1] f(x) where H_2(·) is the binary entropy function and f(x) = H_2((2p-1)x+1-p) - H_2(p) - rH_2(x). Note that the two bounds are close for small r and tight for p=0. While the limits of communication over the (p,r)-AWTC II are mostly understood, less is known on how to construct efficient codes to achieve these limits. The proof of the lower bound (<ref>), as given in <cit.>, is non-constructive and follows an ordinary random coding argument in which codewords are chosen uniformly and independently from space {0,1 }^n where n is the blocklength of the code. As a tool for probabilistic constructions, the practical use of this random code distribution is limited. For example, to represent a code picked in this way, one would need to remember at least n 2^Rn random bits where R is the coding rate.[Additional random (seed) bits are needed if one considers codes with private randomness at the encoder (i.e., stochastic codes).] Thus, codes picked from a distribution with mutual independence property lack a succinct representation. Furthermore, the high degree of randomness used in the construction obscures insight into the structure of a good code. Without sufficient structure, efficient encoding and decoding algorithms are likely to be elusive. In this paper, we work towards an efficient code construction for the (p,r)-AWTC II by partially derandomizing the random code used in <cit.> to establish the lower bound (<ref>). We do so by relaxing the requirement that codewords be mutually independent and consider random codes with k-wise independent codewords for some positive integer k << n. We show that random codes under this weaker notation of independence can achieve the lower bound (<ref>) for some parameter k large enough but constant in n. As a result, these codes have both a more succinct representation and additional structure compared to random codes with mutually independent codewords. The approach we take is the following. We focus on a class of non-linear codes known as pseudolinear codes (precisely defined in Section <ref>), which was initially proposed by Guruswami and Indyk <cit.> outside of the AWTC setting. In the AWTC setting, pseudolinear codes have a number of nice properties, including succinct representations (i.e., O(k n^2) bits), efficient encoding algorithms, some linear structure, and k-wise independent codewords when chosen at random for a designable parameter k. We initiate the study of pseudolinear codes for achieving both secrecy and reliability in the wiretap setting. As our main result, we show that random pseudolinear codes achieve the best-known SS capacity lower bound (<ref>). Conversely, we show that non-linear codes are necessary to achieve this lower bound for some values of p and r. To prove our main result, we provide a new lemma on the soft-covering phenomenon <cit.> under random coding with k-wise independent codewords. § PRELIMINARIES, RESULTS & RELATED WORK §.§ Notation Unless stated otherwise, we denote random variables in uppercase font (e.g., X), realizations of random variables in lowercase font (e.g., x), and sequences in bold font (e.g., X, x). An exception to the above rules occurs when we denote codes: we denote random codes with script typeset (e.g., 𝒞) and realizations of random codes with calligraphic typeset (e.g., 𝒞). 
We denote the set of all possible distributions over a set 𝒳 as 𝒫(𝒳), and denote the uniform distribution over 𝒳 as Unif(𝒳). We denote that X is distributed as P ∈𝒫(𝒳) by writing X ∼ P. For PMFs P and Q such that supp(P) ⊆supp(Q) (absolute continuity), the relative entropy of P and Q is D(P||Q) ≜∑_x ∈supp(P) P(x) log_2 P(x)/Q(x). For α>0 and α≠ 1, the Rényi divergence of order α is D_α(P||Q) ≜1/α-1log_2 ∑_x ∈supp(P) P(x) (P(x)/Q(x))^α-1. Define the special case D_1(P||Q) ≜lim_α→ 1 D_α(P||Q) = D(P||Q). For an event 𝒜, we let 1{𝒜} denote the indicator of 𝒜. §.§ Setup Code: A (binary) code 𝒞_n of blocklength n is a subset of {0,1}^n. We will associate a code 𝒞_n with an encoding function x(·), which performs a mapping from the message space ℳ to codewords in {0,1}^n. As is common for wiretap codes, we will consider stochastic encoding in which x takes as argument a private random key w ∈𝒲 that is known only to Alice. Specifically, for a message rate R = log_2 |ℳ|/n and a (private) key rate R' = log_2 |𝒲|/n, an [n,R n,R' n] code 𝒞_n is a set 𝒞_n = {x(m,w): (m,w) ∈ℳ×𝒲} where we refer to x(w,m) as the (n-bit) codeword corresponding to message m and key w. In turn, a family of codes is a sequence {𝒞_n}_n=1^∞ where for each n≥1, 𝒞_n is an [n,Rn,R'n] code. Encoding/Decoding: For an [n,Rn,R'n] code 𝒞_n, probability mass function (PMF) P_M ∈𝒫(ℳ), a message M ∼ P_M and a private key W ∼Unif(𝒲) where M and W are independent, Alice encodes M into a codeword x(M,W) and transmits it over the channel. Subsequently, Bob receives a corrupted version of the codeword and performs decoding by choosing a message estimate M∈ℳ. We say that a decoding error occurs if M≠ M. The AWTC II: For a read fraction r ∈ [0,1] and an error fraction p ∈ [0,1/2], the adversary can observe rn bits and flip up to pn bits of x(M,W). The location of the read bits are indexed by a coordinate set 𝒮, which the adversary can choose from the set 𝒮 consisting of all subsets of [n] of size rn. In turn, the adversary observes Z = x(M,W,𝒮) where x(M,W,𝒮) denotes the rn bits of x(M,W) indexed at 𝒮, and subsequently, chooses the location of the bit flips. We emphasize that the location of the bit flips need not coincide with 𝒮. In general, the adversary can randomize its above choices by choosing a distribution on 𝒮 that can depend on the code, as well as a distribution on the bit flip locations that can depend on both the code and the observation Z. Secrecy: Define the semantic leakage as Sem(𝒞_n) = max_P_M ∈ P(ℳ), 𝒮∈𝒮 I_𝒮(M;Z) where I_𝒮(M;Z) denotes the mutual information between M ∼ P_M and Z = x(M,W,𝒮). In turn, a family of codes {𝒞_n}_n=1^∞ is said to be semantically-secret if Sem(𝒞_n) = 2^-Ω(n). We remark that this mutual-information based notation of SS is shown in <cit.> to be (asymptotically) equivalent to the operational definition of SS given in <cit.>. Further, SS is a stronger notation of secrecy than strong secrecy.[A family of codes is said to achieve strong secrecy if lim_n →∞max_𝒮∈𝒮I_𝒮(M;Z)=0 where the message distribution is fixed s.t. P_M ∼Unif(ℳ).] Reliability: The (maximum) probability of decoding error is defined as P^max_error(𝒞_n) = max_m ∈ℳℙ( M≠ m | M =m ) where the probability is taken w.r.t. the distribution of Alice's key and the worst-case distribution of the adversary's bit read/flip locations. A family of codes {𝒞_n}_n=1^∞ is said to be reliable if for any δ > 0, P_error(𝒞_n) ≤δ for large enough n. 
SS Capacity: The rate R>0 is said to be achievable over the (p,r)-AWTC II if there exists a family of codes {𝒞_n}_n=1^∞ (where for each n, 𝒞_n is an [n,Rn,R'n] code for some R'≥ 0) that is both semantically-secret and reliable. The SS capacity C(p,r) is the supremum of rates achievable over the (p,r)-AWTC II. §.§ Results Our first result is on the necessity of non-linear codes for achieving the SS capacity. We say that a [n,Rn,R'n] code 𝒞_n is linear[Examples of linear codes in the wiretap setting include Ozarow's and Wyner's linear coset coding scheme <cit.> and some polar code and LDPC code based schemes (e.g., <cit.>).] if there exists a generator matrix G ∈{0,1}^(R+R')n × n such that the codeword corresponding to any message m ∈ℳ≜{0,1}^Rn and key w ∈𝒲≜{0,1}^R'n is x(m,w) = [ m w ]G. A corollary of the following Theorem is that for any r ∈ (0,1] and p=0 (i.e., the channel to Bob is noiseless), linear codes cannot achieve SS capacity C(0,r) = max{ 1 - r,0 }. Let p =0, r ∈ (0,1], R > max{1-2r,0} and R' ∈ [0,1-R]. For large enough n, every linear [n,Rn,R'n] code 𝒞_n has either semantic leakage Sem(𝒞_n) ≥ 1 or probability of error P_error(𝒞_n) ≥ 1/2 over the (0,r)-AWTC II. Theorem <ref> can be extended to non-zero values of p. In particular, together with the lower bound (<ref>), Theorem <ref> implies that linear codes cannot achieve C(p,r) for either any p ∈ [0,1/2) and r∈ (0,1/2] such that H_2(p)<r, or any p ∈ [0,1/2] and r ∈ [1/2,1], except for the trivial case when C(p,r) is 0. A proof of Theorem <ref> is given in Section <ref>, which involves a specific construction of the adversary's coordinates 𝒮 together with the Plotkin bound to upper bound the minimum distance of a code. We remark that tighter distance bounds can be used in place of the Plotkin bound. For instance, if one uses the Elias-Bassalygo bound <cit.>, the rate lower bound in Theorem <ref> can be tightened to R > max{1 - H( 1- √(1-2r)/2),0 }. All bounds discussed thus far are plotted in Fig. <ref>. In light of Theorem <ref>, non-linear codes must be considered to achieve the lower bound (<ref>) for at least some values of p ∈ [0,1/2] and r ∈ [0,1]. We turn now to non-linear codes. For R∈(0,1], R' ∈ [0,1-R] and positive integers n and k, let H be the parity check matrix of any binary linear code with the following parameters: * blocklength 2^(R+R')n-1 * dimension 2^(R+R')n-1-ℓ for some ℓ = O(k(R+R')n) * minimum distance at least k+1. An [n,Rn,R'n,k] psuedolinear code 𝒞_n is any [n,Rn,R'n] code that satisfies the following two step encoding process. First, a message-key pair (m,w) ∈ℳ×𝒲 is mapped to the row of H^T indexed by (m,w), which we denote as h(m,w).[To account for the message-key pair (0,0), we define h(0,0) to be the all zeros vector.] Second, h(m,w) is linearly mapped to an n-bit codeword by some “generator” matrix G ∈{0,1}^ℓ× n, i.e., x(m,w) = h(m,w) G. Thus, the non-linearity of 𝒞_n is confined to the first stage of encoding. Towards the goal of derandomizing the random code of <cit.>, pseudolinear codes have the following three attractive properties <cit.>.[See <cit.> for further discussion of pseudolinear codes.] First, a pseudolinear code has a succinct representation as only ℓ n = O(k(R+R')n^2) bits are needed to describe the generator matrix. Second, encoding is computationally efficient if h(m,w) can be obtained in time polynomial in n for each (m,w) ∈ℳ×𝒲. 
For instance, we can let H be the parity check matrix of a binary Bose–Chaudhuri–Hocquenghem (BCH) code of design distance k+1, in which case H has an explicit representation and h(m,w) can be efficiently obtained by computing powers of a primitive (2^(R+R')n-1)-th root of unity from the extension field GF(2^(R+R')n), e.g., see <cit.>. Third, if we consider a random pseudolinear code by choosing the generator matrix G at random while fixing the parity check matrix H, then the codewords of the random code are uniformly distributed in {0,1}^n and k-wise independent, i.e., any subset of codewords of size k are mutually independent.[In contrast, random linear codes have codewords that are pair-wise (i.e., 2-wise) independent in non-trivial cases.] This final property is the key to showing that pseudolinear codes achieve the best-known lower bound of C(p,r). Let p ∈ [0,1/2] and r ∈ [0,1] such that 1-H_2(p)-r is positive. For any R < 1- H_2(p) - r and for large enough (but fixed) k, there exists a family pseudolinear codes {𝒞_n}_n=1^∞ (where for n≥ 1, 𝒞_n is an [n,Rn,R'n,k] pseudolinear code for some R'≥ 0) that is both reliable and semantically-secret. A proof of Theorem <ref> is provided in Section <ref>. The key technical tool in the proof is a new version of Wyner's soft-covering lemma which holds for codes with k-wise independent codeword. However, our version differs significantly from Wyner's <cit.>, which we state and prove in Section <ref>. Our version is closest to (and proved similarly to) the soft-covering lemma of Goldfeld, Cuff and Permuter <cit.>, which roughly states that if the key rate R' is larger than the mutual information between Alice's channel input and the adversary's observation, then a random code with mutually independent codewords satisfies an exponential number of secrecy constraints with probability at least 1-2^-2^Ω(n). Here, the double-exponential probability bound is important as it allows one to take a union bound over an exponential number of events. Our version of the lemma states that when we restrict the random code to a k-wise independent distribution, the same constraints hold with probability at least 1-2^-k Ω(n). Critically, while our probability bound tends to 1 more slowly than double-exponentially, it remains fast enough to take a union bound over an exponential number of events when k is large enough. §.§ Related Work Linear Codes and Semantic-Secrecy: Recall that Theorem <ref> states that linear codes cannot achieve the SS capacity for the (noiseless) (0,r)-AWTC II for any r ∈ (0,1]. Prior to this work, some special classes of linear codes were known to not achieve the SS capacity. In particular, Ozarow's and Wyner's linear coset coding scheme <cit.> does not achieve SS capacity of the (0,r)-AWTC II for any r ∈ (0,1]. extend_vWe provide a proof of this result in Appendix <ref>.We provide a proof of this result in the extended paper <cit.>. We remark that the necessity of non-linear codes for achieving the secrecy capacity is a product of the joint consideration of the semantic secrecy metric and the type II property of the wiretap channel. In contrast, linear codes are sufficient to achieve the weak secrecy capacity over the noiseless WTC II <cit.>. Furthermore, linear codes are sufficient to achieve both the weak and strong secrecy capacity of the noisy (but non-adversarial) WTC I <cit.>. 
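As a concrete illustration of the two-step pseudolinear encoding defined above, the following Python sketch (a toy example of our own, not the authors' construction) maps a message-key pair to the row h(m,w) of H^T and multiplies it by a random binary generator matrix G over GF(2). For simplicity it takes H to be a Hamming-code parity-check matrix, whose columns are all nonzero ℓ-bit vectors; this gives minimum distance 3 and hence only pairwise (k=2) independent codewords, whereas a BCH parity-check matrix of design distance k+1, as discussed above, would give general k. All rates and dimensions are placeholder toy values.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 12                 # blocklength (toy value)
R, Rp = 0.25, 0.25     # message rate and key rate (toy values)
m_bits = int(R * n)    # 3 message bits
w_bits = int(Rp * n)   # 3 key bits
ell = m_bits + w_bits  # parity-check redundancy; here ell = (R + Rp) * n

def h_row(index, ell):
    """Row of H^T indexed by the message-key pair.

    For the toy Hamming-code H, the column indexed by j is simply the binary
    expansion of j; index 0 gives the all-zeros vector, matching the paper's
    convention for (m, w) = (0, 0).
    """
    return np.array([(index >> j) & 1 for j in range(ell)], dtype=np.uint8)

# Random generator matrix: ell x n uniform bits -> only ell * n bits are needed
# to describe the whole code.
G = rng.integers(0, 2, size=(ell, n), dtype=np.uint8)

def encode(m, w):
    """Two-step pseudolinear encoding x(m, w) = h(m, w) G over GF(2)."""
    index = (m << w_bits) | w          # concatenate message and key bits
    return h_row(index, ell) @ G % 2

# Example: encode message m = 5 with a uniformly drawn private key w.
w = int(rng.integers(0, 2 ** w_bits))
print(encode(5, w))
```

Even in this toy case the succinct representation is visible: the code is fully described by the ℓ × n entries of G rather than by an exponentially large codeword table, and the only non-linearity is the indexing step that produces h(m,w).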
Code Constructions: Explicit (and efficient) constructions that achieve the best known lower bound of the (p,r)-AWTC II are not known in general, except for the special cases of p=0 <cit.> and r = 0 <cit.>. In the general case, one promising approach is use modular constructions, which combine an existing error-control code with an invertible extractor <cit.> or algebraic manipulation detection code <cit.>. However, constructing binary error-control codes that are both efficiently encodable/decodable and achieve the (reliability) capacity of the (p,r)-AWTC is an open problem. In contrast to the above modular constructions, pseudolinear codes offer a non-modular approach. Recently, random (and thus non-explicit) pseudolinear codes were shown to achieve the (reliability) capacity of the (p,r)-AWTC II <cit.>. § PROOF OF THEOREM <REF> Notation: For message rate R>0, key rate R'∈ [0,1-R], and blocklength n ≥ 1 define ℳ≜{0,1}^Rn and 𝒲≜{0,1}^R'n. For an [n,Rn,R'n] linear code 𝒞_n, let G denote the (R+R')n × n generator matrix of 𝒞_n, which can be partitioned such that G = [ G_M; G_W ] where G_M ∈{0,1}^Rn × n and G_W ∈{0,1}^R'n × n. In turn, the codeword corresponding to message m ∈ℳ and key w ∈𝒲 is x(m,w) = m G_M + w G_W. For a coordinate set 𝒮∈𝒮, let the matrices G_M(𝒮) and G_W(𝒮) denote the columns of G_M and G_W indexed by 𝒮, respectively. Using this notation, if Alice transmits codeword x(m,w) then the adversary observes z = m G_M(𝒮) + w G_W(𝒮). Preliminaries: Let 𝒞_n be an [n,Rn,R'n] linear code with generator matrix G. We make the following assumption. Without loss of generality (w.l.o.g.), we assume that G is full rank, i.e., rank(G) = (R+R')n. The claim being w.l.o.g. is roughly as follows: if G is not full rank, then either P^max_error(𝒞_n) ≥ 1/2 or both 𝒲 and G can be replaced with a smaller key set and full rank generator matrix, respectively, without changing the code. extend_vA detailed discussion is provided in Appendix <ref>.A detailed discussion is provided in the extended paper <cit.>. We remark that following Assumption <ref>, we have that rank(G_M) = Rn and rank(G_W)= R'n. Before proving the converse result (Theorem <ref>), we state a few preliminary results relating the semantic leakage to the rank of G_M(𝒮) and G_W(𝒮) for 𝒮∈𝒮. For a code 𝒞_n and coordinate set 𝒮∈𝒵, we denote the mutual information between M and Z as I_𝒮(M;Z) (where the dependency on 𝒞_n is implied). For 𝒮∈𝒮 and M uniformly distributed over ℳ, I_𝒮(M;Z) = rank( G(S) ) - rank(G_W(𝒮) ). Let 𝒮∈𝒮. We first characterize the joint PMF of M, W and Z, which we denote as P_M,W,Z. We drop the subscripts from the PMF P_M,W,Z and its marginal PMFs when the meaning is clear from the use of the realization variables m, w and z. For z∈{0,1}^rn and m ∈ℳ, we have that P(z|m) = ∑_w ∈𝒲 P(z,w|m) (a)=∑_w ∈𝒲 P(z|m,w) P(w) (b)= T_m,z 2^-R'n where (a) follows from the independence of M and W, (b) follows from W ∼Unif(𝒲), and where T_m,z≜∑_w ∈𝒲1{z = m G_M(𝒮) + w G_W(𝒮)}. To simplify (<ref>), define 𝒯≜{(m',z') ∈ℳ×{0,1}^rn: T_m',z'≥ 1 } and suppose that (m,z) ∈𝒯. By definition, there exists an w ∈𝒲 such that w G_W(𝒮) = m G_M(𝒮) + z. In turn, since the mapping G_W(𝒮):𝒲→{0,1}^rn is a linear transformation, there must be 2^nullity(G_W(𝒮)) number of w ∈𝒲 such that w G_W(𝒮) = m G_M(𝒮) + z where nullity(G_W(𝒮)) is the dimension of the null space of G_W(𝒮). By the rank-nullity theorem <cit.>, 2^nullity(G_W(𝒮)) = 2^dim(𝒲)-rank(G_W(𝒮)) = 2^R' n-rank(G_W(𝒮)). 
In turn, T_m,z = 2^R'n - rank(G_W(𝒮)) , (m,z) ∈𝒯 0, (m,z) ∉𝒯, and in turn, following (<ref>), P(z|m) = 2^ -rank(G_W(𝒮)), (m,z) ∈𝒯 0, (m,z) ∉𝒯. Repeating the above approach for the PMF of Z, one can show using the assumption that m is uniformly distributed over ℳ = {0,1}^Rn that P(z) = 2^ -rank(G(𝒮)), ∃ m ∈ℳ s.t. (m,z) ∈𝒯 0, ∀ m ∈ℳ, (m,z) ∉𝒯. Using the above PMFs, we evaluate the mutual information between M and Z: I_𝒮(M;Z) ≜∑_m ∈ℳ∑_z∈{0,1}^rn P(m,z) log_2 P(z|m)/P(z) (c)=∑_(m,z) ∈𝒯 P(m,z) log_2 2^rank(G(𝒮)) - rank(G_W(𝒮)) (d)=rank(G(𝒮)) - rank(G_W(𝒮)). where (c) follows from (<ref>), (<ref>), and P(m,z) = 0 ∀ (m,z) ∉𝒯, and (d) follows from ∑_(m,z) ∈𝒯 P(m,z) = 1. If R'+R ≤ r, then lim_n →∞Sem(𝒞_n) = ∞. Suppose that M is uniformly distributed and that R+R' ≤ r. Recall that G has rank (R+R')n (c.f. Assumption <ref>). Since R+R' ≤ r, there exists a 𝒮∈𝒮 such that rank(G(𝒮)) = rank(G) = (R+R') n. Let 𝒮 be this coordinate set. It follows that rank(G_W(𝒮)) = R'n, and in turn, I_𝒮(M;Z) = Rn following Lemma <ref>. In conclusion, Sem(𝒞_n) ≥ Rn. For the converse analysis, we will need the following version of the Plotkin bound <cit.>. Suppose that Ψ is an [n,Rn] code (not necessarily linear) with minimum distance d_min∈ (0,n/2]. Then for δ≜ d_min/n, R ≤ 1 - 2 δ + o(1) where the o(1) term tends to 0 as n tends to infinity. Converse (Proof of Theorem <ref>) Setup: Set p=0 and let r ∈ [0,1]. For any ϵ > 0, let R = max{1 - 2r,0 } + ϵ and let R' ∈ [0,1-R] such that R+R'>r (c.f. Corollary <ref>). In turn, we let 𝒞_n be an [n,Rn,R'n] linear code with generator matrix G. W.l.o.g., we assume that G is full rank (c.f. Assumption <ref>). Converse Attack: The adversary orchestrates it attack in two steps. First, the adversary chooses an index set 𝒱⊆ [n] of size (R+R')n such that all columns of G(𝒱) are linearly independent. Note that such a set exists following our assumption that G is rank (R+R')n. Second, the adversary chooses a coordinate set 𝒮^* ∈𝒮 to be a subset of 𝒱 that minimizes the rank of G_W(𝒮^*). Once Alice transmits her codeword x(M,W), the adversary reads the codeword bits Z = x(M,W,𝒮^*) corresponding to the coordinates 𝒮^* with corresponding mutual information I_𝒮^*(M;Z). Converse Analysis: The goal of the converse analysis is to show that I_𝒮^*(M;Z) ≥ 1. We remark that 𝒮^* is a strict subset of 𝒱 following the inequality r<R+R'. This fact together with the fact that all |𝒱| column of G(𝒱) are linearly independent implies that the rank of G(𝒮^*) is rn. In turn, following Lemma <ref>, I_𝒮^*(M;Z) = rn - rank(G_W(𝒮^*)). In the converse analysis, we show that rn - rank(G_W(𝒮^*)) ≥ 1. We proceed with the following dual code perspective. Consider G_W(𝒱) as the R'n × (R+R')n generator matrix of some [(R+R')n,R'n] linear code Ψ. In turn, let G_W^⊥(𝒱) denote the Rn × (R+R')n generator matrix of the [(R+R')n,Rn] dual code Ψ^⊥ of Ψ. By definition, G_W(𝒱) is the parity check matrix corresponding to the generator matrix G_W^⊥(𝒱). Let d^⊥_min denote the minimum distance of Ψ^⊥. By the definition of the parity check matrix (e.g., see <cit.>), there exists d^⊥_min linearly dependent columns of the parity check matrix G_W(𝒱). Hence, if d^⊥_min≤ rn, then the adversary's choice of 𝒮^* contains the indices of these d^⊥_min linearly dependent columns of G_W, i.e, the rank of G_W(𝒮^*) is bounded above by rn - 1. In turn, I_𝒮^*(M;Z) ≥ 1 via (<ref>). To complete the proof, we show that d^⊥_min≤ rn. 
Applying the Plotkin bound (Lemma <ref>) to the dual code Ψ^⊥, we have that R/R+R'≤ 1 - 2 δ^⊥ + o(1) for the distance parameter δ^⊥≜d^⊥_min/(R+R')n and where the o(1) term tends to 0 as n tends to infinity. In turn, for large enough n, d^⊥_min (d)≤R'n/2 + o(n) (e)≤2r - ϵ/2 + o(n) (f)< rn where (d) follows from a rearrangement of (<ref>), (e) follows from the setting of rate R = max{1 - 2r,0 } + ϵ and the trivial inequalities R+R' ≤ 1 and max{1-2r,0 }≥ 1- 2r, and (f) follows for large enough n. In conclusion, for large enough n, I_𝒮^*(M;Z) ≥ 1 and thus Sem(𝒞_n) ≥ 1. § A SOFT-COVERING LEMMA FOR K-WISE INDEPENDENT CODEWORDS Notation: In this section only, we consider a more general code model than that introduced in Section <ref>. For an alphabet 𝒰 which is not necessarily binary, a blocklength n and a (private) key rate R' > 0, we define an [n,R'n] code 𝒞_n as a subset of 𝒰^n of size |𝒞_n| = 2^R'n. We will often describe 𝒞_n by its set of codewords {u(w,𝒞_n) }_w ∈𝒲 for a key set 𝒲 = [2^R'n]. We introduce the soft-covering problem, depicted in Fig. <ref>. The problem setup is as follows. For a blocklength n ≥ 1, let 𝒞_n = {u(w,𝒞_n)}_w ∈𝒲 be an [n,R'n] code. Given a finite input alphabet 𝒰, an input distribution Q_U, a finite output alphabet 𝒱 and channel Q_V|U, consider the PMFs induced on the output sequence V∈𝒱^n when an input sequence U∈𝒰^n is sent through the n-shot memoryless channel Q_V|U^n: for v∈𝒱^n, * The PMF of V when U is drawn randomly from Q^n_U, i.e., Q_V(v) = Q_V^n(v) = ∑_u∈𝒰 Q^n_V|U(v|u) Q_U^n(u). * The PMF of V when U is the codeword u(W,𝒞_n) for W ∼Unif(𝒲), i.e., P^(𝒞_n)_V(v) ≜∑_w ∈𝒲 Q^n_V|U(v|u(w,𝒞_n)) 2^-Rn. The soft-covering problem asks how to design a code 𝒞_n such that the induced PMF 𝒫^(𝒞_n)_V is approximately Q_V^n in the limit as n tends to infinity. The following lemma states that if R' > I(U;V), then for any integer k large enough a random [n,R'n] code 𝒞_n with k-wise independent codewords each drawn from distribution Q_U^n results in P^(𝒞_n)_V≈ Q^n_V for large enough n. Recall that we denote random codes with script typeface (e.g., 𝒞_n) and we denote realizations of random codes with calligraphic typeface (e.g., 𝒞_n). Suppose that the random code 𝒞_n has k-wise independent codewords for some even integer k≥ 4, each drawn from a PMF Q_U^n for finite 𝒰. Let Q_V|U be any conditional PMF where |𝒱| is finite and let R' > I(U;V). There exists some γ_0 >0 and γ_1 >0 that depend only on R' and I(U;V) such that for large enough n ℙ_𝒞_n( D(P_V^(𝒞_n)|| Q^n_V) > 2^-γ_1 n) ≤ 2^(-k γ_0 + log_2 |𝒱|) n where we recall that D is the relative entropy. §.§ Overview of Proof of Lemma <ref> Setup: Let the blocklength n ≥ 1 and key rate R' > I(U;V), and let k be a positive integer that will be set later. In turn, let 𝒞_n be a random [n,R'n] code drawn from any distribution that has k-wise independent codewords each with marginal PMF Q^n_U. The proof of Lemma <ref> follows a two step approach. In the first step, the proof closely follows the proof outline of <cit.> in which we construct an upper bound on the relative entropy D(P_V^(𝒞_n) || Q^n_V) based on a typical set construction of n-symbol sequences. In the second step, the proof diverges from <cit.> to analyze how the relative entropy upper bound concentrates. This second step uses the k-wise independent property of the random code 𝒞_n. Define the information density of a scalar pair (u,v) ∈𝒰×𝒱 as i_Q_U,V(u;v) ≜log_2 Q_V|U(v|u)/Q_V(v). 
In turn, define the information density of an n-symbol sequence pair (u,v) ∈𝒰^n ×𝒱^n, i_Q^n_U,V(u;v) ≜∑_j=1^n i_Q_U,V(u_j;v_j). For ϵ > 0, define a typical set of n-symbol sequence pairs 𝒜_ϵ≜{ (u,v) ∈𝒰^n ×𝒱^n: i_Q^n_U,V(u;v) < (I(U;V)+ϵ)n }. Recall that for an [n,R'n] code 𝒞_n, the PMF P^(𝒞_n)_V is the PMF of V when U is a codeword drawn from the code 𝒞_n (c.f. (<ref>)). We split P^(𝒞_n)_V into two terms based on the typical set 𝒜_ϵ: for v∈𝒱^n, define P^(𝒞_n)_V,1(v) ≜ 2^-Rn∑_w ∈𝒲 Q^n_V|U(v|u(w,𝒞_n)) 1{(u(w,𝒞_n),v) ∈𝒜_ϵ}, and define P^(𝒞_n)_V,2(v) ≜ 2^-Rn∑_w ∈𝒲 Q^n_V|U(v|u(w,𝒞_n)) 1{(u(w,𝒞_n),v) ∉𝒜_ϵ}. By inspection, P^(𝒞_n)_V = P^(𝒞_n)_V,1 + P^(𝒞_n)_V,2; note that P^(𝒞_n)_V,1 and P^(𝒞_n)_V,2 may not be PMFs. We also define the ratios Δ^(𝒞_n)_V,1(v) ≜P^(𝒞_n)_V,1(v)/Q^n_V(v) and Δ^(𝒞_n)_V,2(v) ≜P^(𝒞_n)_V,2(v)/Q^n_V(v). We restate a result from <cit.> that bounds the relative entropy of P^(𝒞_n)_V and Q^n_V in terms of the introduced quantities. For every [n,R'n] code 𝒞_n, D( P^(𝒞_n)_V|| Q^n_V) ≤ H_2 ( ∑_v∈𝒱^n P^(𝒞_n)_V,2(v) ) + D( P^(𝒞_n)_V,1|| Q^n_V) + D( P^(𝒞_n)_V,2|| Q^n_V). We remark that the RHS of the inequality of Lemma <ref> is well defined if we extend the definition of relative entropy D(·||·) in the natural way to account for functions P^(𝒞_n)_V,1 and P^(𝒞_n)_V,2 which may not be PMFs. The following sufficient condition for Lemma <ref> follows from Lemma <ref>. Suppose that for some π_0 ∈ [0,1] and with probability at least 1-π_0 over the random code distribution, for some π_1>0 ∑_v∈𝒱^n P^(𝒞_n)_V,2(v) < 2^-π_1 n and Δ^(𝒞_n)_V,1(v) < 1 + 2^-π_1 n for all v∈𝒱^n. Then ℙ_𝒞_n( D( P^(𝒞_n)_V|| Q^n_V) ≥ q_n 2^-π_1 n) ≤π_0 where q_n = 2log_2 e + π_1 n + n log_2 ( max_v ∈supp(Q_V)1/Q_V(v)). extend_v Let π_1>0 and suppose that 𝒞_n is a realization of 𝒞_n such that both (<ref>) and (<ref>) hold. We bound each of the 3 terms in the inequality of Lemma <ref> using (<ref>) and (<ref>). Consider the first term. Following (<ref>) and the inequality[This inequality follows from an application of both the inequality x/1+x≤ln(1+x) for x>-1 and the definition of H_2(x).] H_2(x) ≤ x log_2 e/x for x ∈ [0,1], we have that H_2( ∑_v∈𝒱^n P^(𝒞_n)_V,2(v) ) ≤ H_2(2^-π_1 n) < 2^-π_1 n (log_2 e + π_1 n). Moving on to the second term, following (<ref>) and the inequality log_2(1+x) ≤ x log_2 e for x>0, we have that D(P^(𝒞_n)_V,1 || Q_V^n) ≜∑_v∈𝒱^n P^(𝒞_n)_V,1(v) log_2 Δ^(𝒞_n)_V,1 < ∑_v∈𝒱^n P^(𝒞_n)_V,1log_2 (1+2^-π_1 n) ≤log_2(1+2^-π_1 n) ≤ 2^-π_1 nlog_2 e. Moving to the last term, we will use the following inequality which uses the assumption that |𝒱| is finite: Δ^(𝒞_n)_V,2(v) ≜P^(𝒞_n)_V,2(v)/Q^n_V(v)≤max_v' ∈supp(Q^n_V)1/Q^n(v') = (max_v' ∈supp(Q_V)1/Q(v'))^n for all v∈𝒱^n. Following this inequality and (<ref>), we have that D(P^(𝒞_n)_V,2 || Q^n_V) ≜∑_v∈𝒱^n P^(𝒞_n)_V,2(v) log_2 Δ^(𝒞_n)_V,2 ≤∑_v∈𝒱^n P^(𝒞_n)_V,2(v) n log_2 ( max_v' ∈supp(Q_V)1/Q_V(v')) < 2^-π_1 n n log_2 ( max_v' ∈supp(Q_V)1/Q_V(v')). Combining the bounds (<ref>), (<ref>) and (<ref>) together with Lemma Lemma <ref>, the desired inequality (<ref>) immediately follows. A proof of Lemma <ref> is available in the extended version <cit.>. In the remainder of the proof of Lemma <ref>, we apply the framework of the sufficient condition (Lemma <ref>) and show that inequalities (<ref>) and (<ref>) hold with probability 1-π_0 over the distribution of 𝒞_n for a value π_0 = 2^-kΩ(n) + n log_2|𝒱| and some π_1 > 0. 
As the primary technical tools of the proof, we use the concentration inequalities of Schmidt, Siegel and Srinivasan <cit.> and Bellare and Rompel <cit.> for sums of k-wise independent random variables. §.§ Proof of Lemma <ref> First, we show that inequality (<ref>) holds with high probability over the random code 𝒞_n for some π_1>0. Consider the quantity ∑_v∈𝒱^n P^(𝒞_n)_V,2(v) = [0.93]∑_w ∈𝒲 2^-R'n∑_v∈𝒱^n Q^n_V|U(v|U(w,𝒞_n)) 1{(U(w,𝒞_n),v) ∉𝒜_ϵ} = [0.93]∑_w ∈𝒲 2^-R'nℙ_V∼ Q^n_V|U( (U(w,𝒞_n),V) ∉𝒜_ϵ| U = U(w,𝒞_n)) Note that (<ref>) is a sum of |𝒲|=2^R'n k-wise-independent terms following that the codewords of 𝒞_n are k-wise independent. For w ∈𝒲, the expectation of the w^th term in the sum of (<ref>) is 2^-R'n𝔼_𝒞_nℙ_V∼ Q^n_V|U( (U(w,𝒞_n),V) ∉𝒜_ϵ| U = U(w,𝒞_n)) (a)= 2^-R'nℙ_(U,V) ∼ Q^n_U,V( (U,V) ∉𝒜_ϵ) (b)= 2^-R'nℙ_(U,V) ∼ Q^n_U,V(i_Q^n_U,V(U;V) ≥ (I(U;V)+ϵ)n ) (c)= 2^-R'nℙ_(U,V) ∼ Q^n_U,V( 2^λ i_Q^n_U,V(U;V)≥ 2^λ(I(U;V)+ϵ)n) (d)≤ 2^-R'n( 𝔼_(U,V) ∼ Q_U,V[ 2^λ i_Q_U,V(U;V)]/2^λ(I(U;V)+ϵ))^n = 2^-λ( I(U;V) + ϵ - 1/λlog_2 𝔼_(U,V) ∼ Q_U,V[2^λ i_Q_U,V(U;V)] )n - R'n (e)= 2^-λ( I(U;V) + ϵ - D_λ+1(Q_U,V||Q_U Q_V) )n - R'n = 2^-(α_λ,ϵ + R')n where (a) follows from the fact that U(w,𝒞_n) is distributed as Q^n_U, (b) follows from the definition of 𝒜_ϵ, (c) holds for any λ > 0, (d) follows from Markov's inequality and the product form of the joint PMF Q^n_U,V, (e) follows from the definition of Rényi divergence of order λ+1, and where α_λ,ϵ≜λ( I(U;V) + ϵ - D_λ+1(Q_U,V||Q_U Q_V) ). For ϵ>0, we remark that i) α_λ,ϵ tends to 0 as λ tends to 0, and ii) α_λ,ϵ is positive for small enough λ>0; these follow from the facts that D_λ+1(Q_U,V||Q_UQ_V) is a continuous and non-decreasing function of λ>0 and that D_1(Q_U,V||Q_UQ_V) = I(U;V). In the sequel, for a given ϵ>0, we let λ>0 be small enough such that α_λ,ϵ∈ (0,R'). Moving forward, we write α_λ,ϵ as simply α when the dependency on λ and ϵ is clear from context. Suppose that {T_w}_w ∈𝒲 are random variables that take values in [0,1], and define T ≜∑_w ∈𝒲 T_w and μ≜𝔼[T]. For τ > 0, if the variables are k-wise independent for some k ≥ k^*(|𝒲|,μ,τ) ≜⌈μτ/1 - μ/|W|⌉, then ℙ( T ≥μ(1+τ) ) ≤|𝒲| k^*(μ/|𝒲|)^k^*/μ(1+τ) k^*. Using the framework of Lemma <ref>, we set T_w for each w ∈𝒲 to be the w^th term in the sum of (<ref>), i.e., [0.95]T_w = 2^-R'nℙ_V∼ Q^n_V|U( (U(w,𝒞_n),V) ∉𝒜_ϵ | U = U(w,𝒞_n) ), and in turn, we have that T ≜∑_w ∈ T_w T_w = ∑_v∈𝒱^n P^(𝒞_n)_V,1(v). Note that the expectation μ≜𝔼_𝒞_n [T] is bounded above by 2^-α n following (<ref>). For a parameter β∈ (0,α) that will be set later, set τ such that μ(1+τ) = 2^(β-α)n. Before applying Lemma <ref>, we normalize the random variables {T_w}_w ∈𝒲 to optimize the parameter k^*. For some parameter θ∈ (0,1] which we will soon set, define T'_w = θ 2^R'n T_w and note that T'_w ∈ [0,1]. Similarly, define the normalized sum T' = θ 2^R'n T, its normalized expectation μ' = θ 2^R'nμ which is bounded above by θ 2^(R'-α)n, and note that μ'(1+τ)= θ 2^(R'+β-α). Now consider the quantity k^*(|𝒲|,μ',τ) as a function of θ, and let n be large enough and choose θ∈ (0,1] such that k^*(|𝒲|,μ',τ) is equal to k; such a choice exists for fixed k and large enough n since k^*(|𝒲|,μ',τ) ≥μ' τ = θ 2^(R'+β-α)n - μ' ≥θ 2^(R'-α)n(2^β n-1) is tending larger than k for fixed θ > 0 as n tends to infinity following α < R'. We apply Lemma <ref> to the normalized random variables {T'_w}_w ∈𝒲. 
We have for large enough n [0.90]ℙ_𝒞_n( ∑_v∈𝒱^n P^(𝒞_n)_V,2(v) ≥ 2^(β-α)n) = ℙ_𝒞_n( T ≥ 2^(β - α)n) (f)=ℙ_𝒞_n(T' ≥θ 2^(R+β-α)n) (g)≤2^R'n k( μ'/2^R'n)^k/θ 2^(R'+β-α)n k (h)≤k^k/k!( μ'/θ 2^(R'+β-α)n)^k (i)≤k^k/k! 2^-k β n where (f) follows from the normalization T' = θ 2^R'n T, (g) follows for large enough n from Lemma <ref> and the choice of θ such that k^*=k, (h) follows from the inequalities m^k/k^k≤m k≤m^k/k! for any 1 ≤ k ≥ m, and (i) follows from the bound μ' ≤θ 2^(R'- α). Next, we show that inequality (<ref>) holds with high probability over the random code 𝒞_n. For v∈𝒱^n, expand Δ^(𝒞_n)_V,1(v): Δ^(𝒞_n)_V,1(v) ≜P^(𝒞_n)_V,1(v)/Q^n_V(v) = ∑_w ∈𝒲 2^-R'nQ^n_V|U(v| U(w,𝒞_n))/Q^n_V(v)1{(U(w,𝒞_n),v) ∈𝒜_ϵ}. Note that (<ref>) is a sum of |𝒲|=2^R'n k-wise independent terms following that the codewords of 𝒞_n are k-wise independent. For w ∈𝒲, the expectation of the w^th term in the sum of (<ref>) is 2^-R'n𝔼_𝒞_n[ Q^n_V|U(v|U(w,𝒞_n))/Q^n_V(v)1{(U(w,𝒞_n),v) ∈𝒜_ϵ}] (j)≤ 2^-R'n𝔼_𝒞_n[ Q^n_V|U(v|U(w,𝒞_n))/Q^n_V(v)] (k)= 2^-R' n∑_u∈𝒰^n Q^n_U(u) Q^n_V|U(v|u)/Q^n_V(v) = 2^-R'n where (j) follows from the trivial bound 1{·}≤ 1 and (k) follows from the distribution of codeword U(w,𝒞_n) ∼ Q^n_U. Let k ≥ 4 be an even integer. Suppose that {T_w}_w ∈𝒲 are k-wise independent random variables that take values in [0,1], and define T ≜∑_w ∈𝒲 T_w and μ≜𝔼[T]. For any τ > 0, ℙ(T ≥μ(1+τ)) ≤ 8 ( k μ + k^2/(μτ)^2)^k/2. Using the framework of Lemma <ref>, fix v∈𝒱^n and set T_w for each w∈𝒲 to be [0.94]T_w = 2^(-I(U;V)-ϵ)n(Q^n_V|U(v | U(w,𝒞_n))/Q^n_V(v)) 1{(U(w,𝒞_n),v) ∈𝒜_ϵ} which coincides with the w^th term in the sum of (<ref>) normalized by the factor 2^(R' - I(U;V) - ϵ)n. This normalization factor was chosen to ensure T_w is bounded above by 1 which follows from that fact that for any (u,v) ∈𝒜_ϵ we have that Q^n_V|U(v|u)/Q^n_V(v) < 2^(I(U;V)+ϵ)n. Set T = ∑_w ∈𝒲T_w and note that μ≜𝔼_𝒞_n[T] is bounded above by 2^(R' - I(U;V)-ϵ)n following (<ref>) and the choice of normalization factor. Finally, set τ such that μ(τ+1) = 2^(R' - I(U;V)-ϵ)n(1+2^(β-α)n) and note that μτ = 2^(R'-I(U;V)-ϵ)n(1+2^(β-α)n) - μ≥ 2^(R'-I(U;V)-ϵ+β-α)n. Applying Lemma <ref>, we have that for for even integer k ≥ 4, small enough ϵ>0 and large enough n ℙ_𝒞_n( Δ^(𝒞_n)_V,1(v) ≥ 1 + 2^(β-α)n) = ℙ_𝒞_n( T ≥μ(1+τ)) (ℓ)≤ 8 ( k 2^(R' - I(U;V)-ϵ)n + k^2/2^2(R'-I(U;V)-ϵ+β-α)n)^k/2 (m)≤ 8 ( (k+1) 2^(R' - I(U;V)-ϵ)n/2^2(R'-I(U;V)-ϵ+β-α)n)^k/2 = 8 (k+1)^k/2· 2^-k η n. where (ℓ) follows from Lemma <ref> and the bounds μ≤ 2^(R'-I(U;V)-ϵ)n and μτ≥ 2^(R'-I(U;V)-ϵ+β-α)n, and (m) follows for small enough ϵ>0 and large enough n such that k 2^(R'-I(U;V)-ϵ)n >> k^2, and where η = R'-I(U;V)-ϵ +2(β-α)/2 In turn, by a simple union bound over all v∈𝒱^n, and by letting k ≥ 4 be an even integer, ϵ>0 be small enough and n be large enough, ℙ_𝒞_n( ∃v∈𝒱^n s.t. Δ^(𝒞_n)_V,1(v) ≥ 1 + 2^(β-α)n) ≤ 8k (k+1)^k/2· 2^-(k η_1 +log_2|𝒱|)n. To complete the proof, we put together the above results and apply the sufficient condition (Lemma <ref>). In the framework of Lemma <ref>, we set π_1 = α - β. If π_1>0, then it follows from Lemma <ref> that the inequalities (<ref>) and (<ref>) hold with probability at least 1-π_0 where π_0 = k^k/k!2^-k β n + 8k (k+1)^k/2· 2^(-kη+log_2 |𝒱|)n where the expression for π_0 follows from (<ref>) and (<ref>) together with a simple union bound. The last step is to show that for some choice of the free parameters ϵ>0, λ>0 and β∈ (0,α) we have that π_1 > 0 and π_0 = 2^-k Ω(n) + n log_2 |𝒱|. 
Recall that for a fixed ϵ>0, α = α_λ,ϵ tends to 0 as λ tends to 0, and α_λ,ϵ is positive for small enough λ>0. Furthermore, recall that R' > I(U;V) by assumption, and thus, η given by (<ref>) is positive for small enough ϵ>0, small enough α_λ,ϵ>0, and any β∈ (0,α_λ,ϵ). Thus, given even k≥ 4, we can pick ϵ>0 small enough, and in turn, pick λ>0 small enough such that both α_λ,ϵ and η_1 are positive. In turn, picking β∈ (0,α_λ,ϵ) ensures that α_λ,ϵ - β >0 and thus π_1>0. Thus, π_0 = 2^-kΩ(n) + log_2 |𝒱|. This completes the proof of Lemma <ref>. § PROOF OF THEOREM <REF> Setup: Let p ∈ [0,1/2] and r ∈ [0,1] such that 1 - H_2(p)- r > 0. For ϵ>0 and ϵ' ∈ (0,ϵ), let R = 1 - H_2(p) - r - ϵ and R' = r + ϵ'. Let k be a positive integer to be set in the proof. The goal of the proof is to show that for large enough k constant in n and for large enough n, there exists an [n,Rn,R'n,k] pseudolinear code 𝒞_n such that both Sem(𝒞_n) = 2^-Ω(n) and P^max_error(𝒞_n) = o(1). Encoding: Alice uses an [n,Rn,R'n] code 𝒞_n = {x(m,w) }_(m,w) ∈ℳ×𝒲 to encode her message M. That is, for a message distribution P_M ∈𝒫(ℳ), Alice draws M ∼ P_M and W ∼Unif(𝒲) and transmits x(M,W). Decoding: Upon receiving the channel output y, Bob performs min-distance decoding by choosing the message estimate m and key estimate w such that (m,w) = min_(m,w) ∈ℳ×𝒲 d_H(x(m,w),y) where d_H denotes the Hamming distance. §.§ Code Distribution We show the existence of a good code via a random coding argument. As our random code distribution, we will use the following distribution over [n,Rn,R'n,k] pseudolinear codes. Let F[n,Rn,R'n,k] be the distribution over all [n,Rn,R'n,k] pseudolinear codes where the parity check matrix H (c.f. Definition <ref>) is fixed and the generator matrix G is chosen uniformly from {0,1}^ℓ× n. The following property of F[n,Rn,R'n,k] is useful. The codewords of 𝒞_n ∼ F[n,Rn,R'n,k] are uniformly distributed over {0,1}^n and are k-wise independent. §.§ Secrecy Analysis For a given 𝒮∈𝒮, let Q^(𝒮)_Z denote the PMF of the adversary's observation Z∈{0,1}^rn when Alice sends a random n-bit sequence X∼ Q^n_X ≜Unif({0,1}^n) through the channel. We have that Q^(𝒮)_Z(z) = Q_X(𝒮)(z) = Q^rn_X(z), for all z∈{0,1}^rn. Furthermore, for an [n,Rn,R'n] code 𝒞_n, let P^(𝒞_n,𝒮)_M,Z denote the joint PMF of message M and observation Z when Alice sends the codeword x(M,W,𝒞_n) through the channel. Then for marginal PMF P_M ∈𝒫(ℳ), I_𝒞_n( M; Z) ≜ D ( P^(𝒞_n,𝒮)_M,Z || P_M P^(𝒞_n,𝒮)_Z) (a)=D(P^(𝒞_n,𝒮)_M,Z || P_M Q^(𝒮)_Z) - D(P^(𝒞_n,𝒮)_Z || Q^(𝒮)_Z) (b)≤ D(P^(𝒞_n,𝒮)_M,Z || P_M Q^(𝒮)_Z) ≤∑_m ∈ℳ P_M(m) max_m' ∈ℳ D(P^(𝒞_n,𝒮)_Z|M=m' || Q^(𝒮)_Z) = max_m ∈ℳ D(P^(𝒞_n,𝒮)_Z|M=m || Q^(𝒮)_Z) where (a) follows from the relative entropy chain rule and (b) follows from the property D(· || ·) ≥ 0. Thus, Sem(𝒞_n) = max_P_M ∈𝒫(ℳ), 𝒮∈𝒮 I_𝒞_n(M;Z) (c)≤max_𝒮∈𝒮max_m ∈ℳ D(P^(𝒞_n,𝒮)_Z|M=m || Q^(𝒮)_Z) (d)=max_𝒮∈𝒮max_m ∈ℳ D(P^(𝒞_n,𝒮)_Z|M=m || Q^rn_X) where (c) follows from (<ref>) and (d) follows from (<ref>). Consider the relative entropy D ( P^(𝒞_n,𝒮)_Z|M=m || Q^rn_X ) in the framework of the soft-covering lemma for k-wise independent codewords (Lemma <ref>), as illustrated in Fig. <ref>. Here, (m,W) is uniformly drawn from a message-key product set {m}×𝒲 of rate R'/r, i.e., |{m}×𝒲| = 2^R'n = 2^rn R'/r. Since rate R' / r = (r+ϵ')/r is greater than I(X;Z) = 1, it follows from Lemma <ref> that there exists γ_0>0 and γ_1>0 such that for even integer k≥ 4 and large enough n, ℙ_𝒞_n( D ( P^(𝒞_n,𝒮)_Z|M=m || Q^rn_X ) > 2^-γ_1 rn) ≤ 2^(-k γ_0 +1)rn. 
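For intuition, Bob's min-distance rule can be written as a brute-force search over all message-key pairs. The sketch below uses a hypothetical four-codeword table and received word (not a code from this paper) and simply returns the pair whose codeword is closest in Hamming distance, after which Bob keeps only the message estimate; it is meant to illustrate the rule, not to be an efficient decoder.

```python
import numpy as np

def min_distance_decode(codebook, y):
    """Brute-force minimum-Hamming-distance decoding.

    codebook : dict mapping (m, w) -> n-bit numpy array x(m, w)
    y        : received n-bit numpy array
    Returns the (m, w) pair whose codeword is closest to y.
    """
    return min(codebook, key=lambda mw: int(np.sum(codebook[mw] != y)))

# Hypothetical toy codebook with 2 messages and 2 keys over n = 6 bits.
codebook = {
    (0, 0): np.array([0, 0, 0, 0, 0, 0]),
    (0, 1): np.array([1, 1, 1, 0, 0, 0]),
    (1, 0): np.array([0, 0, 0, 1, 1, 1]),
    (1, 1): np.array([1, 1, 1, 1, 1, 1]),
}

y = np.array([0, 1, 0, 1, 1, 1])      # received word with one bit flipped
m_hat, w_hat = min_distance_decode(codebook, y)
print(m_hat, w_hat)                    # Bob keeps only the message estimate m_hat
```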
In turn, ℙ_𝒞_n( Sem(𝒞_n) > 2^-γ_1 rn) (e)≤ℙ_𝒞_n( max_𝒮∈𝒮max_m ∈ℳ D(P^(𝒞_n,𝒮)_Z|M=m || Q^rn_X) > 2^-γ_1 r n) ≤ℙ_𝒞_n( ⋃_𝒮∈𝒮⋃_m ∈ℳ{ D(P^(𝒞_n,𝒮)_Z|M=m || Q^rn_X) > 2^-γ_1 r n}) (f)≤ 2^(-k γ_0 r + r + R+1)n where (e) follows from (<ref>), and (f) follows for large enough n from a simple union bound, the inequality |𝒮| = n rn≤ 2^n and (<ref>). §.§ Reliability Analysis Unlike the above secrecy analysis, the reliability analysis requires additional structure of the code 𝒞_n beyond the k-wise independence property. In particular, we will use the pseudolinear structure of 𝒞_n. We restate a reliability result of <cit.> without proof. For a code 𝒞_n and a message m ∈ℳ, define the probability of decoding error conditioned on M=m as P^(m)_error(𝒞_n) ≜ℙ(M≠ m | M=m) where the probability is w.r.t. W ∼Unif(𝒲) and the adversary's choice of bit read/flip locations. Suppose that p ∈ (0,1/2) and r< 1 - H_2(p). If the key rate R' > r and the sum rate R+R' < 1 - H_2(p), then for large enough (but fixed) k and any fixed δ>0, there exists γ_2>0 such that for large enough n and any m ∈ℳ, ℙ_𝒞_n( P^(m)_error(𝒞_n) > δ) ≤ 2^-kγ_2 n. We apply Lemma <ref> to bound the maximum probability of error P^max_error(𝒞_n) ≜max_m ∈ℳ P^(m)_error (𝒞_n). Note that our choice of ϵ and ϵ' ensures that R'>r and R+R' < 1-H_2(p). Also, we have that R < 1 - H_2(p) - r. Thus, for δ > 0, ℙ_𝒞_n( P^max_error(𝒞_n) > δ) ≜ℙ_𝒞_n( max_m ∈ℳ P^(m)_error(𝒞_n) > δ) (g)≤∑_m ∈ℳℙ_𝒞_n( P^(m)_error(𝒞_n) > δ) (h)≤ 2^(-k γ_2 + 1 - H_2(p) - r)n where (g) follows from a union bound and (h) follows for large enough k and for large enough n via Lemma <ref>. §.§ Combining Secrecy and Reliability Analysis To complete the proof, we combine the secrecy and reliability analysis. For large enough k and k even, and for large enough n, ℙ_𝒞_n( {Sem(𝒞_n) > 2^-γ r n}∪{ P^max_error(𝒞_n) > δ}) ≤ 2^(-k γ_0 r + 2r + R)n + 2^(-k γ_2 + 1 - H_2(p) - r)n following both (<ref>), (<ref>) and a simple union bound. In summary, for large enough k and k even (which is constant in n) and large enough n, we have that (<ref>) is less than 1, and in turn, there exists an [n,Rn,R'n,k] pseudolinear code 𝒞_n such that Sem(𝒞_n) ≤ 2^-γ_1 r n and P^max_error(𝒞_n) ≤δ. § CONCLUSION We showed that random pseudolinear codes achieve the best known lower bound of the semantic secrecy capacity of the binary adversarial wiretap channel of type II. A necessary condition on the non-linearity of a capacity achieving code was also shown. One possible avenue for future research is to apply further derandomization techniques to our random codes, e.g., in the spirit of <cit.>. The goal here is to replace random pseudolinear codes with a significantly derandomized class that can maintain the same error-correction and secrecy power while being more amendable to efficient decoding algorithms. extend_v § LINEAR COSET CODING SCHEMES In this appendix, we prove that the linear coset coding scheme of Ozarow and Wyner <cit.> is not semantically-secret for any positive message rate. We first define coset coding. The linear coset coding scheme, proposed in <cit.>, is as follows: Let R>0 be the message rate. For blocklength n, let H be the Rn × n parity check matrix of some [n,n-Rn] binary linear code. Encoding: Suppose that Alice wants to transmit a message m ∈{0,1}^Rn. Alice encodes m by choosing the n bit codeword x randomly and uniformly from the set of solutions {x' ∈{0,1}^n: x' H^T = m} and transmits x over the (noiseless) (0,r)-AWTC II. 
Decoding: Upon receiving x, Bob performs decoding by choosing the message estimate m = xH^T. It is easy to show that the above linear coset coding scheme is an [n,Rn,(1-R)n] linear code. We prove the following result. Let rate R>0. For large enough n, any [n,Rn,(1-R)n] binary code 𝒞_n that is a linear coset coding scheme has semantic leakage Sem(𝒞_n) ≥ 1. For any R>0, let 𝒞_n be an [n,Rn,(1-R)n] binary code that is a linear coset coding scheme and let H be the corresponding Rn × n parity check matrix. Suppose that Alice's message is uniformly distributed over {0,1}^n. To prove Lemma <ref>, we will use the following result due to Ozarow and Wyner. For an index set ℐ⊆ [n], let H(ℐ) denote the |ℐ| columns of H indexed by ℐ. The adversary's equivocation is Δ≜min_𝒮∈𝒮 H(M|Z) = min_ℐ⊆ [n]: |ℐ| = (1-r)nrank( H(ℐ) ). Recall the following definitional inequalities: Sem(𝒞_n) ≥max_𝒮∈𝒮 I_𝒮(M;Z) = H(M) - min_𝒮∈𝒮 H(M|Z) = Rn - Δ. Thus, to show that Sem(𝒞_n) ≥ 1 for large enough n, it is sufficient to show that Δ≤ Rn -1. Let n be large enough and suppose by contradiction that Δ = Rn. By Lemma <ref>, we have that rank(H(ℐ)) = Rn for every set ℐ⊆ [n] s.t. |ℐ| = (1-r)n. This in turn by the definition of H implies that the [n,(1-R)n] binary code with parity check matrix H has minimum distance, denoted d_min, of at least Rn+1. However, by the Plotkin bound of Lemma <ref>, we have that 1-R ≤ 1 - 2 d_min/n + o(1), or equivalently, d_min≤Rn/2 + o(n). Thus, for n large enough such that the o(n) term is negligible, we have a contradiction. This completes the proof of Lemma <ref>. § DISCUSSION OF ASSUMPTION <REF> We show that if the generator matrix G of an [n,Rn] linear code 𝒞_n is not full rank, then either the probability of decoding error is large such that P^max_error(𝒞_n) ≥ 1/2 or both 𝒲 and G can be replaced with a smaller key set 𝒲' and generator matrix G', respectively, without changing the code. Let 𝒞_n be an [n,Rn] linear code and suppose that G is not full rank. Suppose that G_W is full rank. Since the channel is noiseless, Bob's received sequence is guaranteed to be a codeword in 𝒞_n. Suppose that Bob receives the codeword c∈𝒞_n. From Bob's perspective, the set of all possible message-key pairs that Alice could have sent is ℳ_c = { (m,w) ∈ℳ×𝒲: [ m w ] G = c} = { (m,w) ∈ℳ×𝒲: m G_M + w G_W = c}. Since the mapping G:{0,1}^(R+R')n→{0,1}^n is a linear transformation, the number of pairs in ℳ_c is |ℳ_c|= 2^nullity(G) = 2^(R+R')n - rank(G) where the second equality follows from the rank-nullity theorem. In turn, since rank(G) < (R+R')n, it follows that |ℳ_c| ≥ 2. Now consider two unique pairs in ℳ_c, say (m_1,w_1) and (m_2,w_2). We show that m_1 ≠ m_2 by considering 2 cases. (Case 1): Suppose that w_1 = w_2. Then m_1 ≠ m_2 by the uniqueness of the pairs. Done. (Case 2): Suppose instead that w_1 ≠ w_2. Since G_W is full rank, we have that (w_1+w_2)G_W ≠ 0. In turn, [m_1 w_1]G = [m_2 w_2]G implies that (m_1+m_2) G_M = (w_1+w_2) G_W ≠ 0, and thus, m_1 ≠ m_2. Done. In summary, upon receiving c, Bob finds that at least 2 messages could be Alice's message. Thus, for PMFs P_M = Unif(ℳ) and P_W = Unif(𝒲), P^max_error(𝒞_n) ≥ℙ_(M,W) ∼ P_M P_W( M≠ M ) = ∑_c∈𝒞_nℙ_(M,W) ∼ P_M P_W( M≠ M | Bob RXs c) 1/|𝒞_n| ≥ 1/2. Suppose instead that G_W is not full rank. Then each (R'n)-bit sequence in the rowspace of G_W corresponds to multiple (i.e., redundant) keys in 𝒲. 
Hence, we can eliminate this redundancy by shortening the key w from R'n bits to rank(G_W) bits and replacing G_W with a full-rank matrix G'_W that has rowspace(G'_W) = rowspace(G_W), without changing the code 𝒞_n.
http://arxiv.org/abs/2307.06312v1
20230712172005
Correlation-Aware Mutual Learning for Semi-supervised Medical Image Segmentation
[ "Shengbo Gao", "Ziji Zhang", "Jiechao Ma", "Zihao Li", "Shu Zhang" ]
cs.CV
[ "cs.CV" ]
Correlation-Aware Mutual Learning S. Gao and Z. Zhang et al. ^1 Deepwise AI Lab, Beijing, China ^2 School of Artificial Intelligence, Beijing University of Posts and Telecommunications [email protected] Correlation-Aware Mutual Learning for Semi-supervised Medical Image Segmentation Shengbo Gao 1⋆ Ziji Zhang 2⋆† Jiechao Ma 1 Zihao Li 1 Shu Zhang ^1 mailto:[email protected] August 12, 2023 ==================================================================================================================================================================================== Semi-supervised learning has become increasingly popular in medical image segmentation due to its ability to leverage large amounts of unlabeled data to extract additional information. However, most existing semi-supervised segmentation methods only focus on extracting information from unlabeled data, disregarding the potential of labeled data to further improve the performance of the model. In this paper, we propose a novel Correlation Aware Mutual Learning (CAML) framework that leverages labeled data to guide the extraction of information from unlabeled data. Our approach is based on a mutual learning strategy that incorporates two modules: the Cross-sample Mutual Attention Module (CMA) and the Omni-Correlation Consistency Module (OCC). The CMA module establishes dense cross-sample correlations among a group of samples, enabling the transfer of label prior knowledge to unlabeled data. The OCC module constructs omni-correlations between the unlabeled and labeled datasets and regularizes dual models by constraining the omni-correlation matrix of each sub-model to be consistent. Experiments on the Atrial Segmentation Challenge dataset demonstrate that our proposed approach outperforms state-of-the-art methods, highlighting the effectiveness of our framework in medical image segmentation tasks. The codes, pre-trained weights, and data are publicly available. [<https://github.com/Herschel555/CAML>] [⋆ Both authors contributed equally to this work.] [† Work done as an intern in Deepwise AI Lab] § INTRODUCTION Despite the remarkable advancements achieved through the use of deep learning for automatic medical image segmentation, the scarcity of precisely annotated training data remains a significant obstacle to the widespread adoption of such techniques in clinical settings. As a solution, the concept of semi-supervised segmentation has been proposed to enable models to be trained using less annotated but abundant unlabeled data. Recently, methods that adopt the co-teaching <cit.> or mutual learning <cit.> paradigm have emerged as a promising approach for semi-supervised learning. Those methods adopt two simultaneously updated models, each trained to predict the prediction results of its counterpart, which can be seen as a combination of the notions of consistency regularization<cit.> and entropy minimization<cit.>. In the domain of semi-supervised medical image segmentation, MC-Net <cit.> has shown significant improvements in segmentation performance. With the rapid advancement of semi-supervised learning, the importance of unlabeled data has garnered increased attention across various disciplines in recent years. However, the role of labeled data has been largely overlooked, with the majority of semi-supervised learning techniques treating labeled data supervision as merely an initial step of the training pipeline or as a means to ensure training convergence<cit.>. 
Recently, methods that can leverage labeled data to directly guide information extraction from unlabeled data have attracted the attention of the community<cit.>. In the domain of semi-supervised medical image segmentation, there exist shared characteristics between labeled and unlabeled data that possess greater intuitiveness and instructiveness for the algorithm. Typically, partially labeled clinical datasets exhibit similar foreground features, including comparable texture, shape, and appearance among different samples. As such, it can be hypothesized that constructing a bridge across the entire training dataset to connect labeled and unlabeled data can effectively transfer prior knowledge from labeled data to unlabeled data and facilitate the extraction of information from unlabeled data, ultimately overcoming the performance bottleneck of semi-supervised learning methods. Based on the aforementioned conception, we propose a novel Correlation Aware Mutual Learning (CAML) framework to explicitly model the relationship between labeled and unlabeled data to effectively utilize the labeled data. Our proposed method incorporates two essential components, namely the Cross-sample Mutual Attention module (CMA) and the Omni-Correlation Consistency module (OCC), to enable the effective transfer of labeled data information to unlabeled data. The CMA module establishes mutual attention among a group of samples, leading to a mutually reinforced representation of co-salient features between labeled and unlabeled data. Unlike conventional methods, where supervised signals from labeled and unlabeled samples are separately back-propagated, the proposed CMA module creates a new information propagation path among each pixel in a group of samples, which synchronously enhances the feature representation ability of each intra-group sample. In addition to the CMA module, we introduce the OCC module to regularize the segmentation model by explicitly modeling the omni-correlation between unlabeled features and a group of labeled features. This is achieved by constructing a memory bank to store the labeled features as a reference set of features or basis vectors. In each iteration, a portion of features from the memory bank is utilized to calculate the omni-correlation with unlabeled features, reflecting the similarity relationship of an unlabeled pixel with respect to a set of basis vectors of the labeled data. Finally, we constrain the omni-correlation matrix of each sub-model to be consistent to regularize the entire framework. With the proposed omni-correlation consistency, the labeled data features serve as anchor groups to guide the representation learning of the unlabeled data feature and explicitly encourage the model to learn a more unified feature distribution among unlabeled data. In summary, our contributions are threefold: (1)We propose a novel Correlation Aware Mutual Learning (CAML) framework that focuses on the efficient utilization of labeled data to address the challenge of semi-supervised medical image segmentation. (2)We introduce the Cross-sample Mutual Attention module (CMA) and the Omni-Correlation Consistency module (OCC) to establish cross-sample relationships directly. (3)Experimental results on a benchmark dataset demonstrate significant improvements over previous SOTAs, especially when only a small number of labeled images are available. § METHOD §.§ Overview Fig <ref> gives an overview of CAML. 
We adopt a co-teaching paradigm like MC-Net <cit.>, in which each of two parallel networks is trained to predict the prediction of its counterpart. To achieve efficient cross-sample relationship modeling and enable information propagation among labeled and unlabeled data in a mini-batch, we incorporate a Cross-sample Mutual Attention module into the auxiliary segmentation network f_a, whereas the vanilla segmentation network f_v retains the original V-Net structure. In addition, we employ an Omni-Correlation Consistency regularization to further regularize the representation learning of the unlabeled data. Details about these two modules are elaborated in the following sections. The total loss of CAML can be formulated as: L=L_s+λ_cl_c+λ_ol_o where l_o represents the proposed omni-correlation consistency loss, while L_s and l_c are the supervised loss and the cross-supervised loss implemented in the Cross Pseudo Supervision (CPS) module. λ_c and λ_o are the weights controlling l_c and l_o separately. During the training procedure, a batch of mixed labeled and unlabeled samples is fed into the network. The supervised loss is only applied to labeled data, while all samples are utilized to construct cross-supervised learning. Please refer to <cit.> for a detailed description of the CPS module and the design of L_s and l_c. §.§ Cross-sample Mutual Attention Module To enable information propagation through any position of any sample in a mini-batch, one could simply treat each pixel's feature vector as a token and perform self-attention over all tokens in a mini-batch. However, this would make the computational cost prohibitively large, as the complexity of self-attention is O(n^2) with respect to the number of tokens. We instead adopt two sequentially mounted self-attention modules along different dimensions to enable computationally efficient mutual attention among all pixels. As illustrated in Fig <ref>, the proposed CMA module consists of two sequential transformer encoder layers, termed E_1 and E_2, each including a multi-head attention block and an MLP block, with a layer normalization after each block. For an input feature map a_in∈ℝ^b× c× k, where k=h^'× w^'× d^', b represents the batch size and c is the dimension of a_in, E_1 performs intra-sample self-attention on the spatial dimension of each sample. This is used to model the information propagation paths between every pixel position within each sample. Then, to further enable information propagation among different samples, we perform inter-sample self-attention along the batch dimension. In other words, along the b dimension, the pixels located at the same spatial position across samples are fed into a self-attention module to construct cross-sample relationships. In CAML, we employ the proposed CMA module in the auxiliary segmentation network f_a, whereas the vanilla segmentation network f_v retains the original V-Net structure. The reasons are twofold. From the deployment perspective, the insertion of the CMA module requires a batch size larger than 1 to model the attention among samples within a mini-batch, which is not applicable during model inference (batch size = 1). From the perspective of model design, we give the vanilla and the auxiliary branch different architectures to increase architectural heterogeneity, which benefits performance in a mutual learning framework.
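To make the two-stage attention concrete, here is a minimal sketch of a CMA-style block in PyTorch. It is not the authors' implementation: the number of heads, the MLP width, and the use of nn.TransformerEncoderLayer as the encoder block are assumptions made for illustration; only the overall pattern (intra-sample spatial attention followed by inter-sample attention along the batch dimension) follows the description above.

```python
import torch
import torch.nn as nn

class CrossSampleMutualAttention(nn.Module):
    """Sketch of a CMA-style block: intra-sample spatial attention (E_1)
    followed by inter-sample attention along the batch dimension (E_2)."""

    def __init__(self, dim: int, num_heads: int = 4, mlp_ratio: int = 2):
        super().__init__()
        # Each stage: multi-head attention + MLP with layer normalization,
        # here realized with a standard transformer encoder layer (assumed choice).
        self.e1 = nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads,
                                             dim_feedforward=mlp_ratio * dim)
        self.e2 = nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads,
                                             dim_feedforward=mlp_ratio * dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (b, c, k) with k = h' * w' * d' flattened spatial positions.
        t = x.permute(2, 0, 1)     # (k, b, c): k spatial tokens per sample
        t = self.e1(t)             # E_1: intra-sample attention over spatial positions
        t = t.permute(1, 0, 2)     # (b, k, c): b samples per spatial position
        t = self.e2(t)             # E_2: inter-sample attention along the batch dimension
        return t.permute(0, 2, 1)  # back to (b, c, k)

# Toy example: 4 samples, 64-dim features on a 7x7x5 grid of positions.
features = torch.randn(4, 64, 7 * 7 * 5)
out = CrossSampleMutualAttention(dim=64)(features)  # shape (4, 64, 245)
```

Because E_2 attends across the batch dimension, such a block only makes sense for batch sizes greater than one, which is consistent with confining it to the auxiliary branch.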
§.§ Omni-Correlation Consistency Regularization In this section, we introduce Omni-Correlation Consistency (OCC) to formulate an additional model regularization. The core of the OCC module is omni-correlation, a similarity matrix calculated between the feature of an unlabeled pixel and a group of prototype features sampled from labeled instance features. It reflects the similarity relationship of an unlabeled pixel with respect to a set of labeled reference pixels. During the training procedure, we explicitly constrain the omni-correlations calculated from the heterogeneous unlabeled features of the two separate branches to remain the same. In practice, we use an omni-correlation matrix to formulate the similarity distribution between unlabeled features and the prototype features. Let g_v and g_a denote two projection heads attached to the backbones of f_v and f_a respectively, and let z_v∈ℝ^m× c' and z_a∈ℝ^m× c' represent two sets of embeddings sampled from their projected features extracted from unlabeled samples, where m is the number of sampled features and c' is the dimension of the projected features. It should be noted that z_v and z_a are sampled from the embeddings corresponding to the same set of positions on unlabeled samples. Suppose z_p∈ℝ^n× c' represents a set of prototype embeddings sampled from labeled instances, where n is the number of sampled prototype features; then the omni-correlation matrix between z_v and z_p can be formulated as: sim_vp_i=exp(cos(z_v,z_p_i)*t)/∑_j=1^n exp(cos(z_v,z_p_j)*t), i∈{1, ..., n } where cos denotes the cosine similarity and t is the temperature hyperparameter. sim_vp∈ℝ^m× n is the calculated omni-correlation matrix. Similarly, the similarity distribution sim_ap between z_a and z_p can be calculated by replacing z_v with z_a. To constrain the consistency of the omni-correlation between the dual branches, the omni-correlation consistency regularization is formulated with the cross-entropy loss l_ce as follows: l_o=1/m∑l_ce(sim_vp,sim_ap) Memory Bank Construction We utilize a memory bank T to iteratively update the prototype embeddings for OCC computation. Specifically, T initializes N slots for each labeled training sample and updates the prototype embeddings with filtered labeled features projected by g_v and g_a. To ensure the reliability of the features stored in T, we select embeddings at the positions where both f_v and f_a make correct predictions and update T with the mean fusion of the features projected by g_v and g_a. For each training sample, following <cit.>, T updates the slots corresponding to the labeled samples in the current mini-batch in a query-like manner. Embeddings Sampling For computational efficiency, the omni-correlation is not calculated on all labeled and unlabeled pixels. Specifically, we developed a confidence-based mechanism to sample the pixel features from the unlabeled data. Practically, to sample z_v and z_a from the unlabeled features, we first select the pixels where f_v and f_a have the same prediction. For each class, we sort the confidence scores of these pixels and then select the features of the top i pixels as the sampled unlabeled features. Thus, m=i× C, where C represents the number of classes. With regard to the prototype embeddings, we randomly sample j embeddings from each class among all the embeddings contained in T, so that n=j× C, to increase their diversity.
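A minimal sketch of how the omni-correlation matrices and the consistency term could be computed is shown below. The temperature value, the use of sim_ap as the (soft) target of the cross-entropy, and whether gradients flow through both branches are assumptions for illustration; the equations above fix only the form of sim_vp, sim_ap, and l_o.

```python
import torch
import torch.nn.functional as F

def omni_correlation(z, z_p, t=10.0):
    """Row-wise softmax over temperature-scaled cosine similarities between
    unlabeled embeddings z (m x c') and prototype embeddings z_p (n x c')."""
    z = F.normalize(z, dim=1)
    z_p = F.normalize(z_p, dim=1)
    logits = (z @ z_p.t()) * t        # (m, n) cosine similarities scaled by t
    return F.softmax(logits, dim=1)   # each row is a distribution over the n prototypes

def occ_loss(z_v, z_a, z_p, t=10.0):
    """l_o: cross-entropy between the two branches' similarity distributions,
    averaged over the m sampled unlabeled pixels."""
    sim_vp = omni_correlation(z_v, z_p, t)
    sim_ap = omni_correlation(z_a, z_p, t)
    return -(sim_ap * torch.log(sim_vp + 1e-8)).sum(dim=1).mean()

# Toy example with m=128 sampled unlabeled pixels, n=256 prototypes, c'=64.
z_v, z_a, z_p = torch.randn(128, 64), torch.randn(128, 64), torch.randn(256, 64)
l_o = occ_loss(z_v, z_a, z_p)
```

Here z_p stands in for the prototype embeddings drawn from the memory bank T described above.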
§ EXPERIMENTS AND RESULTS Dataset. Our method is evaluated on the Left Atrium (LA) dataset <cit.> from the 2018 Atrial Segmentation Challenge. The dataset comprises 100 gadolinium-enhanced MR imaging scans (GE-MRIs) and their ground truth masks, with an isotropic resolution of 0.625^3mm^3. Following <cit.>, we use 80 scans for training and 20 scans for testing. All scans are centered at the heart region and cropped accordingly, and then normalized to zero mean and unit variance. Implementation Details. We implement our CAML using PyTorch 1.8.1 and CUDA 10.2 on an NVIDIA TITAN RTX GPU. For training data augmentation, we randomly crop sub-volumes of size 112×112×80 following <cit.>. To ensure a fair comparison with existing methods, we use the V-Net <cit.> as the backbone for all our models. During training, we use a batch size of 4, with half of the images annotated and the other half unannotated. We train the entire framework using the SGD optimizer, with a learning rate of 0.01, momentum of 0.9, and weight decay of 1e-4 for 15000 iterations. To balance the loss terms during training, we use a time-dependent Gaussian warm-up function for λ_U and λ_C, where λ(t) = β∗ e^-5(1-t/t_max)^2, and set β to 1 and 0.1 for λ_U and λ_C, respectively. For the OCC module, we set c' to 64, j to 256, and i to 12800. During inference, prediction results from the vanilla V-Net are used with a standard sliding window strategy without any post-processing.
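The warm-up schedule above is a simple scalar function of the training iteration; a direct transcription is sketched below. The β values and the form of λ(t) are taken from the text; the function and variable names are ours.

```python
import math

def gaussian_warmup(iteration, max_iterations=15000, beta=0.1):
    """lambda(t) = beta * exp(-5 * (1 - t/t_max)^2), ramping smoothly up to beta."""
    t = min(iteration, max_iterations) / max_iterations
    return beta * math.exp(-5.0 * (1.0 - t) ** 2)

print(gaussian_warmup(0))       # ~0.00067: almost no weight at the start of training
print(gaussian_warmup(15000))   # 0.1: full weight at the final iteration
```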
Quantitative Evaluation and Comparison. Our CAML is evaluated on four metrics: Dice, Jaccard, 95% Hausdorff Distance (95HD), and Average Surface Distance (ASD). It is worth noting that the results reported by previous researchers on LA (Reported Metrics in Table <ref>) can be confusing, with some studies reporting results from the final training iteration, while others report the best performance obtained during training. However, the latter approach can lead to overfitting of the test dataset and unreliable model selection. To ensure a fair comparison, we perform all experiments three times with a fixed set of randomly selected seeds on the same machine, and report the mean and standard deviation of the results from the final iteration. The results on LA are presented in Table <ref>. The results of the fully-supervised V-Net model trained on different ratios serve as the lower and upper bounds of each ratio setting. We report the reproduced results of state-of-the-art semi-supervised methods and the corresponding reported results if available. By comparing the reproduced and reported results, we observe that although the performance of current methods generally shows an increasing trend with the development of algorithms, the performance of individual experiments can be unstable, and the reported results may not fully reflect the true performance. It is evident from Table <ref> that CAML outperforms other methods by a significant margin across all settings without incurring any additional inference or post-processing costs. With only 5% labeled data, CAML achieves an 87.34% Dice score, an absolute improvement of 4.01% over the state-of-the-art. CAML also achieves an 89.62% Dice score with only 10% labeled data. When the amount of labeled data is increased to 20%, the model obtains results comparable to those of V-Net trained on 100% labeled data, achieving a Dice score of 90.78% compared to the upper-bound model's score of 90.98%. As presented in Table <ref>, through the effective transfer of knowledge between labeled and unlabeled data, CAML achieves impressive improvements. Table <ref> also demonstrates that as the labeled data ratio declines, the model maintains a low standard deviation of results, significantly lower than that of other state-of-the-art methods. This finding suggests that CAML is highly stable and robust. Furthermore, the margin between our method and the state-of-the-art semi-supervised methods increases as the labeled data ratio declines, indicating that our method effectively transfers knowledge from labeled data to unlabeled data, thus enabling the model to extract more universal features from unlabeled data. Figure <ref> shows the qualitative comparison results. The figure presents 2D and 3D visualizations of all the compared methods and the corresponding ground truth. As indicated by the orange rectangle and circle in the 2D and 3D visualizations, respectively, our CAML achieves the best segmentation results compared to all other methods. §.§.§ Ablation Study. In this section, we analyze the effectiveness of the proposed CMA module and OCC module. We implement MC-Net as our baseline, which uses different up-sampling operations to introduce architectural heterogeneity. Table <ref> presents the results of our ablation study. The results demonstrate that under the 5% labeled data ratio, both CMA and OCC significantly improve the performance of the baseline. By combining these two modules, CAML achieves an absolute improvement of 6.42% in the Dice coefficient. Similar improvements can be observed for a data ratio of 10%. Under a labeled data ratio of 20%, the baseline performance is improved to 90.43% in the Dice coefficient, which is comparable to the upper bound of a fully-supervised model. In this setting, adding CMA and OCC separately may not achieve a significant improvement. Nonetheless, by combining these two modules in our proposed CAML framework, we still achieve the best performance in this setting, which further approaches the performance of a fully-supervised model. § CONCLUSION In this paper, we proposed a novel framework named CAML for semi-supervised medical image segmentation. Our key idea is that cross-sample correlation should be taken into consideration for semi-supervised learning. To this end, two novel modules, Cross-sample Mutual Attention (CMA) and Omni-Correlation Consistency (OCC), are proposed to encourage an efficient and direct transfer of prior knowledge from labeled data to unlabeled data. Extensive experimental results on the LA dataset demonstrate that we outperform previous state-of-the-art results by a large margin without extra computational cost at inference. Acknowledgements. This work is funded by the Scientific and Technological Innovation 2030 New Generation Artificial Intelligence Project of the National Key Research and Development Program of China (No.2021ZD0113302), Beijing Municipal Science and Technology Planning Project (No.Z201100005620008, Z211100003521009).
http://arxiv.org/abs/2307.05588v2
20230710155742
Collaborative Song Dataset (CoSoD): An annotated dataset of multi-artist collaborations in popular music
[ "Michèle Duguay", "Kate Mancey", "Johanna Devaney" ]
cs.SD
[ "cs.SD", "eess.AS" ]
[ Amitabh Basu August 12, 2023 =================== The Collaborative Song Dataset (CoSoD) is a corpus of 331 multi-artist collaborations from the 2010–2019 Billboard “Hot 100” year-end charts. The corpus is annotated with formal sections, aspects of vocal production (including reverberation, layering, panning, and gender of the performers), and relevant metadata. CoSoD complements other popular music datasets by focusing exclusively on musical collaborations between independent acts. In addition to facilitating the study of song form and vocal production, CoSoD allows for the in-depth study of gender as it relates to various timbral, pitch, and formal parameters in musical collaborations. In this paper, we detail the contents of the dataset and outline the annotation process. We also present an experiment using CoSoD that examines how the use of reverberation, layering, and panning are related to the gender of the artist. In this experiment, we find that men's voices are on average treated with less reverberation and occupy a more narrow position in the stereo mix than women's voices. § INTRODUCTION As far back as the 1960s, Billboard charts have featured collaborations between independent acts. In recent years, however, the number of songs featuring a collaboration between artists has skyrocketed <cit.>. Part of this is due to the rising popularity of hip-hop in the 1980s, in which collaboration between different artists is a fixture. The 1986 version of “Walk This Way” by Aerosmith and Run DMC is an oft-cited example of such a collaboration. As Rose notes, the success of a collaboration between a hip-hop group (Run DMC) and a rock group (Aerosmith) “brought [hip-hop’s] strategies of intertextuality into the commercial spotlight” <cit.>. The 1990 success of “She Ain’t Worth It” by Glenn Medeiros ft. Bobby Brown marked the first time a sung and rapped collaboration reached #1 on Billboard’s “Hot 100.” Molanphy notes that during this period, multi-artist collaborations crystallized into two different frameworks: the “featured bridge rapper,” and the “featured hook singer” <cit.>. Subsequently, tracks with one or more guest artist(s) have become a mainstay on the charts. By 2021, over a third (39%) of the songs in Billboard's “Hot 100” year-end chart credited more than one artist. Consider for instance “Save Your Tears,” by singers The Weeknd & Ariana Grande, which occupied second place on the chart. A solo version of the song originally appeared on The Weeknd’s album After Hours (2020). While this version achieved commercial success, the remix with Ariana Grande became a #1 single on the Billboard “Top 100” in May 2021 and became the longest-charting collaboration in Billboard “Hot 100” history. In the remix, Grande performs approximately half of the vocals, transforming the solo song into a dialogue between two characters. The collaboration between the two artists is responsible for the popularity of the remix, inviting both Grande’s and The Weeknd’s fans to stream, buy, and otherwise engage with the song. Several musicological studies have examined this relationship between collaborative songs and commercial success <cit.>. Other work has provided in-depth explorations of the musical characteristics of collaborative songs, with a particular focus on hip-hop <cit.>. Given the popularity of multi-artist collaborations, a more systematic exploration of their musical features is warranted. 
In this paper, we introduce the Collaborative Song Dataset (CoSoD), an annotated dataset that facilitates the study of various musical features in multi-artist collaborations. CoSoD provides metadata and analytical data for 331 multi-artist collaborations appearing on the Billboard “Hot 100” year-end charts between 2010 and 2019. The dataset also provides timed annotations on the song's formal structure, artists' gender, vocal delivery and pitch, and vocal production (reverberation, panning, and layering). As detailed in Section 2, the range of features included in the dataset makes it more broadly applicable for MIR research tasks. These include structural segmentation, vocal mixing, automatic music production, and examinations of gender in popular music. After outlining the contents of the dataset and the annotation methodology in Section 3, we present an experiment in Section 4 that examines the relationship between vocal production parameters and the gender of the performer in a subset of CoSoD. § RELATED WORK CoSoD complements the growing list of annotated datasets that provide information on song structure in various popular music genres, e.g.,<cit.>, and is the first dataset to exclusively contain data on collaborative songs between independent acts. It can thus be used for training and evaluating structural segmentation tasks and for studying the specific structural characteristics of collaborative songs. CoSoD also complements existing datasets for multi-track mixing/analysis<cit.> and vocal analysis<cit.> by providing analytical annotations on the treatment of the voice in a mix. In recent years, several studies have proposed tools and methods to automate the mixing of multi-track recordings<cit.>. Such automatic production methods have various artistic and creative applications. One framework has been suggested to remix early jazz recordings, which are pre-processed using source separation then remixed with automatic production tools<cit.>. <cit.> proposes a prototype for an automatic DJ mixing system allowing for cross-fading via beat and tempo adjustment between songs. Studies on automatic mixing can be enhanced by knowledge of common mixing practices for specific instruments or sound sources. For instance, one study uses mixing practices that are consistent between mixing engineers to create a model that automatically mixes multiple drum tracks<cit.>. By focusing on vocals, which are a salient component of the mix in popular music<cit.>, CoSoD provides a complementary approach to these studies on automated production. By providing annotations based on close listening of specific vocal mixing parameters in the different formal sections of a song, the dataset allows for the identification of trends in panning, layering, and use of artificial reverberation as they are applied to vocals in commercially successful post-2010 popular music. It enables the direct comparison of how various mixing parameters are applied to individual artists' voices within and across songs. In addition to facilitating the modeling of voice mixing, CoSoD also allows musicologists to ask questions about the way different voice types and individuals are mixed. Finally, CoSoD facilitates the study of the relationship between gender and popular music. A number of previous studies have examined music programming and streaming services, exploring for instance how listeners tend to stream male artists more than women and mixed-gender groups<cit.>. 
Watson discusses gender inequality and low programming of women’s music in country music radio<cit.>. Other work addresses how a listener’s declared gender impacts automatic music recommendation<cit.> and musical preferences<cit.>. Additionally, various studies have addressed race and gender, along with sexist and racist discourses and practices, as they impact the music industry in general and the Billboard charts in particular<cit.>. By providing data on musical features, gender, and the role of these parameters within the formal structure of a song, CoSoD offers a new and complementary angle for the study of gender as it directly relates to the musical content of post-2010 popular collaborations. § COLLABORATIVE SONG DATASET (COSOD) CoSoD[<https://github.com/duguay-michele/CoSoD>] consists of metadata and analytical data of a 331-song corpus comprising all multi-artist collaborations on the Billboard “Hot 100” year-end charts published between 2010 and 2019. Each song in the dataset is associated with two CSV files: one for metadata and one for analytical data. We assembled the corpus by identifying every song on the charts that featured collaborations between two or more artists who usually perform independently from one another. §.§ Annotation of Musical Features The following analytical data is provided for each song in the dataset: 0.4cm * Index number: 1 to 33 * Time stamps: In seconds (start of new section) * Formal section label: Introduction, Verse, Pre-chorus, Chorus, Hook, Dance Chorus<cit.>, Link, Post-chorus, Bridge, Outro, Refrain or Other * Name of artist(s): Full name of the artist performing in each section. If all artists credited on the Billboard listing perform in a section, the label both or all is used. Songs were assigned at random to one of two annotators, who generated time stamps at the onset of each formal section with Sonic Visualiser.[The first annotator (first author) has a doctorate in music theory, while the second (second author) is a doctoral candidate in the same field.] The annotators provided formal labels according to their analysis of the song. In case of ambiguity in the formal sections, both annotators discussed the analysis and agreed upon an interpretation. For each formal section performed by one artist only, the following analytical data on the voice is provided: * Gender of artist: M (Man), W (Woman), NB (Non-binary) * Function of artist: Feat (Featured artist), Main (Main artist), Neither, Uncredited * Style of vocal delivery: R (Rapped vocals), S (Sung vocals), Spoken * Minimum pitch value: In Hz * First quartile pitch value: In Hz * Median pitch value: In Hz * Third quartile pitch value: In Hz * Maximum pitch value: In Hz * Environment value: On a scale of E1 to E5 * Layering value: On a scale of L1 to L5 * Width (panning) value: On a scale of W1 to W5 The annotators determined the name of the artist(s) performing in each section by ear, and using song lyric website Genius.com to validate their hearing. In cases where an artist only provides minimal background vocals (a few words) in a particular formal section, their name is not included. One annotator then provided analytical data on each formal section performed by one artist only. Data on gender was gathered from media interviews and social media statements from the artists, and matches the artist's gender identity at the time of the dataset creation. This methodology yielded three categories: man, non-binary, and woman. 
We understand these labels as umbrella terms that encompass a variety of lived experiences that intersect with race, sexuality, and other power structures. The style of vocal delivery was determined by ear. The distinction between rapping and singing is porous, with many vocalists adopting ambiguous modes of vocal delivery. We consider any formal section containing a melodic line performed with sustained pitches as sung. The pitch data was obtained by first isolating the vocals from the full mix using Open-Unmix<cit.> and then running the pYIN Smoothed Pitch Track transform <cit.> on the isolated vocal file. The minimum, first quartile, median, third quartile, and maximum pitch points in each formal section were calculated and recorded in the dataset.[The accuracy of the F0 estimates used to calculate this feature is impacted by the quality of the vocal source separation. A more accurate isolated vocal file would allow for more precise pitch data. Additionally, since pYIN Smoothed Pitch Track can only track a single melodic line, the accuracy of the pitch data is lessened in sections that feature multiple vocal layers with different pitch content.] The Environment, Layering, and Width values were determined by the first annotator to ensure consistency. Rather than attempting to reconstruct the mixing process itself, the annotations for these parameters represent the way a listener might perceive the final mix upon listening to it on stereo speakers. The Environment of a voice is the space in which the voice reverberates. Environment values were determined via an aural analysis of the full track by using the following scale[The scales were initially published in <cit.>.]: 0.7cm E1: The voice’s environment sounds flat. There might be minimal ambiance added to the voice, but there is no audible echo or reverberation. E2: The last word or syllable of most musical phrases is repeated through an echo or reverberation effect. E3: The vocal line is repeated in one clear layer of echo. This added layer may be dry or slightly reverberant and has a lower amplitude than the main voice. E4: The main voice is accompanied by a noticeable amount of reverberation. There is no clear echo layer, but rather a sense that the main voice is being reverberated across a large space. E5: The main voice is accompanied by two or more layers of echo. The echo layers may be noticeably reverberant, similar in amplitude to the main voice, and difficult to differentiate from one another. The Layering of a voice refers to the additional vocal tracks that are dubbed over a single voice. Layering values were determined via an aural analysis of the full track by using the following scale: 0.7cm L1: The voice is presented as solo. Occasionally, a few words may be doubled with another vocal track for emphasis. Double-tracking is often used in the mixing process to create a fuller sound, with a final result sounding like a single vocal layer. Such cases fall into this category. L2: The voice is presented as solo, but additional vocal layers are added at the end of musical phrases for emphasis. L3: The main voice is accompanied by one or two layers. Layers might provide minimal harmonies or double the main voice. The layers have a noticeably lower amplitude than the main voice. L4: The main voice is accompanied by two or more layers. These layers are close copies of the main voice, sharing the same pitch and similar amplitude. L5: The main voice is accompanied by two or more layers. 
These layers add harmonies to the main voice, creating a thick and multi-voiced texture. The Width of a voice refers to the breadth it occupies on the stereo stage. The Width was analyzed aurally with the aid of panning visualisation tool MarPanning<cit.>. The annotator simultaneously listened to the isolated vocal audio and observed the MarPanning visualization generated from the isolated vocals to determine the Width value. Since Open-Unmix occasionally omits reverberated components of the voice from the isolated file, the analyst then listened to the full track to confirm the Width value. Width values were determined according to the following scale: 0.7cm W1: The voice occupies a narrow position in the center of the stereo stage. W2: The voice occupies a slightly more diffuse position in the center of the stereo stage. W3: The main voice occupies a narrow position in the center of the stereo stage, but some of its components (echo, reverberation, and/or additional vocal tracks) are panned toward the sides. These wider components have a lower amplitude than the main voice. W4: The main voice occupies a slightly more diffuse position in the center of the stereo stage, and some of its components (echo, reverberation, and/or additional vocal tracks) are panned toward the sides. These wider components have a lower amplitude than the main voice. W5: The main voice and its associated components (echo, reverberation, and/or additional vocal tracks) are panned across the stereo stage. All components have a similar amplitude. §.§ Metadata The following metadata is provided for each song in the dataset: 0.4cm * Index number: From 1 to 331 * Year of first appearance on Billboard “Hot 100” year-end charts * Chart position: As it appears on the Billboard “Hot 100” year-end charts * Song title: As it appears on the Billboard “Hot 100” year-end charts * Name of artists: As it appears on the Billboard “Hot 100” year-end charts * Collaboration type: * Lead/featured: Collab. with lead artist(s) and featured artist(s) * No lead/featured: Collab. with no determined lead * DJ/vocals: Collab. between a DJ and vocalist(s) * Gender of artists: * Men: Collab. between two or more men * Women: Collab. between two or more women * Mixed: Collab. between two or more artists of different genders * Collaboration type + gender: * Collab M: Collab. between men, no determined lead * Collab M and W: Collab. between men and women, no determined lead * Collab NB and W: Collab. betwen women and non-binary artists, no determined lead * Collab W: Collab. between women, no determined lead * DJ with M: Collab. between male DJ and male vocalist * DJ with Mix: Collab. between male DJ and mixed-gender vocalists * DJ with NB: Collab. between male DJ and non-binary vocalist * DJ with W: Collab. between male DJ and female vocalist * M ft. M: Men featuring men * M ft. W: Men featuring non-binary artist(s) * W ft. M: Women featuring men * W ft. W: Women featuring women * MusicBrainz URL: Link to the song on open music encyclopedia MusicBrainz Each song in the dataset is labeled with an index number from 1 to 331. Songs are numbered in reverse chronological order, beginning with the 2019 charts and ending with 2010. One annotator obtained the metadata on year, chart position, title, and artists from the information available on the Billboard charts. Within years, songs are organized according to their position on the chart, from highest to lowest. Some songs appear on the charts two years in a row. 
In such cases, we only include the data for the earliest appearance. §.§ Corpus Statistics The dataset can be divided into three categories (shown in Figure <ref>): (i) collaborations between the lead artist(s) and featured artist(s), which account for 221, or 66.7% of the tracks, (ii) collaborations with no determined lead or featured artist, which account for 59, or 17.8%, of the tracks, and (iii) collaborations between a DJ and a vocalist, which account for 51, or 15.4% of the tracks. In category (i), the lead artist usually performs the majority of vocals. For example, in “No Limit” (2018) by G-Eazy ft. A$AP Rocky & Cardi B, G-Eazy performs most of the vocals. A$AP Rocky accompanies him in the chorus and Cardi B raps the second verse. In category (ii), the performance of the vocals is often more equally distributed. Such collaborations are often billed as “duets,” and the artists’ names are separated by a “+”, a “&”, or a comma on the Billboard charts. For example,“Something’ Bad” (2014) is labeled as a “Miranda Lambert Duet With Carrie Underwood.” Both vocalists perform approximately equal portions of the song. In category (iii), the DJ does not provide vocals. In “Sweet Nothing” (2012), for instance, only the featured Florence Welch sings. The voice of DJ Calvin Harris is not heard. Mixed-gender collaborations (including any combination of non-binary, women, and men artists) frequently appear on the Billboard charts and account for 162, or 49%, of the tracks in the dataset. Collaborations between two or more men account for 159 tracks, or 48% of the dataset. Finally, collaborations between women account for 10, or 3%, of the tracks. In six of the ten years under study–2011, 2012, 2015, 2017, 2018, and 2019–no collaborations between women reached the Billboard “Hot 100” year-end chart. Conversely, songs with two or more male vocalists were a consistent fixture on the charts. Mixed-gender collaborations, with any combination of men, women, and non-binary artists within the same track, also frequently appear on the charts. Figure <ref> shows the number and type of sections performed by individual artists in the corpus, categorized according to gender. This figure includes identical sections (such as choruses) that are repeated within a song. Sections in which more than one artist performs are not included. More sections are performed by men than by women and non-binary artists, which is to be expected given the over-representation of men in the dataset as a whole (Figure <ref>). Figure <ref> displays the number and type of sections performed by featured artists only. § EXPERIMENT: VOCAL PRODUCTION FEATURES AND GENDER This section examines the relationship between the gender of an artist and the treatment of their voice, as characterized by three of the annotated musical features in the dataset: Environment, Layering, and Width. For the purposes of statistical power in the experiment, only songs with men and/or women artists were included. We only included tracks that contained verse and chorus sections to remove section types that occur in only a few tracks. 
In order to avoid over-representation of tracks with repeated sections (i.e., several instances of the same chorus), we sampled the first verse and chorus performed by a single artist from each track.[If the first verse of a song was performed by two artists simultaneously, while the second verse was only performed by one, we sampled the second verse.] This method resulted in the inclusion of two sections from 287 of the 331 dataset tracks in the experiment. We analyzed the data with three separate logistic regressions (one for each feature) using the statsmodels package in Python. We encoded the different levels of the parameter scales (defined in Section 3.1) with one-hot encoding in order to examine whether there is a correspondence between specific parameter scale levels and gender. Of the three logistic regressions, Environment (R^2_McFadden (4, N = 574) = 0.028, p < 0.0001) and Width (R^2_McFadden (4, N = 574) = 0.035, p < 0.0001) were statistically significant, while Layering (R^2_McFadden (4, N = 574) = 0.0036, p = 0.64) was not. The McFadden R^2 values for both Environment and Width were very low. This was not surprising, since we did not anticipate that these features, particularly in isolation, would be explanatory. We were instead interested in exploring whether these features relate significantly to the man/woman gender binary in these collaborations. For Environment, there were significant effects (p < 0.0001) for E1 (β = -1.18, 95% CI [-1.49, -0.87]), E2 (β = -1.12, 95% CI [-1.56, -0.69]), and E3 (β = -0.78, 95% CI [-1.14, -0.42]). There was a significant negative effect for the lower/mid-level Environment values and gender, meaning that men's voices are more likely to be set in less reverberant spaces than women's voices. For Width, there were significant effects at all of the levels: W1 (β = -1.84, 95% CI [-2.50, -1.17]), W2 (β = -1.58, 95% CI [-2.39, -0.77]), W3 (β = -1.13, 95% CI [-1.51, -0.75]), W4 (β = -0.47, 95% CI [-0.77, -0.17]), and W5 (β = -0.60, 95% CI [-0.95, -0.25]). The Width results are harder to interpret than the Environment ones because the coefficient values are smaller and all negative. This is likely due to the imbalance between men and women in featured artist roles, both in the dataset overall (see Figure <ref>) and in the sample used in this experiment (404 of the included sections featured men while only 170 featured women). However, the overall trend is similar to the one in the Environment experiment: lower-level values are more common for men than women. Men's voices are more likely to occupy a narrow, centered position on the stereo stage, while women's voices are more likely to occupy a wider space. These results were expected given that high Environment values tend to be associated with high Width values, as the reverberated components of a voice are generally panned across the stereo stage. The lack of significant results for Layering indicates that there are no differences in the ways in which this parameter is applied to men's and women's voices. Since textural variation (such as the addition of vocal layers) is a standard feature of verse-chorus form, it is possible that Layering is linked to the type of formal section rather than to the gender of the vocalist.
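To illustrate the kind of analysis described above, here is a minimal statsmodels sketch under stated assumptions: the column names, the coding of gender as the binary outcome, the choice of reference level, and the random toy data are all ours, since the paper's exact model specification is not given here.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Toy stand-in for the sampled sections: an ordinal Environment level (1-5)
# and the performer's gender, one row per section.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "environment": rng.integers(1, 6, size=574),
    "gender": rng.choice(["M", "W"], size=574),
})

# One-hot encode the scale levels (dropping one as reference) and fit a
# logistic regression of gender on the level indicators.
X = pd.get_dummies(df["environment"], prefix="E", drop_first=True).astype(float)
X = sm.add_constant(X)
y = (df["gender"] == "M").astype(float)   # model P(performer is a man)

result = sm.Logit(y, X).fit(disp=False)
print(result.summary())      # coefficients with 95% confidence intervals
print(result.prsquared)      # McFadden's pseudo R-squared
```

With the real annotations, y would be the man/woman label for each section and X the indicators for the Environment, Width, or Layering levels, one regression per feature.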
The significant results for the Environment and Width parameters can be interpreted in light of Brøvig-Hanssen's and Danielsen's work on technological mediation<cit.>. The authors establish a distinction between transparent and opaque technological mediation in recorded music. Transparent mediation, on one hand, is meant to create a recorded product that sounds natural and unaltered. Low Environment and Width values, for instance, are closer to transparent mediation because they sound closer to a real-life performance unmediated by artificial reverb or panning. Opaque mediation, on the other hand, highlights the use of technology by making it obvious to the listener. High Width and Environment values, with their clearly audible artificial reverberation and wide panning, are examples of opaque mediation. The results of the experiment therefore suggest that men’s voices are more likely to be mixed to sound “transparent” and natural, while women’s voices are more likely to be mixed to sound “opaque” and technologically mediated. Overall, this experiment demonstrates that within verse and chorus sections in CoSoD, there is a significant difference between the treatment of men's and women's vocals in terms of Environment and Width. This suggests that some mixing parameters contribute to the sonic differentiation of men’s and women’s voices in popular music. § CONCLUSION CoSoD is a 331-song corpus of all multi-artist collaborations appearing on the 2010–2019 Billboard “Hot 100” year-end charts. Each song in the dataset is annotated with metadata, formal sections, and aspects of vocal production (including reverberation, layering, panning, and gender of the artists). As outlined in Section 2, CoSoD has several implications for MIR research. It provides annotated data for structural segmentation tasks and a listener-centered perspective on vocal mixing that could be useful for automatic music mixing tasks. The dataset could also be used to determine how these parameters interact with song form. Further study could also examine the relationship between the vocal range of an artist in a given section, their type of vocal delivery (rapped, spoken, or sung), and mixing parameters. Finally, the dataset also allows for the examination of the ways in which Environment, Layering, and Width values tend to be grouped together to create specific vocal production effects. The dataset also facilitates the musicological study of multi-artist collaborations post-2010 and gender norms. The experiment in Section 4 demonstrates this, as its results suggest that, for the chorus and verse data sampled from 287 songs in the dataset, men's voices are more likely to be narrow and less reverberated than women's. Opportunities for future research include examining whether there is a significant difference in the way Environment, Width, Layering, or other parameters are applied to women's and men's voices within collaborations that feature mixed- and same-gender vocalists. In other future work, we plan on expanding the annotations in the dataset with time-aligned lyrics, harmonic analyses, and additional performance data for the voice extracted using AMPACT <cit.>. These annotations will include both spectral features and semantic descriptors, and the data will be encoded in relation to vocal-line transcriptions, where possible <cit.>. We also plan on providing annotations on vocal production parameters in sections performed by multiple artists and examining how vocal production parameters correlate with mixing parameters such as panning. Finally, while our dataset focuses on gender, we are also interested in encoding other aspects of identity, such as race, in order to provide an intersectional perspective on artists' identities.
However, categorizing artists according to race proves to be more problematic than gender. Matthew D. Morrison writes that “white (and other nonblack) people freely express themselves through the consumption and performance of commodified black aesthetics without carrying the burden of being black under white supremacist structures” <cit.>. In other words, white and non-Black artists–such as rappers Iggy Azalea and G-Eazy, or singer Bruno Mars–often assume particular sonic characteristics that implicitly associate them with commodified notion of Blackness. By categorizing all white artists together, for instance, we would ignore this phenomenon and the way it is sonically realized. Further work needs to be done to understand how to best expand on CoSoD, or datasets in general, to account for this dynamic.
http://arxiv.org/abs/2307.05993v1
20230712081605
The Coble Quadric
[ "Vladimiro Benedetti", "Daniele Faenzi", "Michele Bolognesi", "L Manivel" ]
math.AG
[ "math.AG" ]
Given a smooth genus three curve C, the moduli space of rank two stable vector bundles on C with trivial determinant embeds in ^8 as a hypersurface whose singular locus is the Kummer threefold of C; this hypersurface is the Coble quartic. Gruson, Sam and Weyman realized that this quartic could be constructed from a general skew-symmetric four-form in eight variables. Using the lines contained in the quartic, we prove that a similar construction allows to recover _C(2,L), the moduli space of rank two stable vector bundles on C with fixed determinant of odd degree L, as a subvariety of G(2,8). In fact, each point p∈ C defines a natural embedding of _C(2,(p)) in G(2,8). We show that, for the generic such embedding, there exists a unique quadratic section of the Grassmannian which is singular exactly along the image of _C(2,(p)), and thus deserves to be coined the Coble quadric of the pointed curve (C,p). Canonical partition function and distance dependent correlation functions of a quasi-one-dimensional system of hard disks [ August 12, 2023 ========================================================================================================================== § INTRODUCTION A century ago, Arthur Coble proved that there exists a unique quartic hypersurface 𝒞 in ^7 that is singular exactly along the 3 dimensional Kummer variety, image of the Jacobian of a genus 3 curve C via the |2Θ|-linear system (<cit.>, see also <cit.>). This remarkable hypersurface is now named after him, and its many very special features have been studied by several algebraic geometers. For example 𝒞 is projectively self-dual <cit.>, it has close relationships with the Θ-geometry of the curve (e.g. a Schottky-Jung configuration of Kummer surfaces of Prym varieties <cit.>, etc.) and with moduli of configurations of points in the projective space <cit.>. Probably, the most striking property is, however, that 𝒞 is the image, via the theta map, of the moduli space of semi-stable rank two vector bundles on C with trivial determinant. This was first remarked by Narasimhan and Ramanan in the seminal paper <cit.>. In particular, since the theta map is an embedding for rank two bundles with trivial determinant <cit.>, we can identify 𝒞 with the moduli space _C(2) itself. In rank two there is, up to isomorphism, only one other moduli space _C(2,L) of rank two vector bundles on C, obtained by fixing the determinant to be any given line bundle L of odd degree (up to non-canonical isomorphisms, L is irrelevant). Contrary to , this moduli space is smooth and we can wonder what could be an analogue of the Coble quartic. The main results of this paper answer this natural question. In order to achieve this, we will use the theory of theta representations <cit.>, in the way this was initiated in <cit.> as a complex addition to arithmetic invariant theory. In our setting, the main point is that starting from the _8-module ∧^4^8 one can easily construct the Coble quartics in terms of Pfaffian loci. From this point of view, the curve C defined by a general element of ∧^4^8 is not immediately visible, but certain deep properties of the quartic become easy to establish. For example, we give in Theorem <ref> a short, self-contained proof of the self-duality of . Then we switch from ^7 to the Grassmannian G(2,8) and observe that also in this Grassmannian, there exist natural Pfaffian loci corresponding to skew forms of rank at most 4 and 6, respectively of codimension 6 and 1: D=D_Z_6(v)⊂ Q=D_Z_1(v)⊂ G(2,8). 
Here v is a general element in ∧^4 ^8 and Q is a quadric section of the Grassmannian that is singular exactly along the six-dimensional smooth locus D (the notation D_Z_i(v) will be explained in Section <ref>). The connection with the Coble quartic comes from the fact that D parametrizes a family of lines on it, some of the so-called Hecke lines. We deduce (Theorem <ref> later on): D≃_C(2,L) for L of odd degree. Consequently, the moduli space, which is smooth, comes up with a natural hypersurface of which it is the singular locus, contrary to the even case for which the moduli space is singular and uniquely determined by its singular locus, which is the Kummer. We extend the unicity statement by proving (Theorem <ref> later on): Q is the only quadratic section of the Grassmannian that is singular along D. Because of this property, Q really deserves to be called a Coble quadric. Moreover, exactly as the Coble quartic, we show this hypersurface is self-dual in a suitable sense (Theorem <ref>). As a matter of fact, for each point p∈ C, there is an embedding φ_p : _C(2,_C(p)) ↪ G(2,8), (see <cit.>), and we show that at least for the generic p, there exists a unique quadric section of the Grassmannian that is singular along the moduli space (Theorem <ref>). Remarkably, we found other instances of this phenomenon: for example, an eightfold inside the flag variety Fl(1,7,8) whose singular locus is an abelian threefold, essentially the Jacobian of the curve (see Remark <ref>). The paper is organized as follows. In section 2, we recall a few classical results about lines on moduli spaces of vector bundles on curves, and more specifically about lines in the Coble quartic. In section 3 we explain how the Coble quartic, the Kummer threefold and the associated Jacobian can be constructed from a skew-symmetric four-form in eight variables, and we give a short proof of the self-duality of the quartic. In section 4 we explain how this point of view allows to understand the lines in the Coble quartic in terms of orbital degeneracy loci <cit.>, and we deduce Theorem 1 (see Theorem 21). The resulting description as a relative Pfaffian locus makes it clear that the odd moduli space is the singular locus of a special quadratic section of the Grassmannian G(2,8). In order to prove that this special quadric is unique, we need to study the square of the ideal of the Grassmannian G(2,6) in its Plücker embedding. Going back to the relative setting we deduce Theorem 2 (see Theorem 27). We finally complete the picture by explaining why and how the special quadric is also self-dual. All authors partially supported by FanoHK ANR-20-CE40-0023. D.F. and V.B. partially supported by SupToPhAG/EIPHI ANR-17-EURE-0002, Région Bourgogne-Franche-Comté, Feder Bourgogne and Bridges ANR-21-CE40-0017. We warmly thank Christian Pauly, Sasha Kuznetsov and Jerzy Weyman for useful discussions. Special thanks also to Shigeru Mukai and Akihiro Kanemitsu for sharing the results of <cit.>. § LINES IN THE COBLE QUARTIC Throughout the text we will denote by U_C(r,d) the moduli space of semi-stable vector bundles on a curve C of rank r and determinant of degree d. If L is a degree d line bundle on C, we will denote by _C(r,L) the subvariety of U_C(r,d) parametrizing vector bundles of determinant L; moreover _C(r):=_C(r,_C). Since all the moduli spaces (r,L) are (non canonically) isomorphic when the degree of L is fixed, we will also denote their isomorphism class by _C(r,d); it does depend on d only modulo r. 
Finally, we will denote by U_C(r,d)^eff the moduli space of vector bundles with effective determinant. When d=1, this moduli space fibers over the curve C with fiber over c isomorphic to _C(2,_C(c)). §.§ Covering families of rational curves in _C(2) Rational curves in the moduli spaces _C(r,d) were extensively studied, see e.g. <cit.>. Restricting to g=3, r=2 and d=0, the results of <cit.> show that there exist two different families of covering lines i.e., families of rational curves of degree one with respect to the Theta embedding :=_2(C)↪ |2Θ|=(V_8), passing through a general point of the moduli space. We will denote these two families by _H and _R and consider them as subvarieties of the Grassmannain G(2,V_8). In the sequel we describe these two covering families in some detail. They are both of dimension six but behave very differently; we will illustrate this by showing how different are the corresponding VMRT's (variety of minimal rational tangents), which in our case, since we deal with lines, are just the spaces of lines through a fixed general point. §.§ Hecke lines A generic Hecke line can be described by choosing a point c∈ C, and a rank two vector bundle F on C with determinant (F)=_C(c). Then the bundles E that fit into an exact sequence 0 E F_c 0 are parametrized by (F_c^∨)≃^1. They have trivial determinant and are all stable when F is (1,0)-semistable in the sense of <cit.>. For vector bundles of rank two and degree one, this condition is equivalent to stability, hence also to semistability. The resulting curve in _C(2) is a line and such lines are called Hecke lines. Note that dualizing, we get an exact sequence 0 F^∨ E^∨_c 0, so a Hecke line parametrizes all the possible extensions of _c by F^∨. By <cit.>, a general Hecke line defines a vector bundle of rank 2 over C×^1 fitting into an exact sequence 0 p_1^*F^∨⊗ p_2^*_^1(-1)^∨ p_1^*_c 0, where p_1 and p_2 are the projections of C ×^1 onto the two factors C and ^1. An easy consequence is that, since ^∨ admits a unique jumping line at c, this point can be uniquely recovered from the Hecke line. (Beware this is only true for general Hecke lines.) We will denote by _H the family of Hecke lines in _C(2,_C), considered as a subvariety of the space G(2,V_8) of lines in (V_8). Although a Hecke line does not always define a unique point in C, once we have fixed such a point c there is a well-defined morphism from _C(2,_C(c)) to _H. By the previous observations, the resulting morphism from _H:=U_C(2,1)^eff to _H is birational. Conversely, Hecke lines passing through a general point [E] of _C(2,K_C) (we make this choice of determinant just for convenience) are obtained by choosing a projection E E_c_c, where E_c denotes the fiber of the vector bundle E at the point c∈ C. So they are parametrized by (the image in _H of) the total space of the projective bundle (E^∨) over C. The tangent map of this morphism sends (E^∨) to the tangent space of the moduli space at [E], which is the projectivization of H^1(C,ℰnd_0(E))≃ H^0(C,K_C⊗ℰnd_0(E))^∨≃ H^0(C,S^2E)^∨, since K_C≃(E). Here ℰnd_0(E) denotes the vector bundle of traceless endomorphims of E. This implies (see <cit.> for more general statements): The VMRT of the family _H of Hecke lines at a general point [E] of the moduli space is the image of the ruled surface (E^∨) by the linear system |_E(2)|. In particular this surface contains no line. 
Equivalently, the latter claim means that a general Hecke line is not contained in any larger linear space contained in _C(2), although such larger linear spaces do exist. §.§ Lines in the ruling For each line bundle L∈^1(C), consider the rank two vector bundles E obtained as extensions of the form 0 L E K_C⊗ L^∨ 0. Such extensions are parametrized by _L:= (^1(K_C⊗ L^∨,L))≃^3. Hence a ruling of _C(2) by a family of ^3's parametrized by ^1(C), which we denote by ()^1(C). Note that _L intersects the Kummer threefold along a copy C_L of C <cit.>. According to <cit.>, a line in _L is a Hecke line if and only if it meets C_L. Moreover, by <cit.>, two spaces _L and _M are always distinct for L M and, for sufficiently general choices of L and M, they are disjoint. When they meet, their intersection is a single point, or a line; the latter case happens exactly when K_C-L-M is effective. In particular, if a line is contained in _L∩_M, it must be a bisecant to both C_L and C_M. Now consider the family _R of lines contained in the ^3's of the ruling. By what we have just recalled, _R is the birational image in G(2,V_8) of the quadric bundle G(2,) over ^1(C). The VMRT at a general point of _C(2), of the family _R of lines in its ruling, is the disjoint union of eight planes in ^5. It follows from <cit.> that the map ()_C(2) is generically finite of degree 8. This means that eight ^3's of the ruling pass through a general point [E] of _C(2), and for each of them the lines passing through [E] are parametrized by a projective plane. Finally, these projective planes are disjoint, again by <cit.>. For future use we record the following easy consequence. Any plane in _C(2,K_C) passing through a general point is contained in a unique ^3 of the ruling. § FOUR-FORMS AND ORBITAL DEGENERACY LOCI In this section we recall the definitions of some orbital degeneracy loci closely connected to the geometry of _C(2,_C), for C a general curve of genus 3. In particular we recall how to recover the Coble quartic from a general four-form in eight variables. Using this description, we give a short proof of the self-duality statement of <cit.>. Our references for orbital degeneracy loci (sometimes abbreviated as ODL) are <cit.>. Notation. We will denote by V_n and U_i complex vector spaces of dimension n and i, respectively (usually V_n will be fixed and U_i will be a variable subspace of V_n). We will also denote by G(i,V_n) the Grassmannian of i-dimensional subspaces of V_n and by Fl(i_1,…,i_k,V_n) the flag variety of flags of subspaces of V_n of dimensions i_1<⋯ <i_k. Over the flag variety, we will denote by _i_j the rank-i_j tautological bundle; over the Grassmannian we will denote by the tautological bundle and by the quotient tautological bundle. §.§ A simple construction of the Coble quartic In this section we recall some results from <cit.>. The starting point is a general four-form in eight variables, v∈∧^4V_8≃∧^4V_8^∨, where V_8 denotes a complex eight-dimensional vector space. Recall that this is a theta-representation, being part of a _2-grading of the exceptional Lie algebra _7≃(V_8)⊕∧^4V_8. The action of the so-called theta-group, which here is (V_8), behaves very much as the action of the adjoint group on a simple complex Lie algebra. In particular one has Jordan decompositions, and the GIT-quotient ∧^4V_8 // (V_8) ≃/W for some finite complex reflection group W acting on what is called a Cartan subspace of the theta-representation. We will make this Cartan subspace explicit later on. 
For now we just need to know that it coincides with the seven-dimensional representation of the Weyl group of E_7. As a consequence, the choice of v determines uniquely a non-hyperelliptic curve C of genus three (a plane quartic) with a marked flex point <cit.>. We will construct from our general v∈∧^4V_8 a collection of geometric objects defined as orbital degeneracy loci. The main point of this approach is that it allows to reduce to simpler representations. Typically, the Borel-Weil theorem gives an isomorphism ∧^4V_8≃ H^0((V_8),∧^4) ≃ H^0((V_8),∧^3^∨(1)), where denotes the rank seven quotient vector bundle on (V_8). At the price of passing to a relative setting over (V_8), this reduces the study of ∧^4V_8 to that of three-forms in seven variables. But then the situation is much simpler, because if V_7 is a seven-dimensional complex vector space, ∧^3V_7^∨≅∧^4 V_7 has finitely many orbits under the action of (V_7). Each orbit closure Y allows to associate to v∈∧^4V_8 the locus D_Y(v)⊂(V_8) of points x where the image of v lies in the corresponding Y_x⊂∧^3^∨(1)_x (this is exactly how orbital degeneracy loci are defined). By the general results of <cit.>, for v general the main properties of Y will be transferred to D_Y(v), starting from its codimension. We can therefore focus on the orbit closures in ∧^3V_7^∨ of codimension at most seven. Remarkably, there are only three such orbit closures (not counting the whole space), that we can index by their codimension: Y_1 is a hypersurface of degree 7, Y_4 is its singular locus, Y_7 is the singular locus of Y_4. The corresponding orbital degeneracy loci have been described in <cit.>. For v general, the threefold :=D_Y_4(v) is the Kummer variety of a non-hyperelliptic genus three curve C. It is the singular locus of the quartic hypersurface :=D_Y_1(v). Its singular locus is the finite set [2]:=D_Y_7(v). Since the Coble quartic can be characterized as the unique quartic hypersurface that is singular along the Kummer threefold <cit.>, we can immediately deduce that it coincides with D_Y_1(v). §.§ Kempf collapsings A nice feature of our orbital degeneracy loci is the following. It turns out that the orbit closures they are associated to, although singular, admit nice resolutions of singularities by Kempf collapsings, which are birational contractions from total spaces of homogeneous vector bundles on flag manifolds. These homogeneous vector bundles are typically non-semisimple, making them more difficult to handle. Nevertheless, these collapsings allow to describe the corresponding orbital degeneracy loci in terms of zero loci of sections of vector bundles. In the cases we are interested in, we obtain the following descriptions, where U_k stands for a k-dimensional subspace of V_8. For A,B subspaces of a vector space V, we will denote by (∧^p A) ∧ (∧^q B) ⊂∧^p+qV the linear subspace spanned by the elements of the form a_1∧⋯∧ a_p ∧ b_1 ∧⋯∧ b_q with a_1,…,a_p ∈ A and b_1,…,b_q∈ B. For vector subbundles 𝒜,ℬ of a the trivial bundle V ⊗, we use the same convention to define (∧^p 𝒜) ∧ (∧^q ℬ) in ∧^p+q V ⊗. The Coble quartic can be described as { [U_1]∈(V_8) |∃ U_4⊃ U_1, v∈ (∧^2 U_4) ∧ (∧^2 V_8) +∧^3 V_8∧ U_1} . The Kummer threefold is { [U_1]∈(V_8) |∃ U_6⊃ U_2⊃ U_1, v∈∧^4 U_6+ ∧^2 U_6∧ U_2∧ V_8 + ∧^3 V_8∧ U_1} . The singular locus [2] of is { [U_1]∈(V_8) |∃ U_7⊃ U_4⊃ U_1, v∈∧^3 U_4∧ V_8 + (∧^2 U_4)∧ (∧^2 U_7) +∧^3 V_8∧ U_1} . These results follow from a combination of <cit.> and <cit.>. 
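Note the dimension bookkeeping: for v general, each D_Y(v) has the same codimension in (V_8)≃^7 as Y has in ∧^3V_7^∨, so D_Y_1(v), D_Y_4(v) and D_Y_7(v) have dimensions 6, 3 and 0 respectively. This matches the expected picture of a quartic hypersurface singular along a threefold whose own singular locus is finite; the latter consists of the 2^6=64 singular points of the Kummer threefold, namely the images of the 2-torsion points of the underlying abelian threefold.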
Let us clarify the statement for instance for , the explanations for the other loci being similar. In <cit.> it is shown that :=D_Y_4(v) is the Kummer threefold. In <cit.> it is proved that Y_4⊂∧^4 V_7 is desingularized by a the total space of the vector bundle :=∧^4 _5+ ∧^2 _5∧_1 ∧ V_7 over the flag variety Fl(1,5,V_7). Here we denoted by _1 and _5 respectively the rank one and rank five tautological vector bundles on Fl(1,5,V_7). The projection from the total space of to Y_4⊂∧^4 V_7 is given by the composition of the inclusion of inside ∧^4 V_7 ⊗_Fl(1,5,V_7) with the projection to ∧^4 V_7. For v general, this desingularization ()→ Y_4 of Y_4 can be relativized to obtain a desingularization of D_Y_4(v), as explained in <cit.>. For this we simply consider the flag bundle Fl(1,5,): by the previous discussion, any point of x=[U_1]∈ D_Y_4(v) must be the image of a flag U̅_1 ⊂U̅_5⊂_x=V_8/U_1 such that v mod U_1 belongs to ∧^4 U̅_5+ ∧^2 U̅_5∧U̅_1 ∧_x⊂∧^4_x. This flag originates from a flag (U_1⊂ U_2⊂ U_6⊂ V_8) (such that U̅_1=U_2/U_1, etc.), and we can rewrite the previous condition as asking that x=[U_1] belongs to the projection of Z(v):={ (U_1⊂ U_2⊂ U_6)∈ Fl(1,2,6,V_8), v∈∧^4 U_6+ ∧^2 U_6∧ U_2∧ V_8 + ∧^3 V_8∧ U_1}. This is the zero locus of a global section of a globally generated bundle, obtained as a quotient of the trivial bundle with fiber ∧^4V_8. For v general this section is general, so Z(v) is smooth. Moreover the projection Z(v)→ D_Y_4(v)⊂(V_8), obtained by just forgetting U_2 and U_6, is birational. §.§ Self-duality of the Coble quartic Because of the natural isomorphism ∧^4V_8≃∧^4V_8^∨ (defined up to scalar, or more precisely up to the choice of a volume form on V_8), the same constructions can be performed in the dual projective space (V_8^∨). This is related to the remarkable fact that the Coble quartic is projectively self-dual <cit.>. Let us show how this duality statement easily follows from our approach in terms of orbital degeneracy loci. First consider a general point [U_1] of =D_Y_1(v). As we have seen in the previous section, there exists (a unique) U_4⊃ U_1 such that v belongs to (∧^2 U_4) ∧ (∧^2 V_8) +∧^3 V_8∧ U_1. Reducing modulo (∧^2 U_4) ∧ (∧^2 V_8), we get v̅∈∧^3(V_8/U_4)⊗ U_1≃ (V_8/U_4)^∨. In general v̅ is nonzero and defines a hyperplane in V_8/U_4, that is, a hyperplane U_7 of V_8, containing U_4. Note that this exactly means that v∈ (∧^2 U_4) ∧ (∧^2 V_8) +∧^3 U_7∧ U_1. (U_7) is the tangent hyperplane to at [U_1]. Let denote the variety of flags (U_1⊂ U_4) such that v belongs to (U_1,U_4):=(∧^2 U_4) ∧ (∧^2 V_8) +∧^3 V_8∧ U_1. We know that the projection is birational. Moreover, as a subvariety of the flag manifold Fl(1,4,V_8), is the zero-locus of the section of the vector bundle ∧^4V_8/ defined by v. Let (U_1,U_4) denote the stabilizer of the flag (U_1⊂ U_4) inside (V_8). The tangent space to Fl(1,4,V_8) at the corresponding point is the quotient (V_8)/(U_1,U_4); and the tangent space to is the image, in this quotient, of the space of endomorphisms X∈(V_8) such that X(v) belongs to (U_1,U_4), as follows from the normal exact sequence. The tangent space to is then the image of this space inside (V_8)/(U_1)≃(U_1,V_8/U_1), where (U_1) denotes the stabilizer of the line U_1. So our claim will follow, if we can check that any X∈(V_8) such that X(v) belongs to (U_1,U_4), must send U_1 into the hyperplane U_7. But (<ref>) implies, once we apply X, that X(v)∈ U_4 ∧ (∧^3 V_8) +∧^3 U_7∧ X(U_1). If X(v) belongs to (U_1,U_4), it has to vanish modulo U_4. 
So ∧^3 U_7∧ X(U_1) must also vanish modulo U_4, which is the case only if X(U_1)⊂ U_7. Recall that once we fix a volume form on V_8, we get an isomorphism of ∧^4V_8 with ∧^4V_8^∨. We will denote by v^∨ the image of v. (Strictly speaking it is uniquely defined only up to scalar, but this is irrelevant in our constructions.) To make things clearer we will denote by (v) the Coble quartic defined by v in (V_8), and by (v^∨) the Coble quartic defined by v^∨ in (V_8^∨). The projective dual of (v) is (v^∨). For [U_1] a general point of , we have a flag (U_1⊂ U_4⊂ U_7) such that v belongs to (∧^2 U_4) ∧ (∧^2 V_8) +∧^3 U_7∧ U_1. Choose an adapted basis e_1,… , e_8, so that e_1 generates U_1, etc. The condition means that v is a linear combination of elementary tensors e_i∧ e_j∧ e_k∧ e_ℓ with i,j≤ 4, and of e_5∧ e_6∧ e_7∧ e_1. Now recall that if the chosen volume form on V_8 is e_1∧⋯∧ e_8, and e_1^∨,… , e_8^∨ is the dual basis of e_1,… , e_8, then the isomorphism of ∧^4V_8 with ∧^4V_8^∨ sends the elementary tensor e_i∧ e_j∧ e_k∧ e_ℓ to ± e_p^∨∧ e_q^∨∧ e_r^∨∧ e_s^∨, where {i,j,k,l}∩{p,q,r,s}=∅. As a consequence, v^∨ will be a linear combination of elementary tensors e_p^∨∧ e_q^∨∧ e_r^∨∧ e_s^∨ with p,q≥ 5, and e_2^∨∧ e_3^∨∧ e_4^∨∧ e_8^∨. In other words, v^∨∈ (∧^2 U_4^⊥) ∧ (∧^2 V_8^∨) +∧^3 U_1^⊥∧ U_7^⊥. This is exactly the condition that ensures that [U_7^⊥] belongs to (v^∨). Thanks to the previous Lemma we deduce that (v)^∨⊂(v^∨). Moreover, the symmetry between U_1 and U_7^⊥ implies that in general, U_1 can be recovered from U_7 exactly as U_7 is constructed from U_1, which means that (v)^∨ is birationally equivalent to (v). Finally, since (v)^∨ and (v^∨) are both irreducible hypersurfaces, they must be equal. The previous discussion shows that it is natural to define the variety (v,v^∨)⊂ Fl(1,4,7,V_8) parametrizing the flags (U_1⊂ U_4⊂ U_7⊂ V_8) satisfying condition (<ref>). This is a smooth variety dominating birationally both (v) and (v^∨); there is a diagram @-2ex Fl(1,4,7,V_8) Fl(1,4,V_8) (v,v^∨)[dr][dl]@^(->[u] Fl(4,7,V_8) (v)[dl]@_(->[ul] (v^∨)[dr]@^(->[ur] (V_8)⊃(v) @–>[rrrr]^d (v^∨)⊂(V_8^∨) One recovers that way the constructions explained is <cit.>. We used the suggestive notation d for the Gauss map, which sends a smooth point of (v) to its tangent hyperplane, given by the differential of the cubic's equation. §.§ The Cartan subspace Recall that a Cartan subspace for the _2-graded Lie algebra _7=(V_8)⊕∧^4V_8 is a maximal subspace of ∧^4V_8, made of elements of _6 which are semisimple and commute <cit.>. Among other nice properties, a general element of ∧^4V_8 is (V_8)-conjugate to (finitely many) elements of any given Cartan subspace. An explicit Cartan subspace of ∧^4V_8 is worked out in <cit.>. It coincides with the space of Heisenberg invariants provided in <cit.>. Here is a list of seven generators, for a given basis e_1,… , e_8 of V_8: [ h_1 = e_1∧ e_2∧ e_3∧ e_4 +e_5∧ e_6∧ e_7∧ e_8 ,; h_2 = e_1∧ e_3∧ e_5∧ e_7 +e_6∧ e_8∧ e_2∧ e_4 ,; h_3 = e_1∧ e_5∧ e_6∧ e_2 +e_8∧ e_4∧ e_3∧ e_7 ,; h_4 = e_1∧ e_6∧ e_8∧ e_3 +e_4∧ e_5∧ e_7∧ e_2 ,; h_5 = e_1∧ e_8∧ e_4∧ e_5 +e_7∧ e_2∧ e_6∧ e_3 ,; h_6 = e_1∧ e_4∧ e_7∧ e_6 +e_2∧ e_3∧ e_8∧ e_5 ,; h_7 = e_1∧ e_7∧ e_2∧ e_8 +e_3∧ e_5∧ e_4∧ e_6. ] Combinatorially, each of these generators is given by a pair of complementary fourtuples of indices in {1,…,8}. Each of these 14 fourtuples shares a pair of indices with any other distinct, not complementary fourtuple. 
This is the property that ensures the commutation in _6, since the Lie bracket of _6, restricted to ∧^4V_8, is given by the unique (up to scalar) _8-equivariant morphism ∧^2(∧^4V_8) ∧^4V_8⊗∧^4V_8 S_21111110V_8≃_8. If we start with two elementary tensors given by fourtuples with a common pair of indices, we can include them into ∧^4U_6 for some codimension two susbpace U_6⊂ V_8. But then the Lie bracket factors through S_21111110U_6={0}, so it has to vanish. Each pair of indices in {1,…,8} belongs to three of the 14 fourtuples. For any triple (ijk) among (124), (137), (156), (235), (267), (346), (457), h_i, h_j and h_k share four disjoint pairs (for example h_1, h_2 and h_4 share (13), (24), (57), (68)). These seven triples always meet in exactly one index, so they are in correspondence with the lines in a Fano plane. More on this in <cit.>. A nice consequence of this description is the following (v) and (v^∨) are isomorphic. Since our v is general, we may suppose up to the action of (V_8) that v belongs to our Cartan subspace above, given in terms of the basis e_1, …, e_8 of V_8. Denote the dual basis by e_1^∨, … , e_8^∨, and choose the volume form e_1^∨∧⋯∧ e_8^∨ on V_8. Then the induced isomorphism from ∧^4V_8 to ∧^4V^∨_8 sends e_I=e_i_1∧ e_i_2∧ e_i_3∧ e_i_4 to ϵ_I,Je_J, where J is the complement of I in {1,… , 8} and ϵ_I,J is the sign of the permutation (i_1,… , i_4,j_1,…, j_4). Now, observe that for each i, h_i is of the form e_K+e_L for two complementary sets of indices K and L. Moreover, one can check that ϵ_K,L is always equal to 1. This implies that h_i^∨=e^∨_K+e^∨_L has exactly the same expression as h_i, in terms of the dual basis. In other words the map v↦ v^∨, when restricted to our Cartan subspace, is essentially the identity, and the claim follows. §.§ The abelian threefold Remarkably, one can construct the abelian threefold whose Kummer variety is by considering another orbital degeneracy locus. The idea is to use the flag variety Fl(1,7,V_8), the incidence correspondence in (V_8)×(V_8^∨) parametrizing flags (U_1⊂ U_7). The rank six quotient bundle =_7/_1 allows to realize the space of four-forms as ∧^4V_8^∨=H^0(Fl(1,7,V_8), p_1^*(1)⊗∧^3^∨). Exactly as before, this allows to associate to any (V_6)-orbit closure Y in ∧^3V_6 an orbital degeneracy locus D_Y(v)⊂ Fl(1,7,V_8). Here V_6 is a six-dimensional vector space. In particular, the cone Y_10 over the Grassmannian G(3,V_6) yields, for v generic, a smooth threefold :=D_Y_10(v). In similar terms as for the other orbital degeneracy loci, this threefold is ={ [U_1⊂ U_7]∈ Fl(1,7,V_8) |∃ U_1⊂ U_4⊂ U_7, v∈∧^3 U_4∧ V_8 +∧^4 U_7 +∧^3 V_8∧ U_1} . is a torsor over an abelian threefold, and the projection to (V_8) is a double cover of . This is <cit.>. Over a point of , given by a flag U_1⊂ U_7, the four-form v defines a decomposable tensor in ∧^3(U_7/U_1). This tensor is never zero if v is general, and therefore defines a four-dimensional space U_4 such that U_1⊂ U_4⊂ U_7. Hence a rank-four vector bundle _4 on , a subbundle of the trivial bundle V_8⊗_. The proper orbit closures of the (V_6)-action on ∧^3V_6 are, apart from the cone over the Grassmannian, a quartic hypersurface and the codimension five locus of partially decomposable tensors. In our relative setting, the quartic induces a hypersurface of bidegree (2,2), whose singular locus is an eightfold that is singular exactly along . So once again we get a very interesting singular hypersurface. It would be very nice to find a modular interpretation of these loci. 
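Before moving on, note that the two combinatorial claims made above about the Cartan subspace, namely that any two distinct, non-complementary fourtuples among the 14 share exactly a pair of indices, and that every pair of indices lies in exactly three of the 14 fourtuples, can be checked mechanically. The following short Python check confirms both claims from the index sets listed above (it is a convenience for the reader; variable names are of course illustrative):

from itertools import combinations

# The seven generators h_1,...,h_7, each recorded by its pair of
# complementary index fourtuples in {1,...,8} (see the list above).
generators = [
    ({1, 2, 3, 4}, {5, 6, 7, 8}),
    ({1, 3, 5, 7}, {2, 4, 6, 8}),
    ({1, 2, 5, 6}, {3, 4, 7, 8}),
    ({1, 3, 6, 8}, {2, 4, 5, 7}),
    ({1, 4, 5, 8}, {2, 3, 6, 7}),
    ({1, 4, 6, 7}, {2, 3, 5, 8}),
    ({1, 2, 7, 8}, {3, 4, 5, 6}),
]
fourtuples = [t for pair in generators for t in pair]  # 14 fourtuples

# Any two distinct, non-complementary fourtuples share exactly a pair of indices.
for a, b in combinations(fourtuples, 2):
    if a | b != set(range(1, 9)):          # skip complementary pairs
        assert len(a & b) == 2, (a, b)

# Every pair of indices in {1,...,8} lies in exactly three of the 14 fourtuples.
for pair in combinations(range(1, 9), 2):
    count = sum(set(pair) <= t for t in fourtuples)
    assert count == 3, (pair, count)

print("both combinatorial claims check out")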
§ LINES FROM ALTERNATING FORMS In this section we will identify the two covering families of lines in _C(2) in terms of orbital degeneracy loci; this will give a very explicit description of these families in terms of existence of special flags of vector spaces. As a consequence of this we will obtain Theorem <ref>, in which we identify the moduli space _C(2,L), for L of odd degree, with an orbital degeneracy locus in G(2,V_8) associated to v∈∧^4 V_8. §.§ The ruling and its lines Recall the definition of the abelian threefold from Equation (<ref>). Our next result relates it to the ruling described in section <ref>. The family (_4) over coincides with the ruling () over ^1(C) of the moduli space _C(2). We need to prove that for any flag (U_1⊂ U_7) in , defining the four-plane U_4, the linear space (U_4) is contained in . If we can show that is even covered by this family of ^3's, we will be done since the ruling is unique. So let us prove these two statements. The image of (_4) in (V_8) is contained in the Coble quartic. Consider a point of and the associated flag U_1⊂ U_4⊂ U_7. By the very definition of , this means we can write v=e_1∧ w+v'+e_2∧ e_3∧ e_4∧ e_8 for some vectors e_1∈ U_1 and e_2,e_3,e_4∈ U_4, with w∈∧^3V_8 and v'∈∧^4U_7. Under the generality hypothesis we can suppose that U_4=⟨ e_1,e_2,e_3,e_4⟩, and it suffices to check that U'_1= e_2 defines a point of . Modulo e_1 and e_2, the tensor w if a three-form in six variables. Since the secant of the Grassmannian G(3,6) in its Plücker embedding fills in the whole ambient projective space, generically we can write w=a∧ b∧ c+d∧ e∧ f modulo e_1 and e_2, for some vectors a,b,c,d,e,f. Modulo e_1 and e_2 again, v' is a four-form in only five variables, so it defines a hyperplane that will cut the three-dimensional space ⟨ a,b,c⟩ in codimension one, say along ⟨ a,b⟩, and similarly it will cut ⟨ d,e,f⟩ in codimension one, say along ⟨ d,e⟩. In other words, we may suppose that modulo e_1 and e_2, v'=a∧ b∧ d∧ e. But then, modulo e_2 we get v=e_1∧ (a∧ b∧ c+d∧ e∧ f)+a∧ b∧ d∧ e. So v belongs to (∧^2U'_4)∧ (∧^2V_8)+∧^3V_8∧ U'_1 if U'_4=⟨ e_1,e_2,a,d⟩. The existence of such a space U'_4⊃ U'_1 is precisely the required condition for U'_1 to belong to , so we are done. The family (_4) covers the Coble quartic. This can be done by a Chern class computation, being equivalent to the fact that the degree of (_4) with respect to the relative hyperplane class does not vanish. Notice that by Equation (<ref>), can be considered as a subvariety of Fl(1,4,7,V_8). Even more, it is the zero locus in the flag manifold of the section v of the rank 19 vector bundle :=∧^4 V_8 /(∧^3 _4∧ V_8 + ∧^4 _7 + ∧^3 V_8∧_1) over Fl(1,4,7,V_8) defined by v. Since this section is general, the class of in the Chow ring of the flag manifold is the top Chern class of . So the degree we are looking for is ∫_(_4)c_1(_1^∨)^6=∫_s_3(_4^∨)=∫_Fl(1,4,7,V_8)c_19()s_3(_4^∨)=32, as can be computed using <cit.>. This implies the claim. Remark. 32 is the expected number: since the Coble hypersurface has degree 4, we recover the fact that exactly 8 ^3's of the ruling pass through a general point of the quartic, as recalled in the proof of Proposition <ref>. The previous statement allows to reconstruct the curve C purely in terms of the four-form and its associated orbital degeneracy loci. Indeed, we have recalled that a ^3 of the ruling meets the Kummer threefold along a copy of the curve. 
For any point of , with associated flag (U_1⊂ U_4⊂ U_7), the intersection of (U_4) with is a copy of the curve C. And of course we also recover the family of lines in the ruling as a quadric bundle. Indeed, the same arguments as in section 2.3 yield: The total space of the fiber bundle G(2,_4) over maps birationally to the family _R in G(2,V_8). §.§ Hecke lines from alternating forms In the previous section we have defined some ODL D_Y_i(v) from orbits inside the space of three-forms in seven variables (i.e., in the notation of the previous sections, inside ∧^3 V_7). We will use a similar construction to obtain ODL inside the Grassmannian G(2,V_8). The Borel-Weil theorem gives an isomorphism ∧^4V_8≃ H^0(G(2,V_8),∧^4)=H^0(G(2,V_8),∧^2^∨(1)), where denotes the rank six quotient vector bundle on G(2,V_8). Thus, in this case, we need to look at two-forms in six variables. If V_6 is as before a six-dimensional complex vector space, ∧^4 V_6^∨≃∧^2 V_6 has only two proper (V_6)-orbits closures, that we will index by their codimension: the Pfaffian cubic hypersurface Z_1 and its singular locus Z_6, that is the cone over the Grassmannian G(2,V_6). These allow us to construct inside G(2,V_8) the two orbital degeneracy loci D_Z_1(v) and D_Z_6(v). Let us first consider D_Z_6(v), which can also be defined by D_Z_6(v):={[U_2]∈ G(2,V_8)|∃ U_6 ⊃ U_2, v ∈∧^3 V_8∧ U_2+∧^4 U_6}. D_Z_6(v) is a smooth Fano sixfold of even index. By definition, D_Z_6(v) is the projection in G(2,V_8) of the locus Z_6(v) in Fl(2,6,V_8) parametrizing flags (U_2⊂ U_6⊂ V_8) such that v belongs to the 56-dimensional space ∧^3 V_8∧ U_2+∧^4 U_6. Taking the quotient of ∧^4V_8 by the latter, we get a rank 14 vector bundle on Fl(2,6,V_8). Moreover v defines a generic section of this vector bundle and Z_6(v) is the zero-locus of this section. Since Fl(2,6,V_8) has dimension 20, we deduce that Z_6(v) is smooth of dimension 6, and that its canonical bundle is given by the adjunction formula. A straightforward computation yields K_Z_6(v)=(U_2)^-3⊗(U_6)^5. On the other hand, for any [U_2]∈ G(2,V_8), the quotient of ∧^4V_8 by ∧^3 V_8∧ U_2 is isomorphic to ∧^4(V_8/U_2)≃∧^2(V_8/U_2)^∨⊗ (V_8/U_2). This is a space of skew-symmetric forms in six dimensions, and the existence of U_6 exactly means that v defines a skew-symmetric form in ∧^2(V_8/U_2)^∨⊗ (V_8/U_2) whose rank is at most two. In fact the rank must be exactly two, since for v generic, a simple dimension count shows that the rank can never be zero. In particular the projection of Z_6(v) to D_Z_6(v) is an isomorphism. More than that, the kernel of our two form on V_8/U_2 is U_6/U_2, so we get a non-degenerate skew-symmetric form on the quotient V_8/U_6, which is therefore identified with its dual. To be precise, since the skew-symmetric form has values in (V_8/U_2), we get an isomorphism V_8/U_6≃ (V_8/U_6)^∨⊗ (V_8/U_2). Taking determinants, we deduce that (U_2)^2≃ (U_6)^2; in other words, the line bundle = (U_6)⊗ (U_2)^∨ is 2-torsion on Z_6(v). But then we can rewrite the canonical bundle as K_Z_6(v)=(U_2)⊗(U_6)⊗^⊗ 4. Note that (U_2)^∨⊗(U_6)^∨ is very ample on Fl(2,6,V_8) since it defines its canonical Plücker type embedding. Since is torsion we deduce that Z_6(v) is Fano. But then its Picard group is torsion free, so is actually trivial. So finally K_Z_6(v)=(U_2)^2, hence the index is even. The previous discussion shows that D_Z_6(v) is a Pfaffian locus defined by a skew-symmetric map ψ_v : ^∨ (1) associated with v. 
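For the record, the exponent bookkeeping behind the rewriting of the canonical bundle in the proof above is simply det(U_2)^-3⊗det(U_6)^5=(det(U_2)⊗det(U_6))⊗(det(U_6)⊗det(U_2)^∨)^⊗4; so once the 2-torsion line bundle det(U_6)⊗det(U_2)^∨ is known to be trivial, the canonical bundle of Z_6(v) reduces to det(U_2)⊗det(U_6)≃det(U_2)^2, the square of a line bundle, whence the even index.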
The rank four sheaf 𝒦er(ψ_v) (which is _6/_2 in the previous proof) fits into an exact sequence 0𝒦er(ψ_v)^∨ (1)𝒦er(ψ_v)^∨ (1) 0. Let us set once and for all the more compact notation D:=D_Z_6(v) and G:=G(2,V_8). The exact sequence (<ref>) allows to describe the normal bundle _D/G as follows. We have isomorphisms _D/G≃∧^2𝒦er(ψ_v)^∨ (1), and _D/G^∨≃_D/G(-2). We will use this information later on. Our next goal is to prove the following theorem. For a generic v∈∧^4 V_8 the orbital degeneracy locus D_Z_6(v) is isomorphic with the moduli space _C(2,_C(c)) of semistable rank two vector bundles on C with fixed determinant _C(c), for a certain point c∈ C. Such embeddings defined by Hecke lines are studied in <cit.>, and there is one, denoted φ_p in loc. cit., for each choice of a point p on the curve C. Here we only get one of these embeddings, in agreement with the already mentionned fact that v does not only determine a genus three curve, but a marked point on this curve. An interesting consequence is that we know a minimal resolution of the structure sheaf of _C(2,_C(c)) inside the Grassmannian G(2,V_8). From this resolution it is easy to check that the intersection with a general copy of G(2,6) inside G(2,V_8) is a K3 surface of genus 13. This kind of description is used in <cit.> to provide a new model for the general such K3 surface. Let us begin by showing that D_Z_6(v) defines a six-dimensional family of Hecke lines. Let [U_2]∈ D_Z_6(v), then (U_2)⊂(V_8) is a line in _4. Let [U_1]∈(U_2) be a point in the line. By definition of D_Z_6(v), one can write (v U_1)=u_2∧ v'+ a∧ b∧ c∧ d for some u_2∈ U_2, some trivector v' and some vectors a,b,c,d. The trivector v' is a trivector in six variables, therefore it can in general be written as e∧ f∧ g + h∧ i∧ l for some vectors e,f,g,h,i,l, since the secant variety of G(3,6) fills the full Plücker space. Now, modulo U_2, (⟨ a,b,c,d ⟩∩⟨ e,f,g ⟩)≥ 1 and (⟨ a,b,c,d ⟩∩⟨ h,i,l ⟩)≥ 1. Thus we can suppose that a=e and b=h. But then if we let U_4=⟨ U_2,a,b ⟩, it is straightforward to check that ( v U_1) ∈ (∧^2 U_4)⋀ (∧^2 V_8). This ensures that [U_1] belongs to _4. The point-line incidence variety of the family of lines parametrized by D_Z_6(v) is given by the projective bundle (_2)→ D_Z_6(v). The family of lines parametrized by D_Z_6(v) covers _4. This is again a Chern class computation. Indeed, by irreducibility of the varieties in play, it is sufficient to check that, if _1^∨ denotes the relative dual tautological line bundle of (_2)→ D_Z_6(v), then c_1(_1^∨)^6≠ 0. This implies that the image of (_2) inside (V_8) has dimension at least six, and is thus the Coble quartic _4 by Proposition <ref>. Notice that one can work directly on Z_6(v), since it is isomorphic to D_Z_6(v). Since Z_6(v) can be constructed as the zero locus of a section of a vector bundle inside the flag variety Fl(2,4,V_8), we can verify that c_1(_1^∨)^6≠ 0 with <cit.> by constructing the coordinate ring of the zero locus Z_6(v) and of the projective bundle (_2) over it, similarly to what we did in the proof of Lemma <ref>. The lines parametrized by D_Z_6(v) are Hecke lines. Suppose by contradiction that the lines parametrized by D_Z_6(v) are not Hecke. Since they form a covering family, they must be lines in the ruling, i.e. D_Z_6(v)⊂_R. Now recall that _R is a birational image of the quadric bundle G(2,_4) over . The pre-image of D_Z_6(v) in G(2,_4) is rationally connected, being birationally equivalent to the Fano manifold D_Z_6(v). But then its projection to must be constant. 
Since the fibers of this projection are only four-dimensional, while the dimension of D_Z_6(v) is six, we get a contradiction. Proof of Theorem <ref>. Recall that the family _H of Hecke lines has dimension seven, so D_Z_6(v) cannot be the whole family. In fact _H has a rational map η to C, and by the same argument as above, the fact that D_Z_6(v) is Fano ensures that its image in _H is contained in a fiber of η, over some point c∈ C. But then the morphism from _C(2,_C(c)) to _H is birational onto its image D_Z_6(v). Since _C(2,_C(c)) has Picard rank one <cit.>, this morphism must be an isomorphism. § A COBLE TYPE QUADRIC HYPERSURFACE The aim of this section is to show that the Coble quadric hypersurface in G(2,V_8) deserves its name, in the sense that it is singular along the moduli space and it is uniquely determined by this property. So the section is mainly devoted to the proof of Theorem <ref>. In the last part we also prove a self-duality statement concerning this hypersurface which is analogous to the self-duality of the Coble quartic in (V_8). §.§ The relative Pfaffian As we have seen, the fact that D_Z_6(v) is defined as a Pfaffian locus in G(2,V_8) implies that it is the singular locus of a Pfaffian hypersurface, defined as the first degeneracy locus D_Z_1(v) of the skew-symmetric morphism ^∨(1) defined by v. The hypersurface D_Z_1(v) of G(2,V_8) is a quadratic section of the Grassmannian. It is the unique quadratic section that is singular along D_Z_6(v). Starting from a genus three curve C and its Kummer threefold embedded in ^7 by the linear system |2Θ|, the original observation of Coble was that there exists a unique Heisenberg-invariant quartic that is singular along the Kummer. Beauville proved much later that the Heisenberg-invariance hypothesis was actually not necessary <cit.>. In our context the curve and its Heisenberg group are not easily available (although there are connections between the latter and the Weyl group W(E_7) of the theta-representation ∧^4V_8), so we do not use any Heisenberg-invariance hypothesis. §.§.§ Structure of the proof of Theorem <ref> Let us write D for D_Z_6(v) and G for G(2,V_8), for simplicity. That D_Z_1(v) is a quadratic section of G follows from the fact that it is defined by a rank six Pfaffian, obtained as the image of v by the cubic morphism S^3(∧^2^∨(1))∧^6^∨(3)= _G(2). In order to prove that this is the only quadratic section that is singular along D, recall that the conormal bundle of D in the Grassmannian G is the quotient of the ideal sheaf _D by its square ^2_D. Twisting by _G(2) and taking cohomology, we get an exact sequence 0 H^0(G,^2_D(2)) H^0(G,_D(2)) H^0(D,_D/G^∨(2)) H^1(G,^2_D(2)). Observe that H^0(G,_D(2)) parametrizes quadratic sections of G (up to scalar) that contain D, while, since D is smooth, H^0(G,^2_D(2)) parametrizes quadratic sections that are singular along D. Our claim is that the latter space is one-dimensional. This will be proved in three steps: first, compute the dimension of the space of quadrics containing D; second, bound H^0(D,_D/G^∨(2)) from below; third, prove that H^1(G,^2_D(2)) vanishes. These results are contained in Lemmas <ref>, <ref>, <ref>. From the fact that H^1(_D^2(2))=0, the exact sequence 0 H^0(_D^2(2)) H^0(_D(2)) H^0(_D/G^∨(2))= H^0(_D/G) 0, knowing that h^0(_D(2))=71 and h^0(_D/G)≥ 70, will allow us to conclude that h^0(_D^2(2))≤ 1 and the proof will be complete. 
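Numerically, dim∧^4V_8=70, and the one-dimensional degree-two cohomology of the factor S^2^∨(-1), sitting in homological degree two of the resolution, contributes exactly one extra dimension in degree zero, so that h^0(G,_D(2))=70+1=71 as claimed.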
§.§.§ Quadrics containing the moduli space Let us count the quadric sections of G=G(2,V_8) that contain the moduli space D≃_C(2,L). h^0(G,_D(2))=71. Let us first recall the classical minimal resolution of the ideal I generated by submaximal Pfaffians of a generic skew-symmetric matrix of size 6; in other words, the ideal of the cone over the Grassmannian G(2,V_6) inside ∧^2V_6. Letting S=[∧^2V_6], this resolution is the following <cit.>: @-2.5ex 0 I[u] ∧^4V_6^∨⊗ S(-2)[u] S_21111V_6^∨⊗ S(-3)[u] S_311111V_6^∨⊗ S(-4)[ru] ⊕ S_22222V_6^∨⊗ S(-5)[lu] S_322221V_6^∨⊗ S(-6)[ru][lu] S_332222V_6^∨⊗ S(-7)[u] (V_6^∨)^3⊗ S(-9)[u] 0[u] Since D is a Pfaffian locus of the expected dimension, given by a skew-symmetric map ^∨(1), we deduce the following free resolution of its twisted ideal sheaf (we used identifications like S_332222^∨=∧^2^∨(-2)) : @-2.5ex 0 _D(2)[u] ∧^4[u] ()[u] S^2^∨(-1) [ru] ⊕ S^2(-1) [lu] ()(-2)[ru][lu] ∧^2(-3)[u] _G(-4)[u] 0[u] Note that this resolution is self-dual, up to twist. Moreover, using the Bott-Borel-Weil theorem one can check that all the factors are acyclic homogeneous vector bundles, with two exceptions: ∧^4 has a non zero space of sections, isomorphic to ∧^4V_8; and S^2^∨(-1), which is one of the two irreducible factors of Ω_G^2, has a one dimensional cohomology group in degree two. We end up with a canonical exact sequence 0∧^4V_8 H^0(G,_D(2)) 0, and our claim follows. Being defined by a cubic Pfaffian, the equation of the hypersurface D_Z_1(v) must be a cubic (V_8)-covariant of v in ∧^4V_8, taking values in H^0(_G(2))≃ S_22V_8. In fact it is a (V_8)-covariant, that by homogeneity with respect to V_8, must take its values in S_22V_8⊗(V_8). One can check that the latter module has multiplicity one inside S^3(∧^4V_8), so this covariant is unique up to scalar. For example, it can be obtained as the composition S^3(∧^4V_8)↪ S^3(∧^2V_8⊗∧^2V_8)→ S^3(∧^2V_8)⊗ S^3(∧^2V_8)→ S^3(∧^2V_8)⊗∧^6V_8 → → S^3(∧^2V_8)⊗∧^2V_8^∨⊗(V_8) → S^2(∧^2V_8)⊗(V_8) → S_22V_8⊗(V_8). Following the natural morphisms involved in these arrows, this would allow to give an explicit formula for an equation of the quadratic hypersurface D_Z_1(v) in terms of the coefficients of v (this was done in <cit.> for the Coble quartic itself). It would suffice to do this when v belongs to our prefered Cartan subspace; this is in principle a straightforward computation but the resulting formulas would be huge. The embedding of ∧^4V_8 inside H^0(G,_D(2)) in (<ref>) is given by the derivatives of D_Z_1(v) with respect to v, that is, can be obtained by polarizing the cubic morphism discussed in the previous remark. On the other hand, modulo these derivatives, (2) shows that there is a uniquely defined "non-Pfaffian" quadric vanishing on D. This non-Pfaffian quadric comes from the contribution of S^2^∨(-1) in the resolution of _D(2). Since in this resolution, these two terms are connected one to the other through three morphisms having respective degree two, one, and two with respect to v, the non-Pfaffian quadric must be given by a quintic covariant in v. And indeed, a computation with LiE <cit.> shows that (S^5(∧^4 V_8), S_22V_8⊗ ( (V_8))^2)^(V_8)≃^2. A special line in this space of covariants is generated by the cubic covariant defining the Pfaffian quadric, twisted by the invariant quadratic form (defined by the wedge product). The quotient is our non-Pfaffian quadric. As before we could in principle compute it explicitely by constructing a specific covariant. 
One way to construct such a covariant is to observe that S^2(∧^4 V_8)⊃ S_221111V_8⊂∧^2 V_8⊗∧^6 V_8= ∧^2 V_8⊗∧^2 V_8^∨⊗ (V_8). Taking the square of the resulting morphism we can define a quartic covariant S^4(∧^4 V_8)→ S^2(∧^2 V_8)⊗ S^2(∧^2 V_8^∨)⊗ (V_8)^2→ S_22V_8⊗∧^4 V_8^∨⊗ (V_8)^2, hence the desired quintic covariant. §.§.§ The normal bundle of D in G(2,V_8) Let us now bound from below the dimension of H^0(D,_D/G^∨(2)). By Lemma <ref>, this space is isomorphic with H^0(_D/G), which parametrizes infinitesimal deformations of D inside G. Some of these deformations must be induced by the deformation of [v] inside (∧^4V_8), which should provide 69 parameters. But recall that the family _H of Hecke lines inside SU_C(2) is a subvariety of G(2,V_8), birationally fibered over the curve C, with one fiber isomorphic to D≃_C(2,_C(c)) for some point c∈ C. So we expect one extra deformation of D to be obtained by deforming c in the curve C. That these deformations are independent is essentially the content of h^0(D,_D/G)≥ 70. The locus in ∧^4 V_6≃∧^2V_6^∨ corresponding to skew-symmetric forms of rank at most 2 is desingularized by the total space of ∧^4 _4 over the Grassmannian G(4,V_6). As a consequence of this and of <cit.>, the Pfaffian locus D is desingularized by the zero locus Z:=Z_6(v) inside Fl(2,6,V_8) of a (general) section of the bundle =∧^4 (V_8/_2)/∧^4 (_6/_2). This bundle is an extension of irreducible bundles 0 →∧^3 (_6/_2)⊗ (V_8/_6) →→∧^2 (_6/_2)⊗(V_8/_6) → 0. By dimension count, Z is in fact isomorphic to D via the natural projection. Under this isomorphism and by Lemma <ref>, _D/G can be identified with the restriction of :=∧^2 (_6/_2)⊗(V_8/_6) to Z. In order to compute the cohomology of this restriction we can tensorize with the Koszul complex ∧^∙^∨ of the global section of , whose zero locus is Z⊂ Fl(2,6,V_8). This gives the following resolution of _D/G by locally free sheaves on Fl(2,6,V_8) 0→∧^∙^∨⊗→_D/G→ 0. By applying the Bott-Borel-Weil Theorem we can compute the cohomology groups of the bundles ∧^k ^∨⊗, for all k ≥ 0. Those that do not vanish are the following: H^0(∧^0 ^∨⊗)=∧^4 V_8, H^0(∧^1 ^∨⊗)=, H^2(∧^3 ^∨⊗)= , H^3(∧^3 ^∨⊗)=^2, H^4(∧^4 ^∨⊗)= H^5(∧^4 ^∨⊗)=∧^4 V_8, H^4(∧^5 ^∨⊗)=(V_8)⊕^3, H^5(∧^5 ^∨⊗)=(V_8)⊕^4 , H^6(∧^5 ^∨⊗)=, H^6(∧^7 ^∨⊗)= H^7(∧^7 ^∨⊗)=, H^8(∧^9 ^∨⊗)= H^9(∧^9 ^∨⊗)=, H^12(∧^13^∨⊗)= H^13(∧^13^∨⊗)= . A direct consequence is that χ(_D/G)=70. Moreover, observe that H^q(∧^k ^∨⊗)=0 for q-k>1. Since these groups give the first page of the spectral sequence in cohomology induced by the Koszul complex of _Z twisted by , this implies that H^i(_D/G)=0 for i>1. Therefore h^0(_D/G)=χ(_D/G)+h^1(_D/G)≥ 70. §.§.§ An affine module M As usual V_6 denotes a six dimensional vector space. The ideal I of the cone over G(2,V_6) is generated by the submaximal Pfaffians of the generic skew-symmetric matrix of size 6; the (6)-module generated by these submaximal Pfaffians is ∧^4V_6^∨⊂ S^2(∧^2V_6^∨). The square of I is then generated by the symmetric square of this module, which decomposes as S^2(∧^4V_6^∨) = S_221111V_6^∨⊕ S_2222V_6^∨. The first component is ∧^2V_6^∨⊗ V_6^∨, and must be interpreted as parametrizing quartics that are multiples of linear forms by the Pfaffian cubic. The ideal they generate is S_+I_P, where S_+⊂ S is the irrelevant ideal, and I_P denotes the ideal of the Pfaffian hypersurface. Consider the exact sequence 0→ S_+I_P → I^2 → M:=I^2/ S_+I_P → 0. The quotient module M is generated by S_2222V_6^∨. 
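For the record, dim S_2222V_6^∨=105 and dim S_221111V_6^∨=15, consistently with dim S^2(∧^4V_6^∨)=15·16/2=120=105+15; in particular M has 105 minimal generators, all in degree four.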
According to <cit.>, the minimal resolution R_∙ of M has the Betti numbers of Table <ref>. The minimal resolution is _6-equivariant and it is not difficult to write it in terms of Schur functors. Indeed, we know that the quartic generators are parametrized by S_2222V_6^∨, so the first syszygy module must be contained in S_2222V_6^∨⊗∧^2V_6^∨, and it turns out that there is a unique _6-module of the correct dimension inside this tensor product. Proceeding inductively we arrive at the following conclusion: the minimal _6-equivariant resolution of the -module M has the following shape: @-2ex 0 M[u] S_2222V_6^∨⊗ S(-4)[u] (S_32221V_6^∨⊕[u] S_222211V_6^∨) ⊗ S(-5) (S_422211V_6^∨⊕ S_33222V_6^∨[u] ⊕ S_322221V_6^∨)⊗ S(-6) (S_432221V_6^∨⊕[u] S_422222V_6^∨)⊗ S(-7) S_333331V_6^∨⊗ S(-8)[lu] S_442222V_6^∨⊗ S(-8)[u] S_433332V_6^∨⊗ S(-9)[lu][u] S_443333V_6^∨⊗ S(-10)[lu][u] 0[u] Here vertical arrows have degree one and diagonal arrows have degree two. Notice that the complex in bold reproduces the resolution of the Pfaffian ideal I itself. As J. Weyman observed, one could also obtain this resolution by considering the natural resolution of the Pfaffian hypersurface given by the total space of the vector bundle ∧^2 over the Grassmannian G(4,V_6). The morphism π from (∧^2) to ∧^2V_6 is a resolution of singularities, and one can check that M is the push-forward by π of the module given by the pull-back of the line bundle (2) from the Grassmannian. Applying the geometric technique from <cit.>, one can extract the minimal resolution of M from the collection of (V_6)-modules given by F_i = ⊕_j≥ 0 H^j(G(4,V_6),(2)⊗∧^i+j(∧^2)^⊥). Here (∧^2)^⊥ is the kernel of the natural projection ∧^2V_6^∨→∧^2 ^∨. The bundle (∧^2)^⊥ is not semisimple but is an extension of (-1) by ^∨⊗ Q^∨. Remarkably, it is the contribution of (-1) that reproduces the minimal resolution of I (twisted) inside that of M. §.§.§ Relativizing M Now we want to use these results in the relative setting. Since ∧^4 is a vector bundle on G(2,V_8) which is locally isomorphic to ∧^2 V_6, we can relativize the construction of I and I_P and M. For convenience let us restrict to the complement of the zero section inside the total space of this vector bundle. We get sheaves of _-modules and ideals that we denote respectively by ', '_P, '. Note that since we avoid the zero section, we get an exact sequence 0→'_P →'^2 →' → 0. Then we consider v∈∧^4V_8 as a general section of ∧^4, that we interpret as a morphism from G=G(2,V_8) to the total space of ∧^4. By the definition of orbital degeneracy loci <cit.>, the ideal of D_Z_1(v) is _P:='_P⊗_G and the ideal of D=D_Z_6(v) is _D:='⊗_G. Let us also denote ='⊗_G. Of course these tensor produts are taken over _. There is an exact sequence 0→_P →_D^2 →→ 0. By the right exactness of tensor product, here by _G, we get an exact sequence _P →_D^2 →→ 0. But the map _P ⊂_D^2 (which expresses the fact that D is contained in the singular locus of the Pfaffian hypersurface) clearly remains an injection, and we are done. In order to control we will now consider the complex of vector bundles induced by the resolution we constructed for M. We can deduce a resolution of ' and then tensor out again by _G. In order to prove that we get a resolution of (the resolution given just below), we need to check that the Tor-sheaves of _-modules 𝒯or_i(',_G) vanish for i>0. All the Tor-sheaves we compute in the sequel will also be for _-modules. 
@-2ex 0 [u] S_2222(-4)[u] S_32221(-5)⊕[u] S_222211(-5) S_422211(-6)⊕ S_33222(-6)[u] ⊕ S_322221(-6) S_432221(-7)⊕[u] S_422222(-7) S_333331(-8)[lu] S_442222(-8)[u] S_433332(-9)[lu][u] S_443333(-10)[lu][u] 0[u] For any i>0, * 𝒯or_i('_P,_G)=0, * 𝒯or_i(_/',_G)=0, * 𝒯or_i('/'^2,_G)=0, * 𝒯or_i(',_G)=0. (1) is obvious since '_P is locally free. (2) is a consequence of the Generic Perfection Theorem (see <cit.>), since I and therefore ' is perfect, and D has the expected dimension. (3) is a consequence of (2), because '/'^2 is a locally free _/'-module (recall that since we have a generality assumption the singular locus is avoided). Finally to prove (4) observe first that by the long exact sequence of Tor's, 𝒯or_i(',_G)=𝒯or_ i+1(_/',_G)=0 for any i>0. Because of (3) this implies that 𝒯or_i('^2,_G)=0 for any i>0. Then we can use the exact sequence of Lemma <ref> to deduce that 𝒯or_i(',_G)=0 when i>1, and that there is an exact sequence 0𝒯or_1(',_G)_P_D^2 0. By Lemma <ref>, 𝒯or_1(',_G) vanishes as well, and we are done. (2) is acyclic. Twist the previous resolution of by (2) and deduce from the Bott-Borel-Weil theorem that all the bundles in the twisted resolution are acyclic. This implies the claim. H^i(_D^2(2))=0 for any i>0. This follows immediately from Lemmas <ref> and <ref>. This concludes the proof of Theorem <ref>. Note the following consequence: D has non-obstructed deformations. h^0(_D/G)=70 and h^i(_D/G)=0 for any i>0. §.§ Deforming the Pfaffian hypersurface We already observed that varying v in ∧^4V_8, we only get a codimension one family of deformations of D. The missing dimension is provided by the choice of the point on the curve C, but this is invisible in our constructions. We will nevertheless prove that the special quadric section of the Grassmannian deforms. For a generic point p∈ C, and the associated embedding φ_p: _C(2,𝒪_C(p)) ↪ G(2,V_8), there exists at most one quadric hypersurface Q_p in the Grassmannian, that is singular along _C(2,𝒪_C(p)). Such a quadric corresponds to a line in H^0(G(2,V_8), ^2__C(2,𝒪_C(p))(2)) and we have computed in the proof of Theorem <ref> that this space has dimension one for certain special points p. By semicontinuity this dimension remains smaller or equal to one for p generic. For the generic embedding φ_p, there exists a unique quadric hypersurface of G(2,V_8) that is singular along _C(2,𝒪_C(p)). Let us consider the embedding Q=D_Z_1(v)↪ G from Theorem <ref>. Let H'_Q/G be the so-called "locally trivial Hilbert scheme" parametrizing locally trivial deformations of Q⊂ G, as defined in <cit.>. Remark that the construction of <cit.> is done for finite singularities, but their arguments, as the authors underline in the introduction, go through for arbitrary singularities because of <cit.>. Let 𝒩'_Q/G= ( 𝒩_Q/G→𝒯^1_Q), where 𝒯^1_Q denotes the first cotangent sheaf of Q (as defined, for instance, in <cit.>). In order that the locally trivial Hilbert scheme be smooth at Q, by <cit.> we need that H^1(Q,𝒩'_Q/G)=0. If this happens, then h^0(Q,𝒩'_Q/G)=(H'_Q/G) and we will show that this equals 70. By <cit.> we have an exact sequence 0 → T_Q → T_G|_Q →_Q/G→𝒯^1_Q → 0 . Hence 𝒩'_Q/G coincides with the image of T_G/Q inside _Q/G, which is exactly the (twisted) jacobian ideal _Q/G(2) restricted to Q. In turn, the Jacobian ideal of the Pfaffian locus of 6× 6 matrices is exactly the ideal of 4× 4 skew-symmetric minors. This implies that _Q/G(2) is the twisted ideal _D(2)/_Q(2) of D inside _Q(2)=_Q(Q). 
Let us therefore consider the exact sequence 0 →_Q(2) →_D(2) →_Q/G(2)→ 0. By Lemma <ref> we have h^0(G,_D(2))=71 and in the proof of the same Lemma we showed that h^i(G,_D(2))=0, for i>0. On the other hand, we have _Q(2)=_G. Via the long cohomology exact sequence associated to sequence (<ref>), we deduce that h^0(G,_Q/G(2))=70 and h^i(G,_Q/G(2))=0 for i>0. Hence H'_Q/G is smooth of dimension 70 at [Q]. We have a natural map between Hilbert schemes σ: H'_Q/G→ H_D, where H_D is the component of the Hilbert scheme of G(2,V_8) that contains the point [D] defined by D. Both spaces have dimension 70 and are smooth respectively at [Q] and [D] by Corollary <ref>. In order to show that σ is dominant, it is enough to check that the induced morphism of tangent spaces is dominant. This is true because H^0(G,_Q/G(2)) and H^0(_D/G) are both dominated by H^0(_D(2)), and the morphism from _D(2) to _D/G factorizes through _Q/G(2). This concludes the proof. §.§ Grassmannian self-duality Exactly as we constructed the singular quadric hypersurface D_Z_1(v)⊂ G(2,V_8), there is another hypersurface D_Z_1(v^∨)⊂ G(2,V_8^∨)=G(6,V_8). Because of Proposition <ref> these two hypersurfaces are projectively isomorphic. But one should also expect some projective duality statement analogous to Theorem <ref>. Of course we cannot refer to classical projective duality, since we want to consider D_Z_1(v) and D_Z_1(v^∨) really as hypersurfaces in Grassmannians, not as subvarieties of the ambient projective spaces. It turns out that a version of projective duality in this setting (and for certain other ambient varieties than Grassmannians) was once proposed in <cit.> (that remained unpublished). We will refer to it as Grassmannian duality. The idea is the following. Consider, say, a hypersurface H in G(2,V_8) (or any Grassmannian, but let us restrict to the case we are interested in). At a smooth point h=[U_2] of H, the tangent space to H is a hyperplane in T_hG(2,V_8)=(U_2,V_8/U_2), or equivalently, a line in the dual space (V_8/U_2,U_2). If this line is generated by a surjective morphism, the kernel of this morphism is a four-dimensional subspace of V_8/U_2. Equivalently, this defines a six-dimensional space U_6 such that U_2⊂ U_6⊂ V_8. We get in this way a rational map from H to G(6,V_8), and we can define the Grassmannian dual H^∨ as the image of this rational map. For more details see <cit.>. Chaput has a remarkable Biduality Theorem generalizing the classical statement, according to which duality for subvarieties of Grassmannians is an involution <cit.>. So this Grassmannian duality is perfectly natural, and we have: D_Z_1(v)≃ D_Z_1(v^∨) is Grassmannian self-dual. Suppose that U_2 belongs to D_Z_1(v). By definition, this means that there exists U_4⊃ U_2 (unique in general) such that v ∈ U_2∧ (∧^3V_8)+(∧^2U_4)∧ (∧^2V_8). If we mod out by ∧^2U_4, we get a tensor in U_2⊗∧^3(V_8/U_4)≃ U_2⊗ (V_8/U_4)^∨, that is, a morphism from V_8/U_4 to U_2. Generically this morphism has full rank, and its kernel defines some U_6⊃ U_4. So we get a flag (U_2⊂ U_4⊂ U_6) such that v∈ U_2∧ (∧^2U_6)∧ V_8+(∧^2U_4)∧ (∧^2V_8). U_6 defines a point of D_Z_1(v^∨). Using adapted basis, one checks that condition (<ref>) implies that v^∨∈ U_6^⊥∧ (∧^2U_2^⊥)∧ V_8^∨+ (∧^2U_4^⊥)∧ (∧^2V_8^∨). In particular, v^∨ mod U_6^⊥ has rank at most four. U_6 defines a point of D_Z_1(v)^∨. 
Using a basis of V_8 adapted to the flag (U_2⊂ U_4⊂ U_6), we can rewrite relation (<ref>) in the form v=e_1∧ e_5∧ e_6∧ e_7+e_2∧ e_5∧ e_6∧ e_8+v', v'∈ (∧^2U_4)∧ (∧^2V_8), where U_2=⟨ e_1,e_2⟩ and U_6=⟨ e_1,… ,e_6⟩. We can describe infinitesimal deformations of U_2 by some infinitesimal deformations of the vectors in the adapted basis, say e_i↦ e_i+ϵδ_i, and we must keep a similar relation. Modding out by U_4, we only remain with the relation δ_1∧ e_5∧ e_6∧ e_7+δ_2∧ e_5∧ e_6∧ e_8=0 mod U_4, which we can simply rewrite as δ_18=δ_27. This relation describes the tangent hyperplane to D_Z_1(v) at U_2, as a hyperplane in (U_2,V_8/U_2), orthogonal to the morphism e_8^*⊗ e_1-e_7^*⊗ e_2. The kernel of this morphism is U_6/U_2, and we are done. These two Lemmas together imply that D_Z_1(v^∨) coincides with the Grassmannian dual to D_Z_1(v). The proof of the Theorem is complete. Note that we can resolve the singularities of D_Z_1(v) by considering flags (U_2⊂ U_4) as before, which gives a subvariety D̃_Z_1(v)⊂ Fl(2,4,V_8). By considering the flags (U_2⊂ U_4⊂ U_6) as in the proof of the previous statement we obtain a subvariety D_Z_1(v,v^∨)⊂ Fl(2,4,6,V_8) that resolves simultaneously the singularities of D_Z_1(v) and D_Z_1(v^∨). As for the Coble quartic, we get a diagram @-2ex Fl(2,4,6,V_8) Fl(2,4,V_8) D_Z_1(v,v^∨)[dr][dl]@^(->[u] Fl(4,6,V_8) D̃_Z_1(v)[dl]@_(->[ul] @–>[rr] D̃_Z_1(v^∨)[dr]@^(->[ur] G(2,V_8)⊃ D_Z_1(v) @–>[rrrr]^dD D_Z_1(v^∨)⊂ G(6,V_8) The birational map D̃_Z_1(v)D̃_Z_1(v^∨) must be a flop, resolved by two symmetric contractions. Question. Is there a modular interpretation of D_Z_1(v) as for the Coble quartic? And of this diagram? Our framework excludes the hyperelliptic genus three curves, but there should be a very similar story for these curves. In fact, consider a general pencil of quadrics in ^7=(V_8). The eight singular members of the pencil define such a hyperelliptic curve C. It is a special case of the results of <cit.> that the moduli space _C(2,L), for L of odd degree, can be identified with the bi-orthogonal Grassmannian, that is the subvariety of G(2,V_8) parametrizing subspaces that are isotropic with respect to any quadric in the pencil. On the other hand, the even moduli space _C(2) is a double cover of the six-dimensional quadric ^6, branched over a quartic section which is singular along a copy of the Kummer threefold of the curve. One expects this quartic to be of Coble type, in the sense that it should be the unique quartic section of ^6 that is singular along the Kummer of C. It should also be self-dual in a suitable sense, and the whole story should be related to the representation theory of Spin_8. We plan to explore these topics in future work. amsalpha Institut de Mathématiques de Bourgogne, Université de Bourgogne et Franche-Comté, 9 Avenue Alain Savary, 21078 Dijon Cedex, France. Email address: [email protected] Institut Montpelliérain Alexander Grothendieck, Université de Montpellier, Place Eugène Bataillon, 34095 Montpellier Cedex 5, France. Email address: [email protected] Institut de Mathématiques de Bourgogne, Université de Bourgogne et Franche-Comté, 9 Avenue Alain Savary, 21078 Dijon Cedex, France. Email address: [email protected] Institut de Mathématiques de Toulouse, Paul Sabatier University, 118 route de Narbonne, 31062 Toulouse Cedex 9, France. Email address: [email protected]
http://arxiv.org/abs/2307.04123v1
20230709083214
Towards cross-language prosody transfer for dialog
[ "Jonathan E. Avila", "Nigel G. Ward" ]
cs.CL
[ "cs.CL" ]
Speech-to-speech translation systems today do not adequately support use for dialog purposes. In particular, nuances of speaker intent and stance can be lost due to improper prosody transfer. We present an exploration of what needs to be done to overcome this. First, we developed a data collection protocol in which bilingual speakers re-enact utterances from an earlier conversation in their other language, and used this to collect an English-Spanish corpus, so far comprising 1871 matched utterance pairs. Second, we developed a simple prosodic dissimilarity metric based on Euclidean distance over a broad set of prosodic features. We then used these to investigate cross-language prosodic differences, measure the likely utility of three simple baseline models, and identify phenomena which will require more powerful modeling. Our findings should inform future research on cross-language prosody and the design of speech-to-speech translation systems capable of effective prosody transfer. Index Terms: speech-to-speech translation, corpus, prosodic dissimilarity metric, English, Spanish § INTRODUCTION Speech-to-speech translation systems are valuable tools for enabling cross-language communication. While very useful today for short, transactional interactions, they are less so for long-form conversation <cit.>. One reason is that, without proper prosody transfer, translation systems are unable to reliably convey many intents and stances, impeding users' ability to deepen their interpersonal relationships and social inclusion. In dialog, prosody conveys pragmatic functions such as in turn-taking, expressions of attitudes, and negotiating agreement. Regarding prosody, current translation systems generally aim only to produce prosody that sounds natural, but this is not always sufficient. In traditional models, translation is done by a cascade of subsystems — for automatic speech recognition, machine translation, and speech synthesis — and the intermediate representations are just text, with all prosodic information lost. The prospect instead of transferring the additional information provided by the source-language prosody was a motivation for the development of unified, end-to-end models <cit.>. Despite rapid recent advances <cit.>, the ability of such models to perform prosody transfer seems not to have been examined. Rather, current approaches to prosody transfer handle it with specific modules <cit.>. To date, these target only specific functions of prosody, notably its roles in conveying paralinguistic/emotional state, emphasis, and syntactic structure, and target only a few prosodic features, notably F_0, pausing, and word duration. Very recent work has shown that this can significantly improve perceived translation quality <cit.>, but also that these techniques so far only close less than half of the perceived gap between default prosody and the human reference. Clearly, something is still missing. This paper investigates what that might be. While one might hope that the answer could be found in the linguistics literature, published knowledge of how prosody differs across languages focuses mostly on syllable-level, lexical, and syntactic prosody. In particular, there is relatively little work on differences in how prosody conveys pragmatic functions.
Even for English and Spanish, a well-studied pair, our knowledge is sparse beyond a few topics such as turn-taking <cit.>, questions and declaratives <cit.>, and expression of certainty <cit.>. However, these certainly do not exhaust the prosodic meanings important for dialog. Further, these studies have been mostly limited to differences in intonation and duration, leaving out most prosodic features. Accordingly, this paper takes a fresh look, using a corpus-based approach. § PROTOCOL AND CORPUS To investigate prosodic differences in dialog, we need a suitable cross-language corpus. However, corpora for speech-to-speech translation today primarily comprise monologues, derived from readings <cit.>, political discussions <cit.>, or informative talks <cit.>. Those comprising dialogs were derived from television show dubs <cit.>, lectures and press conferences <cit.>, or speech synthesis <cit.>. Speech collected in these settings lacks interactivity, spontaneity, and most of the prosodic variation found in real dialog. We accordingly developed the Dialogs Re-enacted Across Languages (DRAL) protocol. This involves pairs of nonprofessional, bilingual participants. They first have a ten-minute conversation, which we record. These conversations are unscripted, although we sometimes suggest topics, which allows for pragmatic diversity and spontaneous interactions. Depending on their relationship, the participants mostly get to know each other, catch up on recent happenings, and/or share personal experiences. Subsequently, under the direction of a producer, they select an utterance or exchange and closely re-enact it in their other language, which may take several attempts to get right. They then re-enact another utterance. The yield is typically a few dozen matched pairs per one-hour session, with overall good pragmatic diversity, as suggested by Table <ref>. Our design choices and the DRAL corpus are discussed further in our technical report <cit.>. Following this protocol we have so far collected matched EN-ES utterance pairs, from a total of 42 speakers. The latest release, including source recordings and metadata, is available at <https://cs.utep.edu/nigel/dral/>. In the following explorations, we use the first 1139 matched “short” utterances, which each feature a single interlocutor. The average duration is 2.5 seconds. § UTTERANCE PROSODY REPRESENTATION As our aim here is exploratory, we chose to work with simple, explicit, interpretable representations of prosody. We use the Midlevel Prosodic Features Toolkit[<https://github.com/nigelgward/midlevel>], as its features were designed to be robust for dialog data, generally perceptually relevant, and normalized per speaker. From the available features, we selected ten based on previous utility for many tasks for several languages <cit.>, specifically: intensity, lengthening, creakiness, speaking rate, pitch highness, pitch lowness, pitch wideness, pitch narrowness, peak disalignment (mostly late peak), and cepstral peak prominence smoothed (CPPS), the latter an inverse proxy for breathy voice. This rich set of prosodic features supports more comprehensive analyses than most prosody research efforts. To characterize the prosody of an utterance, each base feature is computed over ten non-overlapping windows, together spanning the whole utterance. Thus, each utterance is represented by 100 features. 
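To make the construction concrete, the sketch below assembles such a 100-dimensional vector from per-frame feature tracks. It assumes the ten base features have already been computed frame-by-frame; the mean pooling and the function names are illustrative simplifications rather than the toolkit's actual computation, and the window spans are the ones specified next.

import numpy as np

# Fractional window spans (per the percentages given below).
SPANS = [(0.00, 0.05), (0.05, 0.10), (0.10, 0.20), (0.20, 0.30), (0.30, 0.50),
         (0.50, 0.70), (0.70, 0.80), (0.80, 0.90), (0.90, 0.95), (0.95, 1.00)]

def utterance_vector(frame_features: np.ndarray) -> np.ndarray:
    """Pool a (num_frames, 10) array of per-frame base features into a
    100-dimensional utterance representation (10 features x 10 spans)."""
    n = len(frame_features)
    pooled = []
    for lo, hi in SPANS:
        start, stop = int(lo * n), max(int(hi * n), int(lo * n) + 1)
        pooled.append(frame_features[start:stop].mean(axis=0))
    return np.concatenate(pooled)  # shape (100,)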
The window sizes are proportional to an utterance's duration and span fixed percentages of its duration: 0–5%, 5–10%, 10–20%, 20–30%, 30–50%, 50–70%, 70–80%, 80–90%, 90–95%, 95–100%, as seen in Figure <ref>. This representation is thus not aligned to either syllables or words, but is appropriate for representing the sorts of overall levels and contours that are most often associated with pragmatic functions. Normalization occurs at two steps in the feature computation. The low-level (frame-level) features — pitch, energy, and CPPS — are normalized per track to mitigate individual differences. Subsequently, the mid-level features (peak disalignment, lengthening, etc.) are computed over each specified span for every utterance, and after being computed for all utterances in a track, each is z-normalized. § CROSS-LANGUAGE FEATURE CORRELATIONS For our first glimpse at the EN-ES prosody mapping, we examined the Spearman correlations between the 100 EN prosodic features and the 100 ES prosodic features, across all matched pairs. (We computed Spearman correlations as well within each language for comparison.) Were EN and ES prosodically identical, we would expect each EN feature to correlate perfectly with its ES counterpart. In fact, the correlations were far more modest but always positive and often substantial: more than half the features sharing the base feature and span have correlation ρ≥0.3. Thus, overall, EN and ES prosody is quite similar, and pitch highness is generally the most similar, especially towards the middle of utterances (e.g. 30–50%, ρ=0.59). While some features, such as pitch highness, have much stronger span-for-span correlations, other features, notably speaking rate, lengthening, and CPPS, have correlations that are strong throughout the utterances. For example, speaking rate at every span in an EN utterance correlates with speaking rate at every span in the corresponding ES utterance. These findings are compatible with the idea that English and Spanish prosody is overall roughly similar, but that the locations of local prosodic events can vary, likely due to differences in word order and lexical accents. However, some correlations were much weaker. The lowest cross-language correlations for the same features were for creakiness and peak disalignment, suggesting that these are likely to have different functions in the two languages. There were also many off-diagonal correlations. Most of these were unsurprising, such as the anticorrelations between the speaking rate and lengthening features, but not all. For example, intensity at the end of an EN utterance correlates with CPPS throughout an ES utterance (EN 90–95% vs. ES 5–20%, 30–70%, and 80–100%, ρ≥0.3), while no such relationship was found within either language. Examination of the ten pairs that most closely reflect this pattern (EN high near final intensity and ES high CPPS), showed that in half the speaker is preparing a follow-up explanation. Thus, we have identified a pragmatic function that seems to be prosodically marked differently in EN and ES. Figure <ref> shows the values for these two features for one such pair. § PROSODIC DISSIMILARITY METRIC To judge the quality of prosody transfer, we need a measure of how far the predicted prosody diverges from the observed prosody in the human reference translation. If there existed a synthesizer capable of realizing arbitrary prosodic specifications, we could just use it and then use human perceptions of the match between the synthesized and reference speech. 
However, no existing synthesizer is capable of this, especially for the rich set of prosodic features we are investigating here. Metrics for estimating similarity from prosodic feature representations do exist, such as <cit.> and <cit.>, but these again are limited in the prosodic features considered. Accordingly, we propose a simple new metric. This estimates the dissimilarity of two utterances as the Euclidean distance between their respective prosodic representations, as computed in Section <ref>, with all features given equal weight. We do not expect this metric to accurately match human perceptions, but we can hope that it might be useful as a first-pass metric for judging prosodic dissimilarity. To gauge this, we compared its outputs to our perceptions of a few dozen within-language utterance pairs. To structure this process, we wrote software to randomly select an utterance (the “anchor”) from the data and retrieve the four most similar utterances and four most dissimilar utterances according to the metric. Ideally, perhaps, we would have made holistic judgments of the degree of prosodic similarity between each sample-anchor pair, but, probably like most people, we lack this ability. Instead, we repeatedly listened and identified whatever similarities and dissimilarities we could note, taking 2 or 3 minutes per pair to do so. The most salient of these were always at the level of pragmatic function, rather than prosodic features, but we considered this unproblematic, as the ultimate aim of prosody transfer is pragmatic fidelity, not prosodic fidelity. We carried out this process for seven anchors, with eight comparison utterances each, all from the English half of the data. We found, first, that the metric captures many aspects of pragmatic similarity, including speaker confidence, revisiting unpleasant experiences, discussing plans, describing sequences of events, and describing personal feelings, all of which were generally also prosodically similar. Table <ref> shows one set of utterances to illustrate. The prosody of this anchor utterance suggested that the topic is personal feelings: a slow then fast then slow speaking rate, a pause, and occasional use of creaky voice. Each of the utterances rated similar by the metric shared these qualities, albeit to varying degrees. Second, we noted that the similarities found were not generally lexically governed. While some words and syntactic structures have characteristic prosody, and some of the pairs considered similar by the metric shared lexical content, such as music in the fourth and fifth examples in Table <ref>, generally prosodic similarity seemed to be orthogonal to lexical similarity. Third, we noted that the metric does not always appear to match perceptions. To try to understand its limitations and what needs improving, we examined examples where our judgments diverged most from the metric's estimates, namely four which the metric judged very similar but sounded rather different to us, including EN_025_1 in Table <ref>, and two which we felt had significant similarities but which the metric judged very different, including EN_024_1 in Table <ref>. Of these, two pairs had very salient nasality differences, which our model does not capture, and sounded very different in terms of pragmatic function, specifically relating to the presumption of common ground. For three pairs the problem seemed to be differences in syllable-aligned pitch and energy contours, which are not directly represented by our features.
However, for 50 of the 56 pairs examined, our judgments aligned with those of the model. Thus, while the metric needs improving, overall we deemed it likely to be useful. We consider these findings also to be evidence that our prosody representation is meaningful. Accordingly, below we rely on both for evaluating the quality of prosody transfer, as a way to obtain insight. § COMPARISON OF MODELING STRATEGIES Our corpus and metric enable the evaluation of different models of the cross-language prosody mappings. The task is, given the prosody of an utterance in the source language, to predict the prosody of its translation in the target language. The error is the dissimilarity between the inferred prosody and the prosody of the human re-enactment. We here report the results for models in both directions, EN→ES and ES→EN, using the partition described in Table <ref>. The first model is intended to represent the best that can be achieved with a typical cascaded speech-to-speech model, with a synthesizer that operates in ignorance of the input-utterance prosody. Our implementation relies on the lookup of the human-generated translation in the target language, to avoid the impact of ASR or MT errors. We use Whisper <cit.> to transcribe this to a word sequence with punctuation and then use Coqui TTS[<https://github.com/coqui-ai/TTS>] to synthesize speech from that transcription. To ensure a fair comparison, utterances incorrectly transcribed were excluded from the data. Table <ref> reflects the 252 excluded utterances. To judge the quality of each output, we compute a representation of the prosody of the synthesized speech using the method of Section <ref>. The second model predicts the prosody of the translation to be identical to the prosody of the input: it trivially outputs the same representation. This “naive” model embodies a strategy of directly transferring the input prosody. The third model is trained by linear regression. Thus, each feature of the target prosody representation is predicted as a linear function of the 100 features of the input utterance. Table <ref> shows the three models' overall average error. The synthesizer baseline is outperformed by the naive baseline, suggesting that keeping the same prosody in translation may be a reasonable basic strategy. The naive baseline is in turn outperformed by the linear regression model, suggesting that even a simple model can learn some aspects of the mapping between English and Spanish prosody. While our simple linear model shows a benefit, its prediction error is still very high. We think the likely factors include not only the existence of mappings too complex for a linear model, but also the small size of the training data, the existence of free variation implying a permissible margin of error for our metric, unmodeled dependencies of target-language prosody on the source-utterance context and its lexical content, and speaker-specific prosody behavior tendencies. § QUALITATIVE ANALYSIS To better understand the challenges of cross-language prosody modeling, we examined examples where the various models did well or poorly. First, we examined the 16 examples in each direction whose synthesized prosody was least similar to the human-produced target. 
The most common and salient differences were: failure to lengthen vowels and vary the speaking rate for utterances where speakers are thinking or expressing uncertainty or hesitation; failure to change pitch at turn ends; and generally sounding read or rehearsed and thus unnatural for conversational speech. Next, we examined the 16 pairs for which the naive model did worst, that is, the cases where the English and Spanish prosody diverged most. Often there were salient differences, in a few common patterns, such as ES utterances being creakier than the English, EN but not ES utterances ending with rising pitch, and EN utterances being breathier in some regions. The latter two may reflect the common use of uptalk in English, that is to say, the use of breathy voice and rising pitch to establish common ground regarding a referent <cit.>, a pattern rare in the Spanish dialect of our corpus. In other cases there were no highly salient differences; presumably, these had multiple smaller differences which added up to a big difference according to the metric. Next, we examined the examples where the linear regression model provided the most improvement relative to the naive baseline; unsurprisingly these were often cases where it corrected for the divergences mentioned above. Finally, we examined the highest-magnitude coefficients of the linear model. Most were unsurprising and reflected correlations noted above. However, among the top three, there was a –.32 coefficient relating EN lengthening over 5%–10% to ES CPPS over 0%–5%. This may reflect the tendency for EN speakers to start turns with fast speech (low lengthening) but not ES speakers <cit.>, who perhaps tend instead to start turns with more harmonic (higher CPPS) speech. § IMPLICATIONS AND FUTURE WORK As we expected, these investigations indicate that effective cross-language transfer will require attention to prosodic features beyond pitch and duration. These include at least breathy voice, creaky voice, and intensity. We also found that the prosody of some pragmatic functions, as they occur in dialog, differs in previously unsuspected ways across languages. These include at least grounding, getting personal, leading into something, and taking the turn. These findings suggest that well-designed prosody transfer techniques will be important for effective speech-to-speech translation. Finally, our results indicate that doing so has the potential to convey many more pragmatic functions and intents than have previously been managed. These investigations relied on a small corpus, a non-comprehensive prosody representation, and a crude metric. The fact that these enabled us to obtain interesting findings is evidence for their utility. At the same time, all of these need extensions and improvements, and doing so would enable future work to produce a clearer and broader picture of what prosody is conveying in the two languages, how it does it, and what the differences are. In addition to such basic research, we envisage our findings informing the design of speech-to-speech translation systems, potentially via two paths. In one path, for end-to-end models, an improved version of our dissimilarity metric, properly extended and tuned to model human perceptions, could serve as the loss function for training.
In the other path, for cascaded models, our analysis techniques could inform the design of a specific prosody-transfer module, and inspire the development of synthesizers capable of following a rich prosody specification and thereby conveying a wide range of pragmatic functions. Given the unavoidable high cost and consequent low volume of matched conversation data, either approach will most likely need to exploit per-language or joint self-supervised training techniques. We share all our data, code, and observations at our public repository: <https://github.com/joneavila/DRAL>. § ACKNOWLEDGEMENTS We thank Emilia Rivas for assistance with the data collection, Ann Lee, Benjamin Peloquin, and Justine Kao for discussions, and UTEP URI for internal funding.
http://arxiv.org/abs/2307.05186v1
20230711114756
A regularized Interior Point Method for sparse Optimal Transport on Graphs
[ "Stefano Cipolla", "Jacek Gondzio", "Filippo Zanetti" ]
math.OC
[ "math.OC" ]
A regularized Interior Point Method for sparse Optimal Transport on Graphs Stefano Cipolla[School of Mathematics, University of Edinburgh, Edinburgh, UK. mailto:[email protected]@exseed.ed.ac.uk]Jacek Gondzio[School of Mathematics, University of Edinburgh, Edinburgh, UK. mailto:[email protected]@ed.ac.uk] Filippo Zanetti[School of Mathematics, University of Edinburgh, Edinburgh, UK. mailto:[email protected]@sms.ed.ac.uk] In this work, the authors address the Optimal Transport (OT) problem on graphs using a proximal stabilized Interior Point Method (IPM). In particular, strongly leveraging the induced primal-dual regularization, the authors propose to solve large scale OT problems on sparse graphs using a bespoke IPM algorithm able to suitably exploit primal-dual regularization in order to enforce scalability. Indeed, the authors prove that the introduction of the regularization allows the use of sparsified versions of the normal Newton equations to inexpensively generate IPM search directions. A detailed theoretical analysis is carried out showing the polynomial convergence of the inner algorithm in the proposed computational framework. Moreover, the presented numerical results showcase the efficiency and robustness of the proposed approach when compared to network simplex solvers. Keywords: Convex programming, primal-dual regularized interior point methods, optimal transport on graphs, polynomial complexity, inexact interior point methods. § INTRODUCTION The Optimal Transport (OT) problem requires moving a certain distribution of mass from one configuration into another, minimizing the total cost required for the operation. It has been studied extensively, from the early work of Kantorovich <cit.>, to the development of ever faster algorithms for various OT formulations, e.g. <cit.>. Recently, there has been a growing interest in using Interior Point Methods (IPMs) <cit.> in applications that involve optimal transport, in particular for very large scale instances of such problems, see e.g. <cit.>. A particularly interesting problem is the optimal transport over sparse graphs: in this case, the transport of mass is only possible along a specific subset of connections, which is noticeably smaller than the full list of edges of a fully connected bipartite graph, as would be the case in a standard discrete OT formulation. The use of OT and the Wasserstein distance (i.e. the optimal objective function of the OT problem) is becoming more and more common in many practical applications, e.g. neural networks <cit.>, image processing <cit.>, inverse problems <cit.> and in the analysis of large complex networks <cit.>. The specific formulation of the problem is the following: suppose that G = (V,E) is a connected graph with directed edges E ⊂ V × V and weights 𝐜∈ℝ_+^|E|. Define the incidence matrix A ∈{-1,0,1}^|V| × |E| as A_ve:= -1, if e=(v,w) for some w ∈ V 1, if e=(w,v) for some w ∈ V 0, otherwise. We consider the optimal transport problem in the Beckmann form <cit.>: 𝒲_1(ρ_0,ρ_1):=min_𝐱∈ℝ^|E| ∑_e ∈ E c_e𝐱_e s.t. A𝐱=ρ_1-ρ_0 𝐱≥ 0 , where ρ_0, ρ_1∈{ρ∈ℝ^|V| : 1^Tρ=1 and ρ≥ 0 }=: Prob(V).
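To make the Beckmann formulation above concrete, the following sketch (ours, not the authors' code) builds the node-arc incidence matrix of a toy directed graph and solves the resulting linear program with an off-the-shelf LP solver; the graph, weights, and distributions are arbitrary illustrative choices.

```python
import numpy as np
from scipy.optimize import linprog

# A small directed graph on 4 nodes: edges (tail, head) with unit weights.
edges = [(0, 1), (1, 2), (2, 3), (0, 2), (1, 3)]
m, n = 4, len(edges)          # |V| = m, |E| = n

# Node-arc incidence matrix: A[v, e] = -1 if e leaves v, +1 if e enters v.
A = np.zeros((m, n))
for e, (v, w) in enumerate(edges):
    A[v, e] = -1.0
    A[w, e] = 1.0

c = np.ones(n)                         # edge weights c_e
rho0 = np.array([0.7, 0.3, 0.0, 0.0])  # source distribution
rho1 = np.array([0.0, 0.0, 0.4, 0.6])  # target distribution

# Beckmann form: minimize c^T x  subject to  A x = rho1 - rho0,  x >= 0.
res = linprog(c, A_eq=A, b_eq=rho1 - rho0, bounds=(0, None), method="highs")
print("W_1(rho0, rho1) =", res.fun)   # optimal transport cost
print("edge flows      =", res.x)
```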
In the following we will define |E|:=n and |V|:=m. OT on graphs has been recently studied in <cit.> and, in this formulation, it is similar to the more general minimum cost flow problem on networks <cit.>, which has also seen extensive use of IPMs, e.g. <cit.>. Sparse graphs have on average very few edges per node, which can lead to nearly disconnected regions and seriously limit the possible paths where mass can be moved. As a result, finding a solution to the optimal transport problem on a sparse graph requires more sophisticated algorithms and may be more computationally challenging compared to solving the same problem on a denser graph. In particular, first order methods like the network simplex may struggle and move slowly towards optimality, due to the limited number of edges available, while an interior point method manages to identify quickly the subset of basic variables (i.e. the subset of edges with non-zero flow) and converges faster. In this work, the authors address the efficient solution of the optimal transport problem (<ref>) considering the Proximal-Stabilized Interior Point framework (PS-IPM), recently introduced and analysed in <cit.>. As originally observed in <cit.>, when IPMs are used to solve the minimum cost flow problem on networks, the normal form of the related Newton systems is structured as a Laplacian matrix of the graph (defined as the difference of the diagonal matrix of the vertex degrees minus the adjacency matrix) and the iterates of IPM determine the associate weights of this matrix, see also eq. (<ref>) . In <cit.>, this observation was exploited to solve such Laplacian linear systems (which are, in turn, particular instances of symmetric M-matrices) through the fast specialized solution of O(ln m) linear systems involving symmetric diagonally dominant matrices <cit.>. We refer the interested reader to <cit.> for a survey on fast Laplacian solvers and to <cit.> for information concerning the distribution of Laplacian's singular values. §.§ Contribution and organization This work focuses on the efficient solution of large scale OT problems on sparse graphs using a bespoke IPM algorithm able to suitably exploit primal-dual regularization in order to enforce scalability. The organization of the work and its main contributions can be summarized as follows: * In Section <ref>, the authors briefly recall the proximal stabilized framework responsible for the primal-dual regularization of the IPMs here considered. * In Section <ref>, the authors provide a detailed convergence analysis of the inexact infeasible primal-dual regularized IPM, when a proximal stabilization procedure is used. Moreover, they prove its polynomial complexity. * In Section <ref>, the authors prove that the normal form of the related Newton system is naturally structured as a shifted Laplacian matrix characterized by a strict diagonal dominance. Such feature consistently simplifies the factorization of the normal equations and allows the use of standard libraries for the solution of the corresponding linear systems. On the other hand, such factorizations could incur a significant fill-in even when the original graph is sparse, hence limiting the applicability of the proposed approach for the solution of large scale problems. * In Section <ref>, to overcome potential scalability issues related to the fill-in mentioned above, the authors propose to generate IPM search directions using sparsified versions of the IPM normal equations. 
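The observation that the IPM normal matrix is a weighted graph Laplacian can be checked numerically on a toy graph; the sketch below (our own illustration, with arbitrary positive weights standing in for the IPM-dependent ones) verifies the unweighted identity and the basic Laplacian properties of the weighted product.

```python
import numpy as np

# Incidence matrix of a small graph (edge orientation is irrelevant for A A^T).
edges = [(0, 1), (1, 2), (2, 3), (0, 2), (1, 3)]
m, n = 4, len(edges)
A = np.zeros((m, n))
for e, (v, w) in enumerate(edges):
    A[v, e], A[w, e] = -1.0, 1.0

# Unweighted case: A A^T equals the degree matrix minus the adjacency matrix.
Adj = np.zeros((m, m))
for v, w in edges:
    Adj[v, w] = Adj[w, v] = 1.0
Deg = np.diag(Adj.sum(axis=1))
assert np.allclose(A @ A.T, Deg - Adj)

# With positive edge weights (in the IPM these are set by the current iterate),
# A W A^T is a weighted Laplacian: symmetric with zero row sums.
W = np.diag(np.random.default_rng(1).uniform(0.1, 2.0, size=n))
L_w = A @ W @ A.T
assert np.allclose(L_w, L_w.T)
assert np.allclose(L_w.sum(axis=1), 0.0)
```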
In particular, the original normal matrix takes the form A(Θ^-1+ρ I)^-1A^T+δ I, where ρ,δ are regularization parameters and Θ is a diagonal matrix related to the IPM barrier parameter μ; the authors propose to use a perturbed normal matrix, where the entries of (Θ^-1+ρ I)^-1 that are sufficiently small (when compared to μ) are set to zero (completely ignoring the corresponding columns of matrix A). This strategy reduces the time required to assemble and solve the normal equations systems, providing a fundamental advantage to the algorithm. The resulting sparsified linear systems are solved either using a Cholesky factorization (if that displays only negligible fill-in) or using the conjugate gradient method and employing a simple and inexpensive incomplete Cholesky preconditioner. In both these cases either the complete or the incomplete Cholesky factorization remains very sparse, and this translates into an outstanding efficiency of the proposed method. Moreover, the authors are able to interpret the Newton directions generated using sparsified Newton matrices as inexact Newton directions. Relying on the convergence theory developed in Section <ref>, the authors are able to prove that, under suitable choice of the sparsification parameters, the above described approach gives rise to a polynomially convergent algorithm. * In Section <ref>, the authors present experimental results which demonstrate the efficiency and robustness of the proposed approach. Extensive numerical experiments, involving very large and sparse graphs coming from public domain random generators as well as from real world applications, show that, for sufficiently large problems, the approach presented in this work consistently outperforms, in terms of computational time, the Lemon network simplex implementation <cit.>, one of the state-of-the-art solvers available for network problems. §.§ Notation In the paper, vectors are indicated with bold letters. · indicates the Euclidean norm. I represents the identity matrix and 𝐞 the vector of all ones. Given a vertex v of a graph G, we denote as deg(v) its degree, i.e. the number of edges that are incident to v. Concerning the variables inside the algorithm, we use a subscript k to indicate the external proximal iteration and a superscript j to indicate the internal IPM iteration. Given a sequence {μ^j}_j ∈ℕ and a continuos function f, the big-O notation O(·) is used as follows: {u^j}_j ∈ℕ∈ O(f(μ^j)) iff ∃ C>0 s.t. u^j ≤ C f(μ^j) for all j ∈ℕ. § COMPUTATIONAL FRAMEWORK §.§ Proximal-Stabilized Interior Point Method Let us consider the following primal-dual formulation of a Linear Program (LP): min_𝐱∈ℝ^n 𝐜^T𝐱 s.t. A𝐱= 𝐛 𝐱≥ 0 max_𝐬∈ℝ^n, 𝐲∈ℝ^m 𝐛^T𝐲 s.t. 𝐜-A^T𝐲-𝐬=0 𝐬≥ 0 where A ∈ℝ^m × n with m ≤ n is not required to have full rank. Notice that problem (<ref>) is indeed formulated in this way. We solve this problem using PS-IPM <cit.>, which is a proximal-stabilized version of classic Interior Point Method. Broadly speaking, PS-IPM resorts to the Proximal Point Method (PPM) <cit.> to produce primal-dual regularized forms of problem (<ref>). Indeed, given an approximation (𝐱_k,𝐲_k) of the solution of such problem, PS-IPM uses interior point methods to produce the next PPM step (𝐱_k+1,𝐲_k+1), which, in turn, represents a better approximation of the solution of problem (<ref>). In this regard, the problem that needs to be solved at every PPM step takes the form min_𝐱∈ℝ^n 𝐲∈ℝ^m 𝐜^T𝐱+ρ/2𝐱-𝐱_k^2 +δ/2𝐲^2 s.t. A𝐱+δ(𝐲-𝐲_k)= 𝐛 𝐱≥ 0, max_𝐱, 𝐬∈ℝ^n 𝐲∈ℝ^m 𝐲^T𝐛-ρ/2𝐱^2-δ/2𝐲-𝐲_k^2 s.t. 
ρ(𝐱- 𝐱_k)-A^T𝐲 - 𝐬+𝐜 =0 𝐬≥ 0 . PPM(k) Solution of problem (<ref>) Using standard duality theory, we say that (𝐱_k^*,𝐲_k^*,𝐬_k^*) is a solution of problem (<ref>) if the following identities hold A𝐱_k^* +δ(𝐲_k^* -𝐲_k) - 𝐛 =0 ρ(𝐱 - 𝐱_k)-A^T𝐲_k^* - 𝐬 +𝐜=0 (𝐱_k^*)^T𝐬_k^*=0 and (𝐱_k^*,𝐬_k^*) ≥ 0 More in particular, the PS-IPM here considered uses two nested cycles to solve problem (<ref>). The outer loop uses an inexact proximal point method <cit.>, as shown in Algorithm <ref>: the current approximate solution (𝐱_k,𝐲_k) is used to regularize the LP problem, which is then solved using an IPM to find the next approximate solution (𝐱_k+1,𝐲_k+1) ≈ (𝐱^*_k,𝐲^*_k). And indeed, at the inner loop level, an inexact infeasible interior point method is used to solve the PPM sub-problems, see Algorithm <ref>. Notice that both methods are inexact: the outer cycle is inexact because the sub-problems are solved approximately by an IPM; the IPM is inexact because the Newton systems are also solved inexactly (see Section <ref> for more details). Notice also that the IPM is referred to as infeasible because the intermediate iterates are not required to be inside the feasible region. We also call the inner loop regularized, because it is a primal-dual regularized version of the original LP (<ref>). Regularization in interior point methods was originally introduced in <cit.> and extensively used in <cit.>, as a tool to stabilize and improve the linear algebra routines needed for their efficient implementation. In this work and in <cit.>, the regularization is introduced as a result of the application of the PPM at the outer cycle level. To summarize, in the following we use three acronyms: PPM refers to the outer cycle; IPM refers to the inner cycle; PS-IPM refers to the overall procedure, combining PPM and IPM. Concerning the stopping criteria, we finally highlight that Algorithm <ref> is stopped based on the criterion (<ref>). Algorithm <ref> instead, is stopped according to the accuracy that is required for the solution of current sub-problem and based on the following natural residual, see <cit.>, of problem (<ref>): 𝐫_k(x,𝐲):=[ 𝐱; 𝐲 ] - Π_D ( [ 𝐱; 𝐲 ]- [ ρ ( 𝐱-𝐱_k) +𝐜-A^T𝐲; A𝐱-𝐛+ δ (𝐲-𝐲_k) ] ), where D:=ℝ_≥ 0^n×ℝ^m and where Π_D is the corresponding projection operator. Moreover, it is easy to verify that (𝐱_k^*,𝐲_k^*,𝐬_k^*) is a solution of problem (<ref>) if and only if 𝐫_k(𝐱_k^*,𝐲_k^*)=0, <cit.>. §.§ Interior point method We now focus on the inner cycle and give a brief description of the IPM used to solve problem (<ref>). To this aim, we introduce the following Lagrangian function which uses a logarithmic barrier to take into account the inequality constraints L_k(𝐱, 𝐲)= 1/2[𝐱^T, 𝐲^T] [ ρ I 0; 0 δ I ][ 𝐱; 𝐲 ] +[𝐜^T- ρ𝐱_k^T, 0 ][ 𝐱; 𝐲 ] -𝐲^T(A 𝐱 + δ (𝐲-𝐲_k) -𝐛) - μ∑_i =1^nln (x_i). The KKT conditions that arise from the gradients of the Lagrangian (<ref>) are ∇_𝐱L_k(𝐱,𝐲)= ρ𝐱-A^T𝐲+𝐜 -ρ𝐱_k - [ μ/x_1; ⋮; μ/x_n ] =0 ; -∇_𝐲L_k(𝐱, 𝐲) = (A𝐱+δ (𝐲-𝐲_k) -𝐛)=0 . Setting s_i = μ/x_i for i ∈{1, …, n}, we consider the following function F_k^μ, σ(𝐱,𝐲, 𝐬):=[ ρ (𝐱-𝐱_k)-A^T 𝐲 - 𝐬 +𝐜; A𝐱+δ (𝐲-𝐲_k) -𝐛; SX𝐞-σμ𝐞 ], where σ∈ (0,1) is the barrier reduction parameter, S=diag(𝐬) and X=diag(𝐱). A primal–dual interior point method applied to problem (<ref>) relies on the use of Newton iterations to solve a nonlinear problem of the form F_k^μ, σ(𝐱,𝐲, 𝐬)=0, 𝐱, 𝐬>0. 
A Newton step for (<ref>) from the current iterate (𝐱,𝐲, 𝐬) is obtained by solving the system [ ρ I -A^T -I; A δ I 0; S 0 X ][ Δ𝐱; Δ𝐲; Δ𝐬 ] =-F_k^μ,σ(𝐱,𝐲,𝐬)=:[ ξ_d; ξ_p; ξ_μ,σ ], i.e., the following relations hold: ρΔ𝐱 - A^T Δ𝐲 - Δ𝐬 = ξ_d AΔ𝐱 + δΔ𝐲 = ξ_p S Δ𝐱 + X Δ𝐬 = ξ_μ,σ, where (Δ𝐱,Δ𝐲,Δ𝐬) is the Newton direction to be taken at each iteration (with an appropriate stepsize). The solution of (<ref>) is delivered by the following computational procedure (A(Θ^-1+ρ I)^-1A^T+δ I)Δ𝐲 = ξ_p - A(Θ^-1+ρ I)^-1(X^-1ξ_μ,σ + ξ_d ) (Θ^-1+ρ I) Δ𝐱 = A^T Δ𝐲 + ξ_d + X^-1ξ_μ,σ X Δ𝐬 = (ξ_μ,σ - S Δ𝐱) . where Θ:=XS^-1. Before continuing let us give basic definitions used in the remainder of this work. Normal Matrix: S_ρ, δ:=A(Θ^-1+ρ I)^-1A^T+δ I. Neighbourhood of the infeasible central path: 𝒩_k(γ̅,γ,γ_p,γ_d):= {(𝐱,𝐲,s) ∈ℝ^n_>0×ℝ^m ×ℝ^n_>0 : γ̅𝐱^T𝐬/n ≥ x_is_i ≥γ𝐱^T𝐬/n for i=1,…,n; 𝐱^T𝐬≥γ_p A𝐱+δ(𝐲-𝐲_k)-𝐛; 𝐱^T𝐬≥γ_dρ(𝐱-𝐱_k)-A^T𝐲-𝐬 + 𝐜}, where γ̅ > 1 >γ> 0 and (γ_p,γ_d)>0. The neighbourhood here considered is standard in the analysis of infeasible IPMs <cit.>: it requires the iterates to be close enough to the central path (according to parameters γ̅ and γ), and the primal-dual constraint violations to be reduced at the same rate as the complementarity product 𝐱^T𝐬. Within this neighbourhood, 𝐱^T𝐬→0 guarantees convergence to a primal-dual optimal solution. Moreover, we consider an inexact solution of the linear system (<ref>): S_ρ, δΔ𝐲 = ξ̅_p + ζ where ζ≤ C_inexact 𝐱^T𝐬, where C_inexact∈(0,1) and we defined ξ̅_p:= ξ_p - A(Θ^-1+ρ I)^-1(X^-1ξ_μ,σ + ξ_d ). It is important to note that the above Assumption <ref> is a non-standard requirement in inexact Newton methods <cit.>. Its particular form is motivated by the use of IPM and the needs of the complexity analysis in Section <ref>. It is chosen in agreement with the definition of the infeasible neighbourhood (<ref>) of the central path of the sub-problem considered. Using (<ref>) and (<ref>) in (<ref>), we have AΔ𝐱 + δΔ𝐲 - ξ_p = S_ρ, δΔ𝐲 - ξ̅_p = ζ, whereas equations (<ref>) and (<ref>) are satisfied exactly. Therefore the inexact Newton directions computed according to (<ref>) satisfy: [ ρ I -A^T -I; A δ I 0; S 0 X ][ Δ𝐱; Δ𝐲; Δ𝐬 ] =[ ξ_d; ξ_p; ξ_μ,σ ]+[ 0; ζ; 0 ]. Define [ 𝐱^j_k(α); 𝐲^j_k(α); 𝐬^j_k(α) ]:=[ 𝐱_k^j; 𝐲_k^j; 𝐬_k^j ]+[ αΔ𝐱_k^j; αΔ𝐲_k^j; αΔ𝐬_k^j ], i.e. 𝐱^j_k(α) is the point reached from 𝐱^j_k after a step of length α along the Newton direction. Notice that, after selecting the correct stepsize α^j_k, we define 𝐱^j+1_k:=𝐱^j_k(α^j_k). We report in Algorithm <ref> a prototype IPM scheme for the solution of problem (<ref>). The fundamental steps involved in the algorithm are: computing the Newton direction by solving (<ref>) with a level of inexactness that satisfies (<ref>), see Line <ref>; finding the largest stepsize that guarantees to remain inside the neighbourhood and to sufficiently reduce the complementarity products, see Line <ref>; preparing the quantities to be used in the next iteration, see Lines <ref>-<ref>. We study the convergence of Algorithm <ref> in Section <ref>. Concerning the notation, recall that the subscript k is related to the iteration count of the outer Algorithm <ref> (PPM) whereas the superscript j is related to the iteration of the inner Algorithm <ref> (IPM). To avoid over-complicating the notation, notice that in the following we use ξ_p,k^j and ξ_d,k^j instead of (ξ_p)_k^j and (ξ_d)_k^j. 
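Before turning to the analysis, the following self-contained sketch (ours; random dense data stands in for an actual IPM iterate and for the incidence matrix) traces the computational procedure above: assemble the regularized normal equations, solve for Δ𝐲, and back-substitute for Δ𝐱 and Δ𝐬. In the algorithm proper this system is sparse and may be solved inexactly; the dense exact solve here is only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 6
A = rng.standard_normal((m, n))      # stand-in for the node-arc incidence matrix
rho, delta, sigma = 1e-4, 1e-6, 0.5  # primal/dual regularization, centering

# A strictly positive interior iterate and the current residuals.
x, s = rng.uniform(0.5, 2.0, n), rng.uniform(0.5, 2.0, n)
mu = x @ s / n
xi_d = rng.standard_normal(n)        # dual residual
xi_p = rng.standard_normal(m)        # primal residual
xi_mu = sigma * mu - x * s           # complementarity residual, componentwise

theta_inv = s / x                    # Theta^{-1} = X^{-1} S
d = 1.0 / (theta_inv + rho)          # diagonal of (Theta^{-1} + rho I)^{-1}

# Normal equations: (A D A^T + delta I) dy = xi_p - A D (X^{-1} xi_mu + xi_d).
S_rd = A @ (d[:, None] * A.T) + delta * np.eye(m)
rhs = xi_p - A @ (d * (xi_mu / x + xi_d))
dy = np.linalg.solve(S_rd, rhs)

# Back-substitution for dx and ds.
dx = d * (A.T @ dy + xi_d + xi_mu / x)
ds = (xi_mu - s * dx) / x

# Sanity check: the three Newton equations hold (here solved exactly).
assert np.allclose(rho * dx - A.T @ dy - ds, xi_d)
assert np.allclose(A @ dx + delta * dy, xi_p)
assert np.allclose(s * dx + x * ds, xi_mu)
```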
§ CONVERGENCE AND COMPLEXITY In this section, we show that the particular inexact IPM in Algorithm <ref>, used as inner solver in Algorithm <ref>, is convergent. Moreover, at the end of the present section, we show that such IPM converges to an ε-accurate solution in a polynomial number of iterations. Our implant of the proof is inspired by the works <cit.> but consistently differs from the hypothesis and techniques used there. The PPM iteration counter k is fixed through this section and, for the sake of readability, is used only when writing the fixed PPM iteration (𝐱_k,𝐲_k,𝐬_k) and not in the context of the IPM iterations (𝐱^j_k,𝐲^j_k,𝐬^j_k). We start from analysing the progress made in a single Newton iteration. Using (<ref>), (<ref>), and (<ref>) we obtain ρ(𝐱^j(α)-𝐱_ k)-A^T𝐲^j(α)-𝐬^j(α) + 𝐜 = (ρ(𝐱^j-𝐱_ k)-A^T𝐲^j-𝐬^j + 𝐜) +α (ρΔ𝐱^j - A^T Δ𝐲^j - Δ𝐬^j ) = (1-α)(ρ(𝐱^j-𝐱_ k)-A^T𝐲^j-𝐬^j + 𝐜), whereas, using (<ref>) and (<ref>) we have A𝐱^j(α) +δ(𝐲^j(α) -𝐲_𝐤)-𝐛 =(A𝐱^j +δ(𝐲^j - 𝐲_ 𝐤)-𝐛) + α (A Δ𝐱^j+δΔ𝐲^j) = (1- α)(A𝐱^j +δ(𝐲^j - 𝐲_ 𝐤)-𝐛) + αζ^j. The last block equation in (<ref>) yields (𝐬^j)^TΔ𝐱^j+(𝐱^j)^TΔ𝐬^j =- (𝐱^j)^T𝐬^j + σ n μ^j = (σ-1)(𝐱^j)^T𝐬^j and s_i Δ x_i + x_i Δ s_i = σ𝐱^T𝐬/n - x_is_i . Finally, using (<ref>), we state the following identity (𝐱^j+αΔ𝐱^j)^T(𝐬^j+αΔ𝐬^j)= (𝐱^j)^T𝐬^j(1+α(σ-1))+α^2 (Δ𝐱^j)^TΔ𝐬^j. With the next Theorem <ref> we prove that Algorithm <ref> is well-defined: at each iteration, there exist a non-empty interval of values for the stepsize α such that the next iterate still lies in the neighbourhood 𝒩_k(γ̅,γ,γ_p,γ_d) and such that the complementarity product (𝐱_k^j)^T𝐬_k^j is reduced by a sufficient amount, as required in (<ref>). Let us suppose that (𝐱^j,𝐲^j,𝐬^j) ∈𝒩_k(γ̅,γ,γ_p,γ_d) s.t. (𝐱^j)^T𝐬^j>0 is given. If the stopping conditions at Line <ref> of Algorithm <ref> are not satisfied, then there exists 0<α̂^j < α^*,j such that conditions (<ref>) are satisfied for all α∈ [0,α̂^j]. In this proof we omit also the IPM iterate counter j, i.e. (𝐱^j,𝐲^j,𝐬^j)≡ (𝐱,𝐲,𝐬). Let us define the following functions, for all i=1,…,n f_i(α):= (x_i+αΔ x_i)(s_i+αΔ s_i)- γ(𝐱+αΔ𝐱)^T(𝐬+αΔ𝐬)/n, f̅_i(α):= γ̅(𝐱+αΔ𝐱)^T(𝐬+αΔ𝐬)/n-(x_i+αΔ x_i)(s_i+αΔ s_i), h(α):= (1 -(1-σ̅)α) 𝐱^T𝐬 - (𝐱+αΔ𝐱)^T(𝐬+αΔ𝐬), g_d(α):= (𝐱+αΔ𝐱)^T(𝐬+αΔ𝐬) - γ_d ρ(𝐱+αΔ𝐱 - 𝐱_k) - A^T(𝐲+αΔ𝐲) -(𝐬+αΔ𝐬)+𝐜, g_p(α):= (𝐱+αΔ𝐱)^T(𝐬+αΔ𝐬) - γ_p A(𝐱+αΔ𝐱)+ δ (𝐲+αΔ𝐲- 𝐲_k) -𝐛 . Using (<ref>) in the expressions of g_d(α) we have g_d(α) =(𝐱+αΔ𝐱)^T(𝐬+αΔ𝐬) -γ_d (1-α) ρ(𝐱-𝐱_k)-A^T𝐲-𝐬 + 𝐜, whereas using (<ref>) in the expressions of g_p(α) we have g_p(α) ≥(𝐱+αΔ𝐱)^T(𝐬+αΔ𝐬) - γ_p ((1-α) A𝐱 +δ(𝐲 - 𝐲_k)-𝐛 + αζ). We start proving that there exists α̂^j>0 such that f_i(α)≥ 0, f̅_i(α)≥ 0, h(α)≥ 0, g_p(α)≥ 0, g_d(α)≥ 0 for all i = 1, …, n and for all α∈ [0, α̂^j]. In the following we will use exensively the identity (<ref>). We have f_i(α)= (1-α)(x_is_i-γ𝐱^T𝐬/n)_≥ 0+α^2(Δ x_i Δ s_i - γ(Δ𝐱)^T Δ𝐬/n) + ασ (1-γ)𝐱^T𝐬/n ≥α^2(Δ x_i Δ s_i - γ(Δ𝐱)^T Δ𝐬/n) + ασ (1-γ)𝐱^T𝐬/n. Since 𝐱^T𝐬>0, using a simple continuity argument, we can infer the existence of a small enough f_i>0 s.t. f_i(α)≥ 0 for all α∈ [0, f_i]. Reasoning analogously, we have f̅_i(α)= (1-α)(γ̅𝐱^T𝐬/n -x_is_i)_≥ 0+α^2(γ̅(Δ𝐱)^T Δ𝐬/n-Δ x_i Δ s_i ) + ασ (γ̅-1)𝐱^T𝐬/n ≥α^2(γ̅(Δ𝐱)^T Δ𝐬/n-Δ x_i Δ s_i ) + ασ (γ̅-1)𝐱^T𝐬/n, and hence there exists a small enough f̅_i>0 s.t. f̅_i(α)≥ 0 for all α∈ [0, f̅_i]. Concerning h(α), we have h(α)= 𝐱^T𝐬(σ̅-σ) α- α^2 (Δ𝐱)^TΔ𝐬, and, since 𝐱^T𝐬(σ̅-σ)>0, there exists ĥ>0 small enough s.t. h(α)≥ 0 for all α∈ [0, ĥ]. 
Concerning g_d(α), we have g_d(α) = (1-α)(𝐱^T𝐬- γ_d (ρ(𝐱-𝐱_k)-A^T𝐲-𝐬 + 𝐜))_≥ 0 + ασ𝐱^T𝐬 +α^2 (Δ𝐱)^TΔ𝐬 ≥ασ𝐱^T𝐬 +α^2 (Δ𝐱)^TΔ𝐬, and hence there exists ĝ_d>0 small enough s.t. g_d(α)≥ 0 for all α∈ [0, ĝ_d]. Finally, concerning g_p(α), we have g_p(α) ≥ (1-α)(𝐱^T𝐬- γ_p (A𝐱 +δ(𝐲 - 𝐲_k)-𝐛))_≥ 0 + ασ𝐱^T𝐬 + +α^2 (Δ𝐱)^TΔ𝐬 - αγ_p ζ ≥α (σ - γ_p C_inexact )𝐱^T𝐬 +α^2 (Δ𝐱)^TΔ𝐬, and hence there exists ĝ_p>0 small enough s.t. g_p(α)≥ 0 for all α∈ [0, ĝ_p]. Let us define α̂^j = min{min_if_i, min_if̅_i, ĥ, ĝ_d, ĝ_p, 1} >0. To prove the thesis, it remains to show that α^*,j>α̂^j, i.e. that (𝐱(α),𝐲(α),𝐬(α) ) ∈ℝ^n_>0×ℝ^m ×ℝ^n_>0 for all α∈ [0,α̂^j]. To this aim, let us suppose by contradiction that α^*,j≤α̂^j. By definition of α^*,j, there exists ℓ̅∈{1, …,n} s.t. (x_ℓ̅+α^*,jΔ x_ℓ̅)(s_ℓ̅+α^*,jΔ s_ℓ̅)=0. We have hence f_ℓ̅(α^*,j)= - γ(𝐱(α^*,j))^T𝐬(α^*,j)/n ≥ 0 ⇒ (𝐱(α^*,j))^T𝐬(α^*,j) =0. From the above implication, using g_d(α^*,j) and g_p(α^*,j), we obtain that A𝐱(α^*,j) +δ(𝐲(α^*,j) -𝐲_k) - 𝐛 =0 ρ(𝐱(α^*,j) - 𝐱_k)-A^T𝐲(α^*,j) - 𝐬(α^*,j) +𝐜=0, i.e. (𝐱(α^*,j),𝐲(α^*,j),𝐬(α^*,j)) is a solution of problem (<ref>). We have hence obtained a contradiction since we are supposing that Algorithm <ref> did not stop at Line <ref>. Before proving the convergence of Algorithm <ref> we would like to emphasize that the above proof complements and expands <cit.>. The next two results establish that Algorithm <ref> converges to a solution of problem (<ref>). This is done by establishing that the right-hand sides of the Newton systems are uniformly bounded and by showing that the complementarity product (𝐱^j)^T𝐬^j cannot be bounded away from zero. The right-hand sides of the Newton systems are uniformly bounded. As a consequence of Theorem <ref>, we can suppose the existence of a sequence of iterates {(𝐱^j,𝐲^j,𝐬^j)}_j ∈ℕ produced by Algorithm <ref> s.t. (𝐱^j,𝐲^j,𝐬^j) ∈𝒩_k(γ̅,γ,γ_p,γ_d). Since by construction (𝐱^j)^T𝐬^j ≤ (𝐱^0)^T𝐬^0, we have from (<ref>) A𝐱^j+δ(𝐲^j-𝐲_k)-𝐛≤ (𝐱^0)^T𝐬^0/γ_p, ρ(𝐱^j-𝐱_k)-A^T𝐲^j-𝐬^j + 𝐜≤ (𝐱^0)^T𝐬^0/γ_d. Moreover, we have S^jX^j𝐞-σμ^j 𝐞≤S^jX^j𝐞+σμ^j 𝐞 ≤γ̅/√(n) (𝐱^j)^T𝐬^j + σ/√(n) (𝐱^j)^T𝐬^j ≤γ̅+σ/√(n) (𝐱^0)^T𝐬^0. Algorithm <ref> produces a sequence of iterates in 𝒩_k(γ̅,γ,γ_p,γ_d) s.t. lim (𝐱^j)^T𝐬^j=0, i.e. (𝐱^j,𝐲^j,𝐬^j) converges to a solution of problem (<ref>). Let us argue by contradiction supposing that there exists ε^*>0 s.t. (𝐱^j)^T𝐬^j> ε^* for all j ∈ℕ. Claim 1 There exists a constant C_1 dependent only on n s.t. [Δ𝐱^j, Δ𝐲^j, Δ𝐬^j ]^T≤ C_1 for all j ∈ℕ. The proof of this fact follows observing that the Newton matrices in (<ref>) satisfy all the hypothesis of <cit.>, i.e. they have a uniformly bounded inverse, and that the right-hand sides are uniformly bounded, see Corollary <ref>. As a consequence, there exists another constant C_2 s.t. |Δ x_i^j Δ s_i^j - γ(Δ𝐱^j)^T Δ𝐬^j/n| ≤ C_2, |γ̅(Δ𝐱^j)^T Δ𝐬^j/n-Δ x_i^j Δ s_i^j | ≤ C_2, |(Δ𝐱^j)^TΔ𝐬^j| ≤ C_2. Claim 2 There exists α^*>0 s.t. α^j ≥α^* for all j ∈ℕ. Using (<ref>) in equations (<ref>), (<ref>), (<ref>), (<ref>), (<ref>) we have f_i(α) ≥ α^2(Δ x_i^j Δ s_i^j - γ(Δ𝐱^j)^T Δ𝐬^j/n) + ασ (1-γ)(𝐱^j)^T𝐬^j/n ≥ - C_2 α^2 +ασ (1-γ) ε^*/n, f̅_i(α) ≥ (γ̅(Δ𝐱^j)^T Δ𝐬^j/n-Δ x_i^j Δ s_i^j ) + ασ (γ̅-1)(𝐱^j)^T𝐬^j/n ≥ - C_2 α^2 +ασ (γ̅-1) ε^*/n, h(α)= (𝐱^j)^T𝐬^j(σ̅-σ) α- α^2 (Δ𝐱^j)^TΔ𝐬^j ≥ ε^*(σ̅-σ) α - C_2 α^2, g_d(α) ≥ ασ (𝐱^j)^T𝐬^j +α^2 (Δ𝐱^j)^TΔ𝐬^j ≥ε^*σα - C_2 α^2, g_p(α) ≥ α (σ - q γ_p ) (𝐱^j)^T𝐬^j +α^2 (Δ𝐱^j)^TΔ𝐬^j ≥ε^*α (σ - γ_p C_inexact ) - C_2 α^2. 
Hence α^j≥α^*, where α^*:=min{ 1, σ(1-γ)ε^* /n/C_2,σ(γ̅-1)ε^* /n/C_2, (σ̅-σ)ε^*/C_2,σε^*/C_2, (σ- γ_p C_inexact) ε^*/C_2}. The convergence claim follows observing that the inequality ε^* ≤ (𝐱^j)^T𝐬^j ≤ (1 -(1-σ̅)α^* )^j(𝐱^0)^T𝐬^0 leads to a contradiction for j →∞. §.§ Polynomial Complexity In this section we show that the number of iterations needed to reduce μ^j below a certain tolerance ε grows polynomially with the size of the problem. Moreover, it is important to note that, from the definition of the central path 𝒩_k(γ̅,γ,γ_p,γ_d), the same number of iterations is sufficient to reduce also the primal and dual infeasibility below the tolerance ε. In the following we will omit the index j when this does not lead to ambiguities. It is important to note that the linear system in (<ref>) can be written in an alternative form as follows [ ρ I A^T -I; A -δ I 0; S 0 X ]_=:J[ Δ𝐱; -Δ𝐲; Δ𝐬 ] =[ ξ_d; ξ_p + ζ; ξ_μ,σ ]. Using <cit.>, we have that J^-1=[ H^-1 1/δH^-1A^T H^-1X^-1; 1/δAH^-1 1/δ^2AH^-1A^T- 1/δ I 1/δAH^-1X^-1; - Θ^-1 H ^-1 -1/δΘ^-1H^-1A^T (I-Θ^-1H^-1)X^-1 ], where H:= ρ I + Θ^-1 + 1/δA^TA. To prove polynomial complexity, we start by bounding the terms that appear in the expression (<ref>). The next two technical results are useful in this sense. We have that H^-1∈ O(1). Using the Sherman-Morrison-Woodbury formula, we get H^-1= (ρ I + Θ ^-1)^-1-1/δ(ρ I + Θ ^-1)^-1A^T(I + 1/δA(ρ I + Θ ^-1)^-1A^T)^-1A(ρ I + Θ ^-1)^-1. We observe (ρ I + Θ ^-1)^-1_ii=Θ_ii/ρΘ_ii + 1= ρΘ_ii/ρΘ_ii + 11/ρ< 1/ρ and hence H^-1 ≤ (ρ I + Θ ^-1)^-1· ·(1 + 1/δA^TA(I + 1/δA(ρ I + Θ ^-1)^-1A^T)^-1 (ρ I + Θ ^-1)^-1) ≤ 1/ρ(1+ 1/δρA^TA), where we used that (I + 1/δA(ρ I + Θ ^-1)^-1A^T)^-1≤ 1. We have that Θ^-1 H^-1∈ O(1). Using (<ref>), we observe that (Θ^-1 (ρ I + Θ ^-1)^-1)_ii= 1/ρΘ_ii+1 and hence, using (<ref>), we have Θ^-1 H^-1≤ Θ^-1 (ρ I + Θ ^-1)^-1 · (1+ 1/δA^TA(1 + 1/δA(ρ I + Θ ^-1)^-1A^T)^-1 (ρ I + Θ ^-1)^-1) ≤ 1+ 1/δρA^TA, where we used (<ref>). Therefore, we know that H^-1≤ C_3 and Θ^-1 H^-1≤ C_4, for some positive constants C_3 and C_4. Let us define C_5 := max{C_3, C_4}. Now that we have bounded the terms in (<ref>), we look for an upper bound on the norms of the Newton directions that depend polynomially on the size of the problem n. This is a crucial step to find a polynomial lower bound on the minimum stepsize (<ref>) which leads to the polynomial complexity result mentioned at the beginning of this section. There exists a positive constant C_6 s.t. [Δ𝐱^j, Δ𝐲^j, Δ𝐬^j ]^T≤ C_6 n√(μ^j) for all j ∈ℕ. Using (<ref>), we have Δ𝐱^j = (H^j)^-1ξ^j_d +1/δ(H^j)^-1A^T(ξ^j_p+ζ^j)+(H^j)^-1 (X^j)^-1ξ^j_σ, μ^j, -Δ𝐲^j = 1/δA(H^j)^-1ξ^j_d +(1/δ^2A(H^j)^-1A^T-1/δI)(ξ^j_p+ζ^j) + 1/δA(H^j)^-1 (X^j)^-1ξ^j_σ, μ^j, and hence Δ𝐱^j≤ (H^j)^-1ξ^j_d +1/δ(H^j)^-1A^Tξ^j_p+ζ^j + (H^j)^-1/2(H^j)^-1/2(X^j)^-1/2(S^j)^1/2(X^j)^1/2(S^j)^1/2𝐞 - σμ^j (X^j)^-1/2(S^j)^-1/2𝐞 ≤ C_5 n μ^j/γ_d + C_5 A^T (1+γ_p C_inexact) nμ^j/δγ_p+ C_5 (√(γ̅n )+σ√(n)/√(γ)) √(μ^j) ≤ (C_5 √(μ^0)/γ_d + C_5 A^T (1+γ_p C_inexact)√(μ^0)/δγ_p+ C_5 (√(γ̅)+σ1/√(γ)) ) n√(μ^j) ≤ C_Δ xn √(μ^j) , where we used (X^j)^1/2(S^j)^1/2𝐞 - σμ^j (X^j)^-1/2(S^j)^-1/2𝐞 ≤ (X^j)^1/2(S^j)^1/2𝐞+ σμ^j (X^j)^-1/2(S^j)^-1/2𝐞, μ^j=√(μ^j) √(μ^j)≤√(μ^0) √(μ^j) and √(n)≤ n, and where C_Δ x is a positive constant. 
Analogously Δ𝐲^j≤ 1/δA(H^j)^-1ξ^j_d + (1/δ^2(H^j)^-1AA^T+ 1/δ) ξ^j_p+ζ^j + 1/δA(H^j)^-1/2(H^j)^-1/2(X^j)^-1/2(S^j)^1/2· ·(X^j)^1/2(S^j)^1/2𝐞 - σμ^j (X^j)^-1/2(S^j)^-1/2𝐞 ≤ AC_5 n μ^j/δγ_d+ (C_5 A^TA+δ )(1+γ_p C_inexact)n μ^j/δ^2 γ_p+ + C_5A/δ(√(γ̅n )+σ√(n)/√(γ)) √(μ^j) ≤ (AC_5 √(μ^0)/δγ_d+ (C_5 A^TA+δ )(1+γ_p C_inexact) √(μ^0)/δ^2 γ_p+ + C_5A/δ(√(γ̅)+σ1/√(γ)) )n √(μ^j) ≤ C_Δ yn √(μ^j) , where C_Δ y is a positive constant. Finally, using the fact that Δ𝐬^j= ρΔ𝐱^j-A^T Δ𝐲^j -ξ_d^j and using (<ref>), (<ref>), and the definition of 𝒩_k(γ̅,γ,γ_p,γ_d), we have Δ𝐬^j= ρΔ𝐱^j+A^TΔ𝐲^j +ξ_d^j ≤ ρ C_Δ xn √(μ^j) +A^T C_Δ yn √(μ^j)+n μ^j/γ_d ≤ (ρ C_Δ x +A^T C_Δ y +√(μ^0)/γ_d)n√(μ^j) ≤ C_Δ sn√(μ^j), where C_Δ s is a positive constant. The thesis follows setting C_6 = max(C_Δ x,C_Δ y,C_Δ s). The next straightforward technical result specializes the polynomial bound of Theorem <ref> to the terms that appear in equations (<ref>)-(<ref>). There exists a positive constant C_7 such that, for all i |Δ x_i Δ s_i - γ(Δ𝐱)^T Δ𝐬/n| ≤ C_7 n^2 μ, |γ̅(Δ𝐱)^T Δ𝐬/n-Δ x_i Δ s_i | ≤ C_7 n^2 μ, |(Δ𝐱)^TΔ𝐬| ≤ C_7 n^2 μ. We can now apply the previous results and obtain a bound similar to (<ref>), but that depends polynomially on the size of the problem n. This is the last fundamental step before the polynomial complexity result can be stated. There exists a constant α̃ s.t. α^j ≥α̃ for all j ∈ℕ and α̃≥ C_8 n^-2, where C_8 is a positive constant. Using (<ref>) in equations (<ref>), (<ref>), (<ref>), (<ref>), (<ref>) we have: f_i(α) ≥ α^2(Δ x_i Δ s_i - γ(Δ𝐱)^T Δ𝐬/n) + ασ (1-γ)𝐱^T𝐬/n ≥ - C_7n^2μα^2 +ασ (1-γ) μ, f̅_i(α) ≥ α^2(γ̅(Δ𝐱)^T Δ𝐬/n-Δ x_i Δ s_i ) + ασ (γ̅-1)𝐱^T𝐬/n ≥ - C_7 n^2 μα^2 +ασ (γ̅-1) μ, h(α)= 𝐱^T𝐬(σ̅-σ) α- α^2 (Δ𝐱)^TΔ𝐬≥ n μ (σ̅-σ) α - C_7 n^2 μα^2, g_d(α) ≥ ασ𝐱^T𝐬 +α^2 (Δ𝐱)^TΔ𝐬≥ n μσα - C_7n^2 μα^2, g_p(α) ≥ α (σ - γ_p C_inexact) 𝐱^T𝐬 +α^2 (Δ𝐱)^TΔ𝐬≥ n μα (σ - γ_p C_inexact ) - C_7n^2 μα^2. Hence, defining α̃:=min{ 1, σ(1-γ)/C_7n^2, σ(γ̅-1)/C_7n^2, (σ̅-σ)/C_7n, σ/C_7n, (σ- γ_p C_inexact)/C_7n}, the thesis follows observing that, by definition, α^j ≥α̃. Finally, we are ready to show that the number of iterations required to reduce μ below a certain tolerance ε is proportional to n^2. Algorithm <ref> has polynomial complexity, i.e. given ε>0 there exists K ∈ O(n^2ln (1/ε) ) s.t. μ^ j ≤ε for all j ≥ K. Thesis follows observing that (𝐱^j)^T𝐬^j ≤(1 -(1-σ̅)α̃)^j(𝐱^0)^T𝐬^0 ≤(1 -(1-σ̅) C_8/n^2)^j(𝐱^0)^T𝐬^0. § PROPERTIES OF THE REGULARIZED NORMAL EQUATIONS SYSTEM In this section we show some properties of matrix S_ρ,δ that are useful for the analysis performed in the next section. For the original graph G(V,E), we define the adjacency matrix 𝒜∈ℝ^|V|×|V| such that 𝒜_ij=1 if there exists an edge between nodes i and j and 𝒜_ij=0 otherwise. Notice that for an undirected graph 𝒜 is symmetric; we assume that there are no self-loops, so that the diagonal of 𝒜 is made of zeros. Let us define the degree matrix of a graph 𝒟∈ℝ^|V|×|V| such that 𝒟 is diagonal and 𝒟_jj is the degree of node j. Notice that 𝒟_jj = (𝒜𝐞)_j. Let us define also the Laplacian matrix of a graph as ℒ∈ℝ^|V|×|V| such that ℒ=𝒟-𝒜. An important relation between the Laplacian ℒ and the node-arc incidence matrix A is that ℒ = AA^T. Given a diagonal matrix Θ and a parameter ρ, we define the re-weighted graph G_Θ as the graph with the same connectivity of G, in which the weight of every edge j is scaled by a factor √(Θ_jj/1+ρΘ_jj). The adjacency matrix of the new graph 𝒜_Θ has the same sparsity pattern of 𝒜, but takes into account the new weight of the edges. 
The same happens for the degree matrix 𝒟_Θ. The incidence matrix therefore becomes A_Θ = A(Θ^-1+ρ I)^-1/2. The new Laplacian matrix thus reads ℒ_Θ = 𝒟_Θ-𝒜_Θ and can be written as ℒ_Θ = A_Θ A_Θ^T = A(Θ^-1+ρ I)^-1A^T. We have just shown that the normal equations matrix can be interpreted as the Laplacian matrix of the original graph, where the edges are weighted according to the diagonal entries of matrix Θ. This result is important because solving linear systems that involve Laplacian matrices is much easier than solving general linear systems. We summarize this result in the following Lemma. The matrix A(Θ^-1+ρ I)^-1A^T is the Laplacian of a weighted undirected graph and hence, for every Θ∈ℝ^n× n_+ A(Θ^-1+ρ I)^-1A^T=𝒟_Θ-𝒜_Θ, where 𝒟_Θ and 𝒜_Θ are the degree and adjacency matrices of the weighted graph. The next result shows that the normal matrix is strictly diagonally dominant, due to the presence of dual regularization. This property is significant because it assures that the incomplete Cholesky factorization of the normal equations matrix S_ρ,δ can always be computed without the algorithm breaking down (in exact arithmetic), see e.g. <cit.>. If δ>0 the matrix S_ρ, δ is strictly diagonally dominant. From Lemma <ref>, we have that A(Θ^-1+ρ I)^-1A^T=𝒟_Θ-𝒜_Θ and hence ∑_j ≠ i |(S_ρ, δ)_ij|= ∑_j ≠ i | -(𝒜_Θ)_ij |=∑_j ≠ i (𝒜_Θ)_ij < (𝒟_Θ)_ii+δ = (S_ρ, δ)_ii. The next two technical results are related to the distribution of eigenvalues of the normal matrix. They are used in the next section to show that the inexactness introduced when sparsifying the normal matrix remains bounded. λ_max(AA^T) ≤ 2 max_v ∈ V deg(v). The proof is straightforward using Gershgorin's circle Theorems. The eigenvalues λ of matrix S_ρ,δ satisfy δ≤λ < δ + 2/ρmax_v ∈ V deg(v). Using the Rayleigh quotient, for some vector 𝐮 and 𝐯=A^T𝐮, the eigenvalues can be written as λ = 𝐯^T(Θ^-1+ρ I)^-1𝐯/𝐯^T𝐯𝐮^T AA^T𝐮/𝐮^T𝐮 + δ The lower bound λ≥δ is trivial; the upper bound follows from Lemma <ref> and (<ref>). § SPARSIFICATION OF THE REDUCED MATRIX We now propose a technique to reduce the number of nonzeros in the normal equations S_ρ,δ, based on the weights of the edges in the re-weighted graph (according to Lemma <ref>). We then show that this sparsification strategy is sound and produces a polynomially convergent interior point algorithm. In this section we omit the IPM iteration counter j and we consider all the IPM-related quantities as a function of μ→ 0. As IPMs progress towards optimality, we expect the following partition of the diagonal matrix Θ contributed by the barrier term: ℬ:={ i=1,…,n s.t. x_i → x_i^*>0, s_i → s_i^*=0 } 𝒩:={ i=1,…,n s.t. x_i → x_i^*=0, s_i → s_i^*>0 }, where the optimal solution (x^*,y^*,s^*) was defined in (<ref>). Notice that (ℬ,𝒩) is the partition corresponding to the optimal solution. We suppose that the following asymptotic estimates hold s_i ∈ O(μ) and x_i ∈ O(1) for i ∈ℬ x_i ∈ O(μ) and s_i ∈ O(1) for i ∈𝒩 and, since x^-1_is_i≈μ x^-2_i when an IPM iterate is sufficiently close to the central path, using (<ref>), we suppose Θ_ii^-1=x^-1_is_i = O(μ) for i ∈ℬ and Θ_ii^-1=x^-1_is_i= O(μ^-1) for i ∈𝒩. This assumption makes sense given the neighbourhood that is considered, see e.g. <cit.>. Due to Assumption <ref>, we consider the following asymptotic estimates of (Θ^-1+ρ I)^-1 (Θ^-1+ρ I)^-1_ii= O(1/ρ+μ) if i ∈ℬ O(μ/1+ρμ ) if i ∈𝒩. 
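To see how sharply these two groups of diagonal entries separate as μ decreases, here is a toy numerical illustration; the values are synthetic and chosen by us only to match the asymptotic estimates above.

```python
import numpy as np

rng = np.random.default_rng(0)
rho, mu = 1e-4, 1e-8
nB, nN = 6, 6   # sizes of the two index sets B and N (toy values)

# Synthetic iterate consistent with the asymptotic estimates:
# Theta^{-1} = s/x is O(mu) on B and O(1/mu) on N.
theta_inv = np.concatenate([mu * rng.uniform(0.5, 2.0, nB),
                            rng.uniform(0.5, 2.0, nN) / mu])
d = 1.0 / (theta_inv + rho)   # diagonal of (Theta^{-1} + rho I)^{-1}

print("entries on B (approx. 1/rho):", d[:nB])
print("entries on N (approx. mu)   :", d[nB:])
# The two groups differ by many orders of magnitude: roughly 1/rho on B
# versus roughly mu on N.
```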
The diagonal entries of Θ give a specific weight to each column of matrix A (or equivalently, give a weight to each edge of the original sparse graph, as shown in Lemma <ref>). The columns for which the corresponding Θ_ii is O(μ) have a very small impact on the normal matrix, but still contribute to its sparsity pattern. In order to save time and memory when forming (complete or incomplete) Cholesky factorizations, we propose the following sparsification strategy: we introduce a suitable threshold C_t ∈ℝ_+ and define (Θ_C_t μ,ρ^†)_ii:= (Θ^-1+ρ I)^-1_ii if (Θ^-1+ρ I)^-1_ii≥C_t μ/1+ρμ 0 if (Θ^-1+ρ I)^-1_ii < C_t μ/1+ρμ . We define the μ-sparisified version S^C_t μ_ρ, δ of S_ρ, δ as S^C_t μ_ρ, δ:= AΘ_C_t μ,ρ^†A^T+δ I. Notice that this matrix completely ignores some of the columns of A (and some of the edges of the graph). The dual regularization δ I guarantees that the resulting matrix is non-singular, irrespective of the level of sparsification chosen. In this paper, we consider using inexact Newton directions produced by solving linear systems with matrix S^C_t μ_ρ, δ, rather than S_ρ, δ. It is important to note that, in general, the sparsity pattern of the matrix S^C_t μ_ρ, δ depends on the choice of the parameter C_t and on the partitioning (ℬ, 𝒩). Indeed, when μ is sufficiently small, we expect that |{i ∈{1, …, n} s.t. (Θ^-1+ρ I)^-1_ii≥C_t μ/1+ρμ}| = |ℬ| . Let us now show how the algorithm is affected by the use of the proposed sparsified normal matrix. Notice that the results presented below depend strongly on two facts: the optimization problem evolves on a graph and thus the normal matrix is a Laplacian, with very desirable properties; the interior point method employs primal-dual regularization. We start by showing how much the normal matrix deviates from its sparsified counterpart. The sparsification strategy in (<ref>) produces a matrix which is close to the original S_ρ,δ, in the sense that S_ρ, δ- S^C_t μ_ρ, δ=O(C_t μ/1+ρμ). We have that S_ρ, δ- S^C_t μ_ρ, δ = AE^μA^T where E^μ is the diagonal matrix defined as E^μ_ii:= (Θ^-1+ρ I)^-1_ii-(Θ_C_tμ, ρ^†)_ii= 0 if (Θ^-1+ρ I)^-1_ii≥C_t μ/1+ρμ (Θ^-1+ρ I)^-1_ii if (Θ^-1+ρ I)^-1_ii < C_t μ/1+ρμ. Hence we have λ_max(E^μ) ≤C_t μ/1+ρμ. Thesis follows using Lemma <ref> and observing that 𝐯^T(S_ρ, δ- S^C_t μ_ρ, δ)𝐯/𝐯^T𝐯=𝐯^T(AE^μ A^T)𝐯/𝐯^T𝐯≤λ_max(E^μ)λ_max(AA^T). We now show that the condition number of both matrices is uniformly bounded. This is an important property when using iterative Krylov solvers to find the Newton directions. When μ is sufficiently small, the condition numbers of S_ρ, δ and S^C_t μ_ρ, δ satisfy k_2( S_ρ, δ) ∈ O(1+ 1/δ(ρ + μ)) and k_2( S^C_t μ_ρ, δ) ∈ O(1+ 1/δ(ρ + μ)). The thesis follows from Lemma <ref> and observing that for 𝐯 = A^T𝐰 δ≤𝐰^TS_ρ, δ𝐰/𝐰^T𝐰≤δ + 𝐯^T(Θ^-1+ρ I)^-1𝐯/𝐯^T𝐯𝐰^TAA^T 𝐰/𝐰^T𝐰 ≤δ + O(1/ρ+μ(2 max_v ∈ V deg(v))) . We now show that the solution of the sparsified linear system is “close" to the solution of the original one, and the bound depends on μ. This result depends on the spectral distribution that was shown in the previous section. For all 𝐯∈ℝ^m we have that (S^C_tμ_ρ, δ)^-1𝐯=S^-1_ρ, δ𝐯+ψ where ψ∈ O(C_t μ/δ^2 (1 + ρμ)𝐯). Using Lemma <ref> and Theorem <ref>, we have that I-S_ρ, δ^-1S_ρ, δ^C_tμ≤S_ρ, δ^-1 S_ρ, δ- S^C_t μ_ρ, δ∈ O(C_tμ/δ(1+ρμ) ), and hence (S_ρ, δ^-1-(S_ρ, δ^C_tμ)^-1)𝐯≤(S_ρ, δ^C_tμ)^-1S_ρ, δ^-1 (S_ρ, δ- S^C_t μ_ρ, δ)𝐯∈ O( C_t μ/δ^2 (1 + ρμ)𝐯). The next technical result is useful for the proof of Corollary <ref>. ξ̅_p^j is uniformly bounded, i.e. 
there exists a constant C_9>0 such that for all j∈ℕ ξ̅_p^j≤ C_9 For the sake of clarity of notation, we do not include the index j in the proof. To bound ξ̅_p, consider the following estimate ξ̅_p≤ξ_p + A( (Θ^-1+ρ I)^-1X^-1ξ_μ,σ+ (Θ^-1+ρ I)^-1ξ_d). We already know the following estimates ξ_p≤μ n/γ_p≤μ^0 n/γ_p, ξ_d≤μ n/γ_d≤μ^0 n/γ_d, (Θ^-1+ρ I)^-1≤1/ρ. To estimate (Θ^-1+ρ I)^-1X^-1ξ_μ,σ, we proceed as in (<ref>): (Θ^-1+ρ I)^-1X^-1ξ_μ,σ= (Θ^-1+ρ I)^-1(S𝐞-σμ X^-1𝐞)≤ ≤(Θ^-1+ρ I)^-1/2(Θ^-1+ρ I)^-1/2X^-1/2S^1/2(X^1/2S^1/2𝐞+σμX^-1/2S^-1/2𝐞). It is straightforward to prove that (Θ^-1+ρ I)^-1/2≤1/ρ^1/2, (Θ^-1+ρ I)^-1/2X^-1/2S^1/2≤1. The remaining terms can be bounded using the properties of the neighbourhood X^1/2S^1/2𝐞≤√(μγ̅n), σμX^-1/2S^-1/2𝐞≤σ√(μ n/γ). Since μ≤μ^0, we deduce that ξ̅_p≤ C_9, for some positive constant C_9. Finally, we show that, for a small enough constant C_t, the inexactness introduced by the sparsification strategy satisfies the Assumption (<ref>). Therefore, an algorithm that includes such a sparsification strategy retains the polynomial complexity of the inexact IPM shown in Section <ref>. If in Algorithm <ref> we generate the search directions using (S^C_tμ_ρ, δ)^-1 with C_t sufficiently small, i.e. if we compute the search directions using (<ref>), (<ref>) and (<ref>) where S^C_t μ_ρ, δ substitutes S_ρ, δ, then Algorithm <ref> is convergent. Using Theorem <ref>, we have (S^C_tμ_ρ, δ)^-1ξ̅_p=S^-1_ρ, δξ̅_p+ψ where ψ≤ C_10C_t μ/δ^2 (1 + ρμ)ξ̅_p for some constant C_10>0. Hence S_ρ, δ(S^C_tμ_ρ, δ)^-1ξ̅_p_= Δ𝐲=ξ̅_p+S_ρ, δψ. Recall (<ref>) and Lemma <ref>; the thesis follows observing that there exists a constant C_11>0 s.t. S_ρ, δψ≤ C_11C_t μ/δ^2 (1 + ρμ)≤ C_inexact𝐱^T𝐬 where the last inequality holds if C_t < δ^2 (1 + ρμ) nC_inexact/C_11. § NUMERICAL RESULTS The proposed method is compared with Lemon (Library for Efficient Modelling and Optimization on Networks) <cit.>, an extremely efficient implementation of the network simplex method, that has been shown to significantly outperform other popular implementations, like Cplex, see e.g. <cit.>. The network simplex method has been shown <cit.> to be very competitive against other algorithms specifically developed for discrete OT, while remaining very robust and adaptable to many types of problems. Let us highlight that the other algorithms available in Lemon (cost scaling, capacity scaling, cycle cancelling) produced worse results than the network simplex. All the computational tests discussed in this section are performed using a Dell PowerEdge R740 running Scientific Linux 7 with 4 × Intel Gold 6234 3.3G, 8C/16T, 10.4GT/s, 24.75M Cache, Turbo, HT (130W) DDR4-2933, with 500GB of memory. The PS-IPM implementation closely follows the one from <cit.> and is written in Matlab®. The software versions used for the numerical experiments are as follows: Matlab R2022a, Lemon 1.3.1 and GCC 4.8.5 as the compiler. We stop Algorithm <ref>, when 𝐠 -A^T𝐲-𝐬_∞≤ R · tol ∧ 𝐛 -A𝐱_1≤ R · tol ∧ C_𝐱, 𝐬≤ tol, where tol =10^-10, R:=max{A_∞, 𝐛_1, 𝐜_1 }, and C_𝐱, 𝐬:=max_i{min{|(𝐱_i𝐬_i)|,|𝐱_i|, |𝐬_i|}}. Concerning the choice of the parameters in Algorithm <ref>, we set σ_r = 0.7. Moreover, to prevent wasting time on finding excessively accurate solutions in the early PPM sub-problems, we set τ_1=10^-4, i.e. we use as inexactness criterion for the PPM method 𝐫_k(𝐱_k+1,𝐲_k+1)) < 10^4 σ_r^k min{1, (𝐱_k+1, 𝐲_k+1)-(𝐱_k, 𝐲_k) . 
Indeed, in our computational experience, we have found that driving the IPM solver to a high accuracy in the initial PPM iterations is unnecessary and usually leads to a significant deterioration of the overall performance. Concerning Algorithm <ref>, we set as regularization parameters ρ=10^-4 and δ = 10^-6. Moreover, in order to find the search direction, we employ a widely used predictor-corrector method <cit.>. This issue represents the main point where practical implementation deviates from the theory in order to gain computational efficiency. Finally, concerning the test problems, in all the following experiments we generate the load vector ρ_1-ρ_0 in (<ref>) randomly and such that the sum of its entries is zero (to guarantee feasibility of the optimization problem), with only 10% of them being nonzeros. Moreover, we fix the weight of each edge at 1. §.§ Analysis of the sparsification strategy In this section, we compare three possible solution strategies inside the PS-IPM: Cholesky factorization (using Matlab's function) applied to the full normal equations matrix (<ref>); Cholesky factorization (always using Matlab's function) applied to the sparsified matrix (<ref>); preconditioned conjugate gradient (PCG) (using Matlab's function) applied to the sparsified matrix (<ref>) with incomplete Cholesky preconditioner (computed using Matlab's function). More in particular, as sparsification parameter in (<ref>) we use C_t = 0.4, = 10^-3 in and = 10^-1μ in . We test the above mentioned solution strategies on various instances generated with the generator <cit.>; in particular, we considered the graphs , , and , with a fixed number of 100,000 nodes and different densities (i.e. average number of edges per node). Therefore, for these instances, m=|V|=100,000 and n=|E|=m·density. In the upper panels of Figures <ref> and <ref> we report the computational time of the three approaches for various values of densities (chosen in relation to the properties of the graph), whereas in the lower panels we report the total number of IPM iterations. From the presented numerical results, it is clear that the sparsification strategy, in conjunction with the iterative solution of the linear systems, provides a clear advantage over the use of a direct factorization. As can be expected, the iterative method and the sparsification strategy become more advantageous when the size of the problem (number of edges) increases. On the other hand, it is important to note that the use of the sparsified Newton equations in conjunction with the full Cholesky factorization presents only limited advantages in terms of computational time when compared to the Cholesky factorization of the full Newton normal equation. This is the case because the resulting inexact IPM requires, generally, more iterations to converge (see lower panels of Figures <ref> and <ref>). Advantages of the proposed approach become clearer when the graphs are denser. §.§ Results on randomly generated graphs In this section, we compare the PS-IPM algorithm, using the sparsified normal equations matrix and the PCG, with the network simplex solver of Lemon. For PS-IPM we use the same parameters as proposed in Section <ref>. The graphs used in this section come from the generator developed in <cit.> and already used for OT on graphs in <cit.>. This generator produces random connected graphs with a number of nodes varying from 1,000 to 10,000,000 and degrees of each node in the range [1,10], with an average of 5 edges per node. 
For each size, 10 graphs and load vectors are generated and tested. These parameters closely resemble the ones used in <cit.>. Figure <ref> shows the comparison of the computational time between PS-IPM and Lemon: for each size of the problem (indicated by the total number of edges), we report the summary statistics of the execution times using Matlab's . For small size problems, Lemon is the clear winner, by two orders of magnitude; however, as the size increases, the performance difference between the two methods reduces and for the largest instance considered, Lemon becomes one order of magnitude slower than PS-IPM. Figure <ref> shows the average computational time against the number of edges (from 5,000 to 50M) in a logarithmic scale, the corresponding regression lines and their slopes. Both the proposed method and the network simplex (see <cit.>) are known to have polynomial complexity in terms of number of iterations (although the estimates are usually very pessimistic for IPMs); from the computational results presented, we can estimate the practical time complexity of both methods. Recall that, in a log-log plot, polynomials of the type x^m appear as straight lines with slope m. Using linear regression, we can estimate that the time taken by Lemon grows with exponent approximately 2.06, while the time taken by PS-IPM grows with exponent approximately 1.28, providing a considerable advantage for large sizes. Looking at the full set of results as reported in Figure <ref>, let us finally mention that the variance of the computational times over the 10 runs for a given problem size is smaller when using PS-IPM, especially for large sizes, indicating that the method is more robust and less dependent on the specific problem being solved. This is a very desirable property. §.§ Results on SuiteSparse graphs Results on randomly generated problems do not necessarily represent the ability of an optimization method to tackle problems coming from real world applications. Therefore, in this section, we show the results of applying PS-IPM and Lemon to some sparse graphs from the SuiteSparse matrix collection <cit.>. The characteristics of the graphs considered are shown in Table <ref>: the number of nodes, edges and the average number of edges per node. All the graphs are undirected and connected. Because the considered graphs are particularly sparse, in the numerical results presented in this section, we solve the sparsified normal equations using the full Cholesky factorization. Figure <ref> shows the computational times for the eight problems considered, using PS-IPM and Lemon. Apart from the problem , which represents a relatively small instance in our dataset, on all the other problems PS-IPM consistently outperforms Lemon in terms of required computational time. In particular, for the problems , , and , which reach up to 16 million nodes and 48 million edges, PS-IPM is one order of magnitude faster than Lemon. Notice that graphs of these sizes (and larger) appear in many modern practical applications, e.g. social networks, PageRank, analysis of rail/road networks, energy models, to mention a few. Looking at the regression lines and their slopes, we notice that the time taken by Lemon grows with exponent approximately 2.07 while the time taken by PS-IPM grows with exponent approximately 1.40. These values are very close to the ones found previously for randomly generated graphs.
The data of Figure <ref>, however, show a more erratic behaviour than the times reported in Figure <ref>, because the properties of each graph considered are different and because we are not averaging over 10 different instances of each problem. Let us also highlight that the time taken by Lemon seems to be more problem dependent, while PS-IPM looks more consistent and robust. § CONCLUSION An efficient computational framework for the solution of Optimal Transport problems on graphs has been presented in this paper. The framework relies on a Proximal-Stabilized Interior Point Method and on clever sparsifications of the normal Newton equations to compute the inexact search directions. The proposed technique is sound, and a polynomial convergence guarantee has been established for the inner inexact IPM. Extensive numerical experiments show that, for large-scale problems, a simple prototype implementation is able to consistently outperform a highly specialized and very efficient implementation of the network simplex method. We also highlight that Interior Point Methods are more easily parallelizable than simplex-like methods; for huge-scale problems, for which high-performance computing resources need to be used, properly parallelized IPMs may be the only viable solution strategy.
http://arxiv.org/abs/2307.04414v1
20230710084225
Optical-power-dependent splitting of magnetic resonance in nitrogen-vacancy centers in diamond
[ "Shuji Ito", "Moeta Tsukamoto", "Kensuke Ogawa", "Tokuyuki Teraji", "Kento Sasaki", "Kensuke Kobayashi" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall", "physics.app-ph" ]
Department of Physics, The University of Tokyo, Bunkyo-ku, Tokyo, 113-0033, Japan Department of Physics, The University of Tokyo, Bunkyo-ku, Tokyo, 113-0033, Japan Department of Physics, The University of Tokyo, Bunkyo-ku, Tokyo, 113-0033, Japan National Institute for Materials Science, Tsukuba, Ibaraki 305-0044, Japan Department of Physics, The University of Tokyo, Bunkyo-ku, Tokyo, 113-0033, Japan Department of Physics, The University of Tokyo, Bunkyo-ku, Tokyo, 113-0033, Japan Institute for Physics of Intelligence, The University of Tokyo, Bunkyo-ku, Tokyo 113-0033, Japan Trans-scale Quantum Science Institute, The University of Tokyo, Bunkyo-ku, Tokyo 113-0033, Japan Nitrogen-vacancy (NV) centers in diamonds are a powerful tool for accurate magnetic field measurements. The key is precisely estimating the field-dependent splitting width of the optically detected magnetic resonance (ODMR) spectra of the NV centers. In this study, we investigate the optical power dependence of the ODMR spectra using NV ensemble in nanodiamonds (NDs) and a single-crystal bulk diamond. We find that the splitting width exponentially decays and is saturated as the optical power increases. Comparison between NDs and a bulk sample shows that while the decay amplitude is sample-dependent, the optical power at which the decay saturates is almost sample-independent. We propose that this unexpected phenomenon is an intrinsic property of the NV center due to non-axisymmetry deformation or impurities. Our finding indicates that diamonds with less deformation are advantageous for accurate magnetic field measurements. Optical-power-dependent splitting of magnetic resonance in nitrogen-vacancy centers in diamond Kensuke Kobayashi Received / Accepted ============================================================================================== § INTRODUCTION A nitrogen-vacancy (NV) center in a diamond is a defect where a nitrogen atom replaces a carbon atom in the lattice with a vacancy at its neighboring site. The NV center has an electron spin S=1, and its peculiar spin-dependent optical transitions enable the optical initialization and readout of the ground-state spin. This property has been applied to the quantum sensing of local magnetic fields <cit.> and temperature <cit.>. Researchers have applied the technique to measure various physical properties, such as observing the electron flow in graphene <cit.> and the stray fields from magnetic domain walls of a single-crystal antiferromagnet Cr_2O_3 <cit.>. The basis for these achievements is the ability to accurately measure local magnetic fields on the order of μT using NV centers. Optically detected magnetic resonance (ODMR) is a typical and basic measurement technique for quantum sensing using NV centers. This technique measures the microwave (MW) frequency dependence of the photoluminescence (PL) intensity (red) when the NV centers are continuously irradiated with an excitation light (green) and MW. The ODMR spectrum presents a magnetic resonance signal between the ground state spin levels m_S=0 and m_S=±1. The resonance frequency splits against the magnetic field due to the Zeeman effect <cit.> and shifts in the same direction against temperature change <cit.>. In addition, the splitting of the resonance frequency is affected by crystal strain <cit.>, electric field <cit.>, and hyperfine interactions <cit.>. Therefore, it is essential for accurate sensing to estimate the splitting width purely due to the magnetic field from the ODMR spectra. 
Commonly used diamond samples are single-crystal bulk diamonds and nanodiamonds (NDs) with grain sizes ranging from tens to hundreds of nanometers <cit.>. Depending on whether the diamond is a bulk crystal or nanoparticles, there are variations in crystal strains, impurity density, and crystal orientation. The ODMR spectra of NV centers vary with the excitation light power. For example, the contrast and linewidth vary with the degree of initialization and spin relaxation associated with optical excitation <cit.>. These dependencies only affect sensitivity but not accuracy. Recently, however, it was reported that the ODMR spectra of NV centers in NDs at low magnetic fields change with the optical power, degrading the accuracy of temperature measurements <cit.>. They found that a change in the ODMR splitting up to 2.8 MHz (equivalent to Zeeman splitting for 50 μT) occurred depending on the optical power. This unexpected observation directly affects the accuracy of the conversion of the ODMR splitting to magnetic field, which is a critical issue in achieving the μT-order magnetic field measurements necessary for the physical properties measurements. In particular, in wide-field imaging of magnetic field and temperature using a CMOS camera and NV ensembles <cit.>, inhomogeneity of the optical power within the field of view could result in degradation of the measurement of the magnetic field and temperature distributions. Thus, it is crucial to investigate the extent to which this phenomenon is universal for various samples, i.e., bulk diamonds as well as NDs. In this study, we investigate the dependence of the ODMR splitting on the optical power using several NV ensemble samples. We first investigate the NV ensembles in NDs with a grain size of 100 nm, the same size as in the previous study <cit.>. We confirm the reported behavior of the ODMR splitting to decrease with increasing optical power. In addition, we measure the ODMR spectra over a broader optical power range than in the previous study. We thereby find the splitting decays exponentially with the optical power and saturates at a constant value. We observe similar behavior in NDs with a different grain size of 50 nm. We then investigate NV ensembles in a single-crystal bulk diamond with much fewer impurities and strain than NDs and find a weaker but similar behavior. We prove the irrelevance of magnetic field and temperature on this observation and discuss possible mechanisms to account for this phenomenon. Finally, we propose the possibility that repetitive photoionization of impurities averages the local non-axisymmetry environment of NV centers and a systematic method to deal with this phenomenon. This paper is organized as follows. Sec. <ref> describes the experimental setup and defines the optical power in this study. Sec. <ref> reproduces the previous study <cit.> using NDs and confirms that the ODMR spectra change with optical power. Sec. <ref> shows that a similar phenomenon occurs even in the single-crystal bulk diamond. Sec. <ref> analyzes the dependence of the ODMR splitting on the optical power. In Sec. <ref>, we discuss the influence of the magnetic field and temperature, possible mechanisms, and implications of the present finding. Sec. <ref> presents our conclusions. § EXPERIMENTS Figure 1(a) shows an overview of the experimental setup <cit.>. All measurements in this study are performed in a confocal system at room temperature. 
A green laser with a wavelength of 520 nm (Oxxius, LBX-520-70-CSB-PPA) is applied for initialization and readout of the NV centers. The intensity of the green laser is adjusted using several fixed neutral density filters as appropriate. The intensity of the red emission from the NV centers is detected by an avalanche photodiode (APD) after passing through a dichroic mirror, a 514 nm notch filter, a 650 nm long-pass filter, and an 800 nm short-pass filter. When measuring NV centers in nanodiamonds, the red emission counts were suppressed using a fixed neutral density filter to match the APD measurement range. We use a MW antenna for spin manipulation of the NV centers, which is a coplanar waveguide with ground consisting of a 1.6 mm thick PCB substrate and an 18 μm thick copper foil with a 2 mm width centerline terminated with a 50 Ω resistor. The antenna is impedance matched so that no frequency dependence of the MW power at a sample position is present during the measurement. We confirm that from S11 parameter. Microwaves are output from a vector signal generator at approximately -13 dBm and input to a microwave antenna after passing through an MW amplifier (typ. +45 dB). In all measurements in this paper, the microwave power is fixed at the above values. We use three types of diamond samples, #1, #2, and #3, in the present study: NDs with nominal grain sizes of ϕ50 nm (#1) and of ϕ100 nm (#2), and NV ensemble in a bulk diamond film (#3). The NDs are those commercially available from Adámas Nanotechnologies, NDNV50nmHi10ml for #1 and NDNV100nm10ml for #2. In the measurements of #1 and #2, we prepare a ND film [see Fig. 1(b)], which is the NDs spin-coated on a cover glass at 600 rpm <cit.>. The thickness of the ND film made by this method is typically about 200–1000 nm <cit.>. The number of NDs in #1 and #2 within a laser irradiation area is estimated to be several hundred and more than 20, respectively. The ND film is fixed to the antenna with carbon tape. A surface of the ND film is at a height of 0.44 mm above the antenna. In addition to NDs, this study investigates a bulk diamond film (#3). It was synthesized using a custom-built microwave plasma chemical vapor deposition (MPCVD) system <cit.>. High-pressure and high-temperature type-Ib (100) single crystalline diamond plates were used as substrates. ^12C concentrated (>99.95%) methane gas was used as a carbon source. First, an undoped thick film with a total thickness of ∼70 μm was grown on the substrate by chemical vapor deposition (CVD). A ^15N doped CVD layer was then overgrown on the undoped film with a gas ratio of ^15N/C of 4000 ppm. An expected ^15N concentration is ∼10 ppm and a film thickness is ∼5 μm. This nitrogen density is consistent with the NV's coherence T_2 = 29 μs obtained by Hahn echo <cit.>. We fix #3 directly to the antenna with carbon tape for the measurement. A surface of the bulk diamond film is at a height of 0.73 mm above the antenna. In this study, NV centers spontaneously formed during the MPCVD process are used for characterization. We perform the present study under three different magnetic fields: a zero field (A), an environmental field (B), and a biased field (C). We apply magnetic fields for the conditions A and C. We use two coils beside and beneath the sample stage to generate magnetic fields perpendicular and parallel to the optical axis, respectively, as shown in Fig. 1(a). 
Using a tesla meter (Lake Shore Cryotronics F71), we evaluate the magnetic fields at the sample position as 6.3 μ T, 88.7 μ T, and 196.7 μ T for the conditions A, B, and C, respectively. The upper panel of Fig. 1(b) shows an optical microscope image of the spin-coated NDs ϕ50 nm (#1). The lower panel shows the PL intensity map at the spot surrounded by a red frame in the upper panel. The color bar indicates PL intensity in a unit of kilo counts per sec (kcps). The data set for #1 is obtained using the standard ODMR measurement at the red circle. As the dependence of the ODMR spectra on the optical power of the excitation light is the central topic in this study, it is important to calibrate the optical power (P_opt). We evaluate P_opt from the green laser intensity and the irradiated area with an accuracy of 10 %. The green laser intensity is measured between the objective lens and the diamond sample using an optical power meter (Thorlab, Power Meter PM100D, sensor S121C). The irradiation area is estimated as the spot size of the red luminescence from a single NV center near the surface of a high quality bulk diamond provided by H. Watanabe in AIST, Japan <cit.>. The spot size is calculated as a circle whose diameter is the full width at half maximum of the intensity distribution. Figure 1(c) presents an example of the PL intensity map from a single NV center used to determine the spot size, where the diamond surface is defined as the xy-plane. Ten PL intensity maps of a single NV center are fitted by the two-dimensional (2D) Gaussian function, and the obtained average of their full width at half-maximum, 386 ± 2 nm, is used as the laser spot diameter. The cross sections of the experimental data (markers) and the 2D Gaussian fitting (solid line) are shown in the upper side and right side panels of Fig. 1(c). Both panels show that the fits are consistent with the experimental data. All the experimental conditions in this study are compiled in Table <ref>. NDs ϕ100 nm #2' in Table <ref> indicates the data set obtained at a different location of the same sample as NDs ϕ100 nm #2. The estimated densities of nitrogen, [N], and NV center, [NV], are also given in Table <ref>. We include the previous study (Ref. <cit.>) in Table <ref> in the same cell as 2B as their measurements were carried out in an environmental geomagnetic field (∼50 μ T) using NDs ϕ100 nm supplied by Adámas Nanotechnologies. § RESULTS AND DISCUSSIONS §.§ ODMR Spectra of Nanodiamond NVs The upper panel of Fig. 2(a) is the ODMR spectrum as a function of the MW frequency obtained at P_opt=0.55 kW/cm^2 shown by markers. This result is for 2A (see Table <ref>). The vertical axis indicates the PL contrast, namely the normalized contrast of the PL intensities with and without MW. In this measurement, the swept frequency range is 60 MHz. The splitting between dips in the ODMR spectrum is due to crystal strain and electric fields that break the axial symmetry of the NV centers. The impacts of such non-axisymmetry deformation were treated in Refs. <cit.>. Below we call these factors as “deformation”. We note that similar observations for the NDs ensemble were reported before, for example, in Fig. 1(d) of Ref. <cit.>. Their shapes are generally consistent with ours, while the splitting is slightly larger than that in the present study as they applied a magnetic field of 100 μ T. Also, similar ODMR spectra obtained in a single ND were reported in Fig. 3(a) of Ref. <cit.>. 
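As a brief aside on the optical-power calibration described earlier in this section, the spot-size estimation from a PL intensity map via a 2D Gaussian fit can be sketched as follows; the array layout, initial guesses, and function names are our own illustrative assumptions rather than the actual analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(xy, amp, x0, y0, sx, sy, offset):
    """Elliptical 2D Gaussian on a flat background; xy is a (2, N) array of pixel coordinates."""
    x, y = xy
    return amp * np.exp(-((x - x0) ** 2 / (2 * sx ** 2) + (y - y0) ** 2 / (2 * sy ** 2))) + offset

def spot_fwhm(img, px_nm):
    """Fit a PL intensity map and return the mean FWHM in nm (FWHM = 2*sqrt(2*ln 2)*sigma)."""
    ny, nx = img.shape
    x, y = np.meshgrid(np.arange(nx), np.arange(ny))
    xy = np.vstack((x.ravel(), y.ravel()))
    p0 = (img.max() - img.min(), nx / 2, ny / 2, 2.0, 2.0, img.min())  # crude initial guess
    popt, _ = curve_fit(gauss2d, xy, img.ravel(), p0=p0)
    sx, sy = abs(popt[3]), abs(popt[4])
    return 2 * np.sqrt(2 * np.log(2)) * 0.5 * (sx + sy) * px_nm
```

Averaging such FWHM values over several maps, and treating the spot as a circle of that diameter, gives the irradiated area used to convert the measured laser power into P_opt.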
From now on, we focus on splitting quantitatively based on the values obtained from fitting with a double Lorentzian function. This fitting method is meaningful because it is often used for magnetometry using NVs. We will discuss the validity and limitations of this method later in Sec. <ref>. The solid line in the upper panel of Fig. 2(a) is a fitted curve. We define the difference in frequencies between the two dip values obtained by this fitting as the difference Δ. Δ is 11.5±0.2 MHz in this specific case, which is consistent with the literature values of 10–20 MHz for NDs <cit.>. We measure the ODMR spectra by increasing P_opt from 0.55 kW/cm^2. The lower panel of Fig. 2(a) shows the spectrum for 2A obtained at P_opt=38.4 kW/cm^2, which is the maximum optical power used in the present study. We discuss later that the temperature increase due to laser heating is inconsequential within the present optical power range. As in the upper panel, the markers show experimental data, and the solid curved line results from a double Lorentzian fitting. The PL contrast decreases from 2.7% at P_opt=0.55 kW/cm^2 to 0.5% at P_opt=38.4 kW/cm^2 because the increase in the optical power enhances the spin initialization rate, i.e., the transition rate from m_S=±1 to m_S = 0. The spectrum also possesses two dips, but careful inspection reveals a slight change in shape between the upper and lower panels. The dashed and solid vertical lines show the dip positions obtained by the fitting at P_opt=0.55 kW/cm^2 and P_opt=38.4 kW/cm^2, respectively. Δ is determined to be 9.4±0.3 MHz for P_opt=38.4 kW/cm^2. Thus, Δ decreases with increasing P_opt. Similar behavior was reported in Fig. 3(a) of Ref. <cit.>, suggesting that Δ of NVs in NDs actually depends on the optical power, which is usually not considered. In our case, Δ changes by approximately 2.1 MHz between the two different P_opt. Significantly, ignoring deformation, this variation corresponds to about 38 μT according to a magnetic field conversion widely used in the NV research field. Therefore, this phenomenon can be relevant in applying NVs to magnetic field measurements. The above finding is not an artifact caused by a double Lorentzian fitting. To confirm this, Fig. 2(b) presents the ODMR spectra measured at P_opt=0.55, 2.12, 4.24, 8.21, 15.2, and 31.3 kW/cm^2, which are incrementally shifted from bottom to top. The markers are the experimental data, where the spline interpolation curves are superposed by the solid lines. Since the PL contrast varies depending on P_opt, we appropriately normalize the spectra to focus only on the shape. The cross markers (+) point to the dip positions in the spline interpolation curves. Their behavior again supports that the two dips become closer for a larger P_opt. While we do not show the data, the results of the condition 2'A and the NDs of ϕ50 nm (1A) are consistent with the results of 2A. Some results are later shown in Figs. 4(d), 4(e), and 4(f). §.§ ODMR Spectra of Bulk Diamond NVs We focus on the bulk diamond film #3 to investigate whether or not the optical power dependence observed in NDs is relevant here. The upper panel of Fig. 3 presents the ODMR spectrum for the condition 3A obtained at P_opt=0.55 kW/cm^2. The horizontal axis range is 10 MHz, much smaller than that in Fig. 2(a). The obtained spectrum shown by the markers has two sharp dips, as expected for the NVs in bulk diamonds. As performed for the analysis of NDs, we fit the experimental data with a double Lorentzian function. 
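For concreteness, a hedged sketch of such a double Lorentzian fit is given below; the flat-baseline assumption, the parameter names, and the error estimate are our own simplifications, and the actual analysis may differ in such details.

```python
import numpy as np
from scipy.optimize import curve_fit

def double_lorentzian(f, c1, f1, w1, c2, f2, w2, base):
    """Two Lorentzian dips on a flat baseline (PL contrast vs. MW frequency)."""
    dip = lambda c, f0, w: c * (w / 2) ** 2 / ((f - f0) ** 2 + (w / 2) ** 2)
    return base - dip(c1, f1, w1) - dip(c2, f2, w2)

def splitting(freq, contrast, guess):
    """Return the dip splitting Delta = |f2 - f1| and a rough 1-sigma uncertainty."""
    popt, pcov = curve_fit(double_lorentzian, freq, contrast, p0=guess)
    delta = abs(popt[4] - popt[1])
    err = np.sqrt(pcov[1, 1] + pcov[4, 4])   # ignores the covariance between f1 and f2
    return delta, err
```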
We estimate the splitting between the two dips to be Δ=3.55±0.02 MHz, a value comparable to the 3.03 MHz splitting due to the hyperfine interaction in ^15N <cit.>. Presumably, the deformation is much less than 1 MHz because it is buried in this hyperfine splitting. Thus, the bulk diamond differs from NDs in that the hyperfine interaction prevails over the deformation. In addition, the resonance linewidth is significantly narrower than in the NDs. This reflects that the density of impurities, such as nitrogen impurities (P1 centers), which cause decoherence <cit.>, is low in #3. Indeed, the typical nitrogen concentration of a type-Ib diamond, the raw material of NDs, is about 100 ppm, whereas that of the single-crystal diamond in this study is about 10 ppm. Now, we discuss the ODMR spectra at increased optical powers. The lower panel in Fig. 3 shows, by the markers, the ODMR spectrum in the condition 3A obtained at P_opt=38.4 kW/cm^2. The markers are experimental data, and the solid curved line results from a double Lorentzian function fitting. As seen in NDs, the contrast decrease is again due to a larger initialization rate at larger optical power. In Fig. 3, the dashed and solid vertical lines indicate the dip positions obtained by the fitting at P_opt=0.55 kW/cm^2 and P_opt=38.4 kW/cm^2, respectively. Δ is now 3.44±0.01 MHz, smaller than Δ=3.55±0.02 MHz. As in the NDs case, Δ becomes smaller at larger optical power in the bulk diamond. Interestingly, the optical power dependence is present even when the ^15N hyperfine interaction causes the splitting. However, the reduction of Δ in the bulk diamond is much smaller than in NDs. §.§ Analysis of Splitting We systematically examine the dependence of Δ on P_opt. We start with the condition 2A. The upward triangle markers in Fig. 4(a) show the experimentally observed Δ as a function of P_opt between 0.55 kW/cm^2 and 38.4 kW/cm^2. We already showed the results of Δ at the minimum (P_opt=0.55 kW/cm^2) and maximum (P_opt=38.4 kW/cm^2) optical powers in the upper and lower panels of Fig. 2(a), respectively. Figure 4(a) clearly shows that Δ decays monotonically with increasing P_opt and saturates at P_opt≳ 15 kW/cm^2. A previous study <cit.> reported a similar dependence of Δ on P_opt. Their results are superposed in Fig. 4(a) by the markers (+). Significantly, the decaying behavior is almost the same between their results and ours, although they did not reach the optical power at which Δ saturates. It is well established that the PL intensity from an NV center, which is determined by the relaxation rate peculiar to its optical process, saturates for a large P_opt <cit.>. However, this saturation is irrelevant to the present observation, since we perform the experiment using a sufficiently small laser intensity such that the PL intensity is linear in P_opt. Figure 4(c) confirms that the PL intensity from NDs in the condition 2A is proportional to P_opt. Ref. <cit.> also treated this sufficiently small optical power region. The optical power dependence in such a very small intensity region is unexpected. Our work has quantitatively confirmed Ref. <cit.> over a wider optical power region. It was previously reported <cit.> that the linewidth of the ODMR spectrum of the NV ensemble decreases with increasing P_opt for an optical power as small as in the present study. However, they did not mention a decrease in Δ of the ODMR spectra. While we observe a systematic change in Δ, no systematic change in the linewidth is detected.
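The monotonic decay and saturation of Δ described here is quantified in the next subsection by an exponential fit with an offset; as a side note, a minimal sketch of such a fit (variable names and the initial guess are our own) could read:

```python
import numpy as np
from scipy.optimize import curve_fit

def delta_model(p_opt, A, P0, delta0):
    """Delta(P_opt) = A * exp(-P_opt / P0) + delta0 (amplitude, saturation power, offset)."""
    return A * np.exp(-p_opt / P0) + delta0

def fit_saturation(p_kw, d_mhz):
    """p_kw: optical powers in kW/cm^2, d_mhz: fitted splittings in MHz (measured elsewhere)."""
    p0 = (d_mhz.max() - d_mhz.min(), 5.0, d_mhz.min())   # crude initial guess
    popt, _ = curve_fit(delta_model, p_kw, d_mhz, p0=p0)
    return dict(zip(("A", "P0", "delta0"), popt))
```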
For more quantitative discussion, we analyze the behavior of 2A shown in Fig. 4(a) using the following exponential fit. Δ(P_opt) = Aexp(-P_opt/P_0)+Δ_0, where A, P_0, and Δ_0 are the amplitude, the saturation power, and the offset, respectively. The dotted line in Fig. 4(a) is the result of this fitting. A semi-log plot of only the first term of Eq. (<ref>) is shown in Fig. 4(b) with the same markers as Fig. 4(a). The linear variation is consistent with the exponential function. Unlike Fig. 4(a), Fig. 4(b) does not include the previous result <cit.> because no convergence value (offset Δ_0) is available. Then, how about the behavior of the bulk diamond film (the condition 3A)? Figure 4(a) shows the P_opt dependence of Δ. While the decrease of Δ is not as significant as in NDs (2A), the magnified view in the inset of Fig. 4(a) proves that an exponential decay of Δ is also present in the bulk diamond case. Figure 4(b) depicts the decaying component extracted by the fitting to Eq. (<ref>), which looks very similar to the 2A case. The fact suggests a common mechanism behind the present exponential decay of Δ in the NDs and the bulk diamond, even though different reasons cause the dip splitting. We find similar behavior in all the measured conditions at zero fields (1A, 2A, 2'A, and 3A in Table I) and obtain the parameters A, P_0, and Δ_0. Figure 4(d) shows the obtained amplitude A for the four conditions. From left to right, the bars indicate the conditions 1A, 2A, 2'A, and 3A, and the vertical axis is expressed on a semi-log scale. Comparing 1A, 2A, and 2'A, the A values are almost the same for NDs with different grain sizes. On the other hand, the bulk diamond (3A) has A, one order of magnitude smaller than those of NDs (about 1/20). Figure 4(e) shows the saturation power P_0 for different conditions. While the amplitude A significantly differs between NDs and the bulk diamond, there is relatively little difference in P_0 between the two; P_0 ∼ 3.8 kW/cm^2 for NDs and P_0 ∼ 7.4 kW/cm^2 for the bulk diamond. It is vital that the values of P_0 are close for different diamonds. The offsets Δ_0 are shown in Fig. 4(f). They reduce in the order of conditions 1A, 2A, 2'A, and 3A, which seems to coincide with the degree of deformation of NVs. We intuitively expect that the smaller the crystal size is, the greater the deformation tends to be, affecting the sensitivity of the NVs to the optical power. We come back to this fact later. With the results and analysis explained so far, we have established that the ODMR spectra of NVs depend on the excitation light power even when the power is sufficiently small. This phenomenon occurs in both NDs and the bulk diamond. The amplitude of the decay (A) largely depends on the samples, but the behavior of exponentially decaying with the optical power characterized by P_0 seems an essential feature of NVs. The quantitative establishment of the universality of this phenomenon is the main achievement of the present study. The fact also means that the excitation light power can be relevant for accurate magnetic field measurements using NVs. §.§ Possible Mechanisms We are interested in the possible causes of the observed optical power dependence. The zero-field splitting (ZFS), the coupling between the NV spin and the magnetic field, and the deformation are the most critical factors in defining the energy structure of an NV center in the ground state <cit.>. The hyperfine interaction between the NV spin and the neighboring nuclear spins is also often relevant. 
Therefore, it is essential as a starting point to investigate whether the present phenomenon is related to these four factors. This section will examine them individually and then explore other possibilities. We start with the ZFS, which might be subject to the optical power through the heating by the laser. We define the ZFS as the average of the frequencies of the two dips obtained by a double Lorentzian fit. Around room temperature at zero magnetic fields, the ZFS in the ODMR spectrum decreases linearly with increasing temperature <cit.>. The dependences of ZFS on the optical power in the conditions 1A, 2A, and 3A are shown in Figs. 5(a), (b), and (c), respectively. The figures indicate no signal of systematic change in ZFS due to the optical power. Indeed, the variation of ZFS is much smaller than the amplitude A in Fig. 4(d). Thus, heating by laser irradiation is not responsible for the present optical power dependence. We estimate the maximum temperature change in this experiment to be about 12 K since the maximum frequency shift observed is approximately 850 kHz, as shown in Fig. 5(a). Next, we discuss the influence of the magnetic field. The upper and lower panels of Fig. 6 show the ODMR spectra in conditions 2A (zero magnetic field) and 2C (biased magnetic field of 196.7 μ T), respectively [the spectrum shown in the upper is the same as that in the upper panel in Fig. 2(a)]. Both are obtained with the minimum optical power (P_opt=0.55 kW/cm^2). The markers are experimental data, and the solid curved lines are fitted by a double Lorentzian function. The dashed and solid vertical lines show the dip positions obtained by the fit for 2A and 2C, respectively. As expected from the Zeeman effect, the solid vertical lines are outside the two dashed lines, confirming that Δ increases in the magnetic field. We obtain the spectra for the conditions 2A, 2B, and 2C as P_opt is modulated. The acquired behaviors of Δ are plotted as a function of P_opt in the inset of Fig. 7(a). Due to the Zeeman effect, Δ vertically shifts from 2A to 2B to 2C. Importantly, there is no significant variation in the spectral shapes of 2A, 2B, and 2C except for this vertical shift. We obtain the offset Δ_0 by the fitting to Eq. (<ref>) and plot Δ-Δ_0 against P_opt in the main panel of Fig. 7(a). The behavior of 2A, 2B, and 2C are superposed on each other almost perfectly. We plot the amplitude A, the saturation power P_0, and the offset Δ_0 for each field obtained by the fitting in Figs. 7(b), (c), and (d), respectively. Δ_0 increases with increasing magnetic field [Fig. 7(d)], reflecting the Zeeman effect, although further quantitative analysis is complicated in this magnetic field region due to the considerable influence of deformation in NDs <cit.>. On the other hand, A and P_0 do not change significantly as shown in Figs. 7(b) and (c), respectively. Thus, in our examined regime, there is no visible correlation between the optical power dependence and the magnetic field. Third, we consider the hyperfine interaction. The optical power dependence in the bulk diamond NVs is minimal, only about 1/20 of that in the nanodiamond NVs [see Figs. 4(a) and 4(d)]. However, the contribution of the hyperfine interaction to Δ is reasonably assumed to be almost similar in the two types of diamonds. Therefore, if the hyperfine interaction was responsible for the present phenomenon, it would be difficult to explain the marked difference between both. 
Consequently, we can conclude that the hyperfine interaction is not the leading cause of this phenomenon. As the final factor, we examine the deformation. In NDs, the deformation is about 10 MHz [Figs. 2(a) and 4(a)], while the value is well below 1 MHz in the bulk diamond, as discussed in Sec. IIIB. Now, the amplitude A to characterize the optical power dependence is ∼ 2 MHz for NDs and ∼ 0.1 MHz for the bulk diamond [Fig. 4(d)]. For the former, the ratio of A to the deformation is about 2/10 = 0.2. For the latter, the ratio is at least 0.1/1 = 0.1 and is comparable to the NDs' case. The ratio of ND to bulk diamond deformation also corresponds to the ratio of nitrogen impurity density [see Table <ref>]. This suggests that either the deformation/impurity itself or the impurity-derived deformation would be responsible for this phenomenon. Although this argument is not fully quantitative, it suggests a correlation between the deformation/impurity and the optical power dependence. We infer a reasonable idea of the possible mechanism based on the deformation caused by impurities. Previous work on single NV centers indicated that the electric field from charge traps causes deformation <cit.>. This might also be the cause with the deformations in the NV ensemble case in our study. If the charge traps originate from impurities, the magnitude of the deformation will correlate with the impurity density, consistent with our observations. It is known that the charge state of impurities changes with photoionization. For example, as the optical power is increased, the time that the NV center retains its charge state decreases exponentially on the millisecond scale <cit.>. As this charge generated by photoionization moves around, the electric field would be time-averaged, suppressing deformation. The relationship between the ionization rate at thermal equilibrium and the photoionization rate determines the coefficient of the exponential change. When the optical power is sufficiently large, the electric field and crystal strain, which cannot be averaged, remain as a finite deformation. Ref. <cit.> also noted that deformation due to charge can change the shape of the ODMR spectrum to a non-Lorentzian distribution. This is consistent with the fact that the ODMR spectrum deviates from the double Lorentzian fitting, and its shape changes with optical power [see Figs. 2(a) and (b)]. Investigating both the dip position and its shape will help to elucidate the mechanism. We note further experimental and theoretical efforts are needed because many parameters could be involved in the mechanism. On the experimental side, comparing bulk samples with systematically varying impurities and deformations and investigating this optical power-dependent splitting in a single NV center with charge-induced deformation <cit.> are helpful. The magnetic field can be swept over a sufficiently wide range compared to the deformation for bulk samples. This will clarify which parameters of the ground-state Hamiltonian appear to depend on optical power. Pulsed ODMR <cit.> will provide information on the time the effect of the laser irradiation remains, which can be used to validate the mechanism. On the theoretical side, it is helpful to investigate what fitting function is appropriate to reproduce the ODMR spectral shape and what defects are candidates for photoionization. § CONCLUSION We investigate the optical power dependence of splitting of the ODMR spectra using various NV ensemble samples. 
In addition to reproducing the previous study using NDs <cit.>, we find that the optical power dependence saturates at a larger optical power than in their study. Since we also observe the same phenomenon in the single-crystal diamond, which has very few impurities and little non-axisymmetry deformation compared to NDs, we consider our observation to reflect an intrinsic property of the NV center. We quantitatively discuss the parameters that could be responsible for this phenomenon and infer that deformation is an important parameter. We point out that slow dynamics in the optical excitation and emission process of single NV centers may be responsible. The present optical power dependence can be critical for accurate magnetometry using NVs. This effect may degrade the accuracy of magnetometry using NDs by a few tens of μT. Even when using high-quality bulk diamonds, we must be careful when discussing magnetic fields of a few μT around zero field. Based on the phenomenological exponential behavior discussed here, we can minimize the degradation by introducing a sufficiently strong optical power. Also, we suggest that using diamonds with fewer impurities and less deformation can reduce the influence on accurate magnetic field measurements. Further experimental verification and theoretical discussion on deformation, impurity densities, and a comprehensive range of magnetic fields will help to identify the mechanism of this phenomenon. § ACKNOWLEDGEMENTS We thank K. M. Itoh for letting us use the confocal microscope system, and H. Watanabe for his high-quality diamond, which we used in the estimation of the spatial resolution of our system [Fig. 1(c)]. We appreciate the fruitful discussion with J. Inoue. We also thank the MEXT-Nanotechnology Platform Program "Microstructure Analysis Platform" for technical support. K.S. acknowledges the support of Grants-in-Aid for Scientific Research No. JP22K03524. K.K. acknowledges the support of Grants-in-Aid for Scientific Research (Nos. JP23H01103, JP19H00656, and JP19H05826). T.T. acknowledges the support of MEXT Q-LEAP (JPMXS0118068379), JST CREST (JPMJCR1773), JST Moonshot R&D (JPMJMS2062), MIC R&D for construction of a global quantum cryptography network (JPMI00316), and JSPS KAKENHI (Nos. JP20H02187 and JP20H05661). § REFERENCES
[MazeNature2008] J. R. Maze et al., Nature 455, 644 (2008).
[DegenAPL2008] C. L. Degen, Applied Physics Letters 92, 243111 (2008).
[BalasubramanianNature2008] G. Balasubramanian et al., Nature 455, 648 (2008).
[Taylor2008] J. M. Taylor et al., Nature Physics 4, 810 (2008).
[SchirhaglARPC2014] R. Schirhagl, K. Chang, M. Loretz, and C. L. Degen, Annual Review of Physical Chemistry 65, 83 (2014).
[Rondin2014] L. Rondin et al., Reports on Progress in Physics 77, 056503 (2014).
[Levine2019] E. V. Levine et al., Nanophotonics 8, 1945 (2019).
[Barry2020] J. F. Barry et al., Reviews of Modern Physics 92, 015004 (2020).
[AcostaPRL2010] V. M. Acosta et al., Physical Review Letters 104, 070801 (2010).
[NeumannNL2013] P. Neumann et al., Nano Letters 13, 2738 (2013).
[ToyliPNAS2013] D. M. Toyli et al., Proceedings of the National Academy of Sciences 110, 8417 (2013).
[TetienneSciAdv2017] J.-P. Tetienne et al., Science Advances 3, e1602429 (2017).
[ku2020] M. J. H. Ku et al., Nature 583, 537 (2020).
[hedrich2021] N. Hedrich et al., Nature Physics 17, 659 (2021).
[FoyAPMI2020] C. Foy et al., ACS Applied Materials & Interfaces 12, 26525 (2020).
[VanOort1990] E. V. Oort and M. Glasbeek, Chemical Physics Letters 168, 529 (1990).
[Dolde2011] F. Dolde et al., Nature Physics 7, 459 (2011).
[felton2009] S. Felton et al., Physical Review B 79, 075203 (2009).
[igarashi2012] R. Igarashi et al., Nano Letters 12, 5726 (2012).
[fu2007] C.-C. Fu et al., Proceedings of the National Academy of Sciences 104, 727 (2007).
[dreau2011] A. Dréau et al., Physical Review B 84, 195204 (2011).
[acosta2013] K. Jensen, V. M. Acosta, A. Jarmola, and D. Budker, Physical Review B 87, 014115 (2013).
[fujiwara2020] M. Fujiwara et al., Physical Review Research 2, 043415 (2020).
[ScholtenJAP2021] S. C. Scholten et al., Journal of Applied Physics 130, 150902 (2021).
[TsukamotoAPL2021] M. Tsukamoto et al., Applied Physics Letters 118, 264002 (2021).
[Tsukamoto2022] M. Tsukamoto et al., Scientific Reports 12, 13942 (2022).
[misonou2020] D. Misonou et al., AIP Advances 10, 025206 (2020).
[OgawaJPSJ2023] K. Ogawa, M. Tsukamoto, K. Sasaki, and K. Kobayashi, Journal of the Physical Society of Japan 92, 014002 (2023).
[TerajiPSSA2015] T. Teraji et al., physica status solidi (a) 212, 2365 (2015).
[Bauch2020] E. Bauch et al., Physical Review B 102, 134210 (2020).
[ohashi2013] K. Ohashi et al., Nano Letters 13, 4733 (2013).
[JelezkoPSS2006] F. Jelezko and J. Wrachtrup, physica status solidi (a) 203, 3207 (2006).
[Mittiga2018] T. Mittiga et al., Physical Review Letters 121, 246402 (2018).
[Aslam2013] N. Aslam, G. Waldherr, P. Neumann, F. Jelezko, and J. Wrachtrup, New Journal of Physics 15, 013064 (2013).
http://arxiv.org/abs/2307.04955v1
20230711011529
Joint Radio Frequency Fingerprints Identification via Multi-antenna Receiver
[ "Xiaofang Chen", "Wenbo Xu", "Yue Wang" ]
eess.SP
[ "eess.SP" ]
Joint Radio Frequency Fingerprints Identification via Multi-antenna Receiver Xiaofang Chen, Student Member, IEEE, Wenbo Xu, Member, IEEE, and Yue Wang, Senior Member, IEEE ================================================================================================================================================== In the Internet of Things (IoT), radio frequency fingerprints (RFF) technology has been widely used for passive security authentication to identify specific emitters. However, few works have taken advantage of independent oscillator distortions at the receiver side, and no work has yet considered filtering out receiver distortions. In this paper, we investigate RFF identification (RFFI) in the presence of unknown receiver distortions, where the phase noise caused by each antenna oscillator is independent. Three RFF schemes are proposed according to the number of receiving antennas. When the number is small, the Mutual Information Weighting Scheme (MIWS) is developed by calculating a weighted vote over the RFFI results at the individual antennas; when the number is moderate, the Distortions Filtering Scheme (DFS) is developed by filtering out the channel noise and receiver distortions; when the number is large enough, the Group-Distortions Filtering and Weighting Scheme (GDFWS) is developed, which integrates the advantages of MIWS and DFS. Furthermore, the ability of DFS to filter out the channel noise and receiver distortions is theoretically analyzed at a specific confidence level. Experiments in which both channel noise and receiver distortions are present verify the effectiveness and robustness of the proposed schemes. Emitter distortions, multiple independent oscillators, mutual information (MI), radio frequency fingerprinting identification (RFFI), receiver distortions. § INTRODUCTION The number of IoT access devices deployed in practical systems is rising quickly as a result of the expansion of IoT application scenarios, including the automotive industry <cit.>, healthcare <cit.>, and smart living <cit.>. According to the International Data Corporation, approximately 40 billion IoT devices will be available globally by 2025. As large numbers of IoT devices become part of daily life, network security has emerged as a growing public concern worldwide <cit.>. Traditional cryptography-based algorithms are used in most existing wireless communication systems to achieve secure authentication via upper-layer mechanisms <cit.>. However, these algorithms suffer from limitations such as assumptions of limited computational power, susceptibility to replay attacks, high communication overhead, and complexity <cit.>. In contrast, radio frequency fingerprints (RFF), a very promising non-cryptographic authentication technology, has recently gained considerable research interest because of its information-theoretic security, low complexity, and high compatibility. The concept of RFF was first introduced in 2003 <cit.>. It extracts the inherent, stable, and unique fingerprints of different emitters to distinguish their physical-layer properties <cit.>. Such fingerprints exist due to the unavoidable accuracy errors and randomness in the device production process <cit.>, which present as unintentional modulation at the emitter side, causing minor emitter distortions of the signal that are difficult to imitate.
Although this unintentional modulation is not conducive to the demodulation of the signal, it is the basis on which RFF identification (RFFI) extracts radio frequency (RF) features that are unique, stable, and intrinsic in order to accomplish special emitter identification. Current RF features can be categorized into transient and steady-state features <cit.>. For transient features, though some scholars have confirmed their feasibility <cit.>, it is difficult to determine their starting points accurately due to the extremely short transient signal, which prevents complete feature extraction. Therefore, much research focuses on feature extraction from the steady-state signal segment. For example, Q. Li et al. adopt the self-phase optical function to optimize variational mode decomposition (VMD) and suppress the modal aliasing after signal decomposition <cit.>. Y. Li et al. extract features through entropy information and a spectral feature method <cit.>. In addition to RF feature extraction, RFFI involves two further steps: signal pre-processing and signal classification. Signal pre-processing, e.g., transformation <cit.>, data cleaning <cit.>, etc., is used to improve the distinguishability of the subsequently extracted features among different emitters. In terms of signal classification, traditional classifiers, such as K-means, support vector machines (SVM), and neural networks <cit.>, are commonly adopted to classify the extracted RF features. Although the above-mentioned works have studied various methods for each step in RFFI, few of them have taken receiver distortions into account. Such distortions unavoidably affect the accurate extraction of emitter fingerprints, and thus impact the performance of RFFI <cit.>. It has been suggested that additional hardware be added to compensate for the distortions at the receiving side <cit.>, but this might introduce extra distortions that cannot be further characterized. Taking into account the impacts of receiver distortions and the added hardware, B. He et al. suggest that the performance of RFFI can be enhanced by utilizing the diversity gain of multiple received versions <cit.>. Therefore, a configuration with multiple receiving antennas is expected to provide similar benefits. On the other hand, multiple-input multiple-output technology is indispensable in 5G communication. It is pointed out in <cit.> that large and heterogeneous antenna systems equipped with a separate oscillator for each antenna, which generate oscillator distortions with independent and identically distributed (i.i.d.) characteristics, are necessary for the future. Thus, the independent oscillators add independent phase noise to the signal received at each antenna. To date, this i.i.d. property has not been exploited in RFFI studies. Considering these two facts when multiple receiving antennas are configured, we propose three schemes to enhance the robustness and recognition accuracy of RFF. The three schemes are respectively suited to scenarios with different numbers of receiving antennas. To begin with, the Mutual Information Weighting Scheme (MIWS) is proposed for when the number of receiving antennas is small. MIWS is a weighting algorithm that performs a weighted voting operation on the RFFI result at each antenna. It estimates the weights based on the mutual information (MI) between the emitter and the received signal at each antenna.
Then, when the number of receiving antennas is moderate, Distortions Filtering Scheme (DFS) is proposed to filter out the channel noise and the receiver distortions by exploiting the independent identical distribution property of the received signal. Further, the Group-Distortions Filtering and Weighting Scheme (GDFWS) is proposed to solve the performance saturation phenomenon of DFS when the number of receiving antennas is large. Finally, we use absolute accuracy and confidence level as the metrics of filtering ability to theoretically derive the minimum number of receiving antennas required to satisfy certain performance of DFS. Thereby, the specific scenario in terms of the number of receiving antennas that is applicable for each scheme is derived. The contributions of our work can be summarized as follows. 1)Firstly, when the number of receiving antennas is small, the MIWS scheme is proposed. It utilizes the MI between the transmitting signal and each receiving signal to measure the quality of the latter. Then, the weights of signals at each antenna are calculated accordingly to get the weighted voting of the RFFI results. 2)Secondly, when the number of receiving antennas is moderate, the DFS scheme is proposed to deal with channel noise and receiver distortions. To our knowledge, this is the first attempt to filter out the receiver distortions in current literature. 3)Thirdly, when the number of receiving antennas is large, the GDFWS scheme is proposed, which enjoys the advantages of both DFS and MIWS. The GDFWS uniformly divides all the antennas into groups first, then filters out channel noise and receiver distortions using DFS within each group to get robust RFFI results, and obtains the weighted voting result of all groups by MIWS. 4) Finally, based on the absolute accuracy and confidence level metrics, we theoretically derive the ability of DFS to filter out negative factors to determine the application scenario of DFS. The results simultaneously indicate the specific application scenarios of another two schemes. The remainder of this paper is organized as follows. Section II briefly reviews the signal distortion model and generalizes the uplink multi-antenna received RFF system model. Three RFFI schemes with multi-antenna receive are described in Section III. In Section IV, we theoretically analyze the impact of the number of receiving antennas on the performance of DFS. Section V shows the results of the experiments, followed by the conclusion in Section VI. § BACKGROUND AND SYSTEM MODEL In this section, we first describe the emitter distortion, channel, and receiver distortion models, and then the uplink multi-antenna received RFF system model is established. §.§ Emitter distortion model Fig. <ref> depicts a typical end-to-end transceiver link and shows the source of both the emitter and receiver distortions highlighted with red dashed lines. As shown in this figure, the distortions experienced by the transmitted signal at the emitter include filter distortions, I/Q imbalance, spurious tones, and power amplifier nonlinearities. With reference to <cit.> and <cit.>, the specific mathematical models for these distortions are given as follows. 1) Transmit shaping filter distortion: We denote the nth transmitted symbol after constellation mapping as S_n, and the symbol interval is T_s. Subsequently, S_n passes through transmit shaping filter. 
Considering the inevitable filter distortions due to the limited precision during the manufacturing process, the actual transmit shaping filter is written as g_t(t)=g_t(t)⊗υ(t), where ⊗ stands for the convolution, g_t(t) is assumed to be the ideal transmit shaping filter, and υ(t) denotes the filter distortion. The Fourier transform form of υ(t) is as follows: Υ(f)=A_Υ(f)e^jΦ_Υ(f), where A_Υ(f) and Φ_Υ(f) denote the amplitude distortion and phase distortion of the filter, respectively. Current literature generally uses the second-order Fourier series model to characterize these two distortions <cit.>, such that A_Υ(f)=ρ_0+ρ_1cos(2πf/T_A), Φ_Υ(f)=2π q_0f+q_1sin(2πf/T_Φ), where ρ_i, q_i, i=0,1, T_A, T_Φ are the parameters of the Fourier series. With the filter in (<ref>), the transmission symbols after filter shaping can be written as s(t)=∑_n=-∞^∞g_t(t-nT_s)S_n. 2) I/Q distortion: We denote the in-phase and quadrature components of s(t) in (<ref>) by s_I(t) and s_Q(t), respectively. The imbalance between s_I(t) and s_Q(t) caused by the modulator is called I/Q distortion, which is mainly manifested as a gain mismatch and quadrature error, then the signal in (<ref>) changes into x(t) =G_Is_I(t)e^j(2π ft+ζ/2)+G_Qs_Q(t)e^j(2π ft-ζ/2) =α s(t)e^j2π ft+β s^*(t)e^j2π ft, where G_I and G_Q represent the gain mismatches of these two components, ζ denotes the quadrature error, and (·)^* stands for conjugate operation. To facilitate the following discussion, we define α=1/2 (G+1)cos(ζ/2)+j/2(G-1)sin(ζ/2) β=1/2 (G-1)cos(ζ/2)+j/2(G+1)sin(ζ/2) . G=G_I/G_Q 3) Spurious tone: Affected by oscillators and other active devices, DC offset commonly exists in the signal. The presence of DC offset will result in harmonic components, which we refer to as spurious tone. Considering the impact of spurious tone, the signal in (<ref>) becomes x^(1)(t)=x(t)+∑_i=1^Cc_ie^j2π (f+f_ζ,i)t, where C is the number of harmonic components, c_i and f_ζ,i are the amplitude and frequency offset of the ith harmonic component, respectively. It is worth noting that when f_ζ, i=0 and c_i≠ 0, the ith harmonic component is also known as carrier leakage. 4) Power amplifier nonlinearities: The nonlinearity of the power amplifier causes distortions in both the amplitude and phase of the signal. We express these nonlinear distortions by the Taylor series <cit.>. Considering a Taylor series of order B, the distorted signal in (<ref>) further becomes x^(2)(t)=∑_i=0^Bb_i(x^(1)(t))^2i+1, where b_i is the ith coefficient of the Taylor polynomial. §.§ Channel and receiver distortion models In this subsection, we consider the case of single antenna reception for simplicity. This discussion can be extended to multi-antenna reception easily, which will be described in detail in the next subsection. The channel attenuation is denoted by h(t) and the additive white Gaussian noise (AWGN) is defined as w(t), then the signal received by the antenna is z(t)=h(t)x^(2)(t)+w(t). In the aspect of receiver distortions, we concentrate on the phase noise caused by the oscillator, as well as the sampling jitter and quantization error caused by the ADC, which have a greater impact on the received signal compared with other hardware modules <cit.>. These receiver distortions are modeled as follows. 1) Phase noise: Assume a phase-locked loop (PLL) is used at the receiver side for phase synchronization. As a typical signal frequency tracker, PLL has the advantages of high output stability and continuously adjustable phase, etc. 
However, it inevitably produces phase noise, under the impact of which the output signal of PLL is given as: y^(1)(t)=h(t)x^(2)(t)e^-j(2π f^' t+θ(t))+ŵ(t)e^-j(2π f^' t), where ŵ(t)=w(t)e^-jθ(t), f^' is the local oscillator frequency and θ(t) denotes its phase noise. Similar to most studies, e.g., <cit.>, we model the phase noise as a Wiener process as follows: θ(t)=1/2πχdθ(t)/dt+c(t), where χ is the 3 dB bandwidth of the phase noise power spectrum and c(t) is the noise obeying the standard Gaussian distribution. 2) Sampling jitter: Sampling jitter means the deviation of the sampling point from the optimal position when the signal is downsampled by the ADC. In the presence of sampling jitter, the signal in (<ref>) changes into y^(2)(n)=y^(1)(nT+δ(n)T), where n denotes the nth sampling point, T is the sampling period and T≪ T_s, and δ(n) denotes the relative sampling jitter, which is a random process, and |δ(n)|≪ 1. 3) Quantization error: The signal is quantized after sampling, and the quantization error is usually modeled as additive noise in the case of uniform quantization. The quantized version of (<ref>) is written as: y(n)=y^(2)(n)+△(n), where △(n) denotes the quantization error at the nth sampling point. If quantization accuracy is ϵ and the dynamic range is [-V,V], then △ (n) obeys a uniform distribution within the interval [-2^-ϵV,2^-ϵV] with a variance of 2^-2ϵV^2/3. To summarize, based on (<ref>) to (<ref>), when the effects of the emitter distortions, channel, and receiver distortions are considered, the down-converted signal at the receiver is expressed as (<ref>) written at the top of next page, where the second equation is the simplified form of (<ref>) since sampling jitter does not affect the distribution of ŵ(nT). §.§ RFF system model with multiple receiving antennas Based on the single-antenna reception model in the previous subsection, this subsection extends it to a multi-antenna reception scenario. Assuming an uplink multi-antenna received RFF system model consists of M single-antenna IoT devices and a N-antenna receiver, each antenna of which is equipped with an independent oscillator. Suppose that multiple emitters adopt orthogonal access technology to communicate with the receiver, thus this paper does not consider the interference among multiple emitters. According to the previous subsections, the down-converted signal at the ith antenna is established as: y_i(k,n)=h_i(k,n)e^-jθ_i(k,n)x̂_m(k,n) +△_i(k,n)+ŵ_i(k,n) , where h_i(k,n)=h_i(k,nT+δ(k,n)T), x̂_m(k,n)=e^-j2π f^' Tδ(k,n)x_m^(2)(k,nT+δ(k,n)T), (k,n) represents the nth sample in the kth frame. h_i(k,n) is the channel fading coefficient between the emitter and the ith antenna of the receiver, θ_i(k,n) represents the phase noise of the ith antenna at the receiver side, δ(k,n) denotes the sampling jitter, △_i(k,n) indicates the quantization error at the ith antenna, ŵ_i(k,n) is the AWGN at the ith antenna, and x_m^(2)(k,nT+δ(n)T) in the form of (<ref>) for the mth emitter. As mentioned in the previous subsection, the antenna oscillator inevitably generates phase noise. Fortunately, it is reasonable to assume the phase noise remains constant within a single frame while varies frame-by-frame in this paper <cit.>. Meanwhile, we assume a slow fading channel, so h_i(k,n) remains constant within a signal frame. Based on these two assumptions, we define h_i(k)≜ h_i(k,n) and θ_i(k)≜θ_i(k, n) for any n, and the model in (<ref>) is further simplified as y_i(k,n)=h_i(k)e^-jθ_i(k)x̂_m(k,n)+△_i(k,n)+ŵ_i(k,n) . 
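To make the reception model concrete, the following minimal NumPy sketch generates one received frame at each antenna according to y_i(k,n)=h_i(k)e^-jθ_i(k)x̂_m(k,n)+△_i(k,n)+ŵ_i(k,n), with the phase noise held constant within the frame and the quantization error drawn uniformly from [-2^-ϵV, 2^-ϵV]. All concrete values (frame length, SNR, phase-noise scale) and the QPSK stand-in for x̂_m are illustrative assumptions of this example, not part of the model above.

```python
import numpy as np

rng = np.random.default_rng(0)

def receive_frame(x_hat, n_antennas=8, snr_db=15.0, eps_bits=16, v_range=1.0):
    """Simulate one frame received by N antennas with independent oscillators.

    x_hat : complex emitter waveform for one frame (assumed to already carry the
            emitter distortions x^(2) and the sampling-jitter term).
    Returns an (N, L) complex array of received samples y_i(n).
    """
    L = x_hat.size
    # Slow fading: one complex channel coefficient per antenna, fixed over the frame.
    h = (rng.normal(size=n_antennas) + 1j * rng.normal(size=n_antennas)) / np.sqrt(2)
    # Independent per-antenna oscillator phase, constant within the frame
    # (a simple Gaussian draw stands in for the Wiener phase-noise model here).
    theta = rng.normal(scale=0.01, size=n_antennas)
    # AWGN with variance set by the SNR (signal power normalised to 1 in this toy).
    sigma_w = np.sqrt(10.0 ** (-snr_db / 10.0))
    w = sigma_w / np.sqrt(2) * (rng.normal(size=(n_antennas, L))
                                + 1j * rng.normal(size=(n_antennas, L)))
    # Uniform quantisation error in [-2^-eps * V, 2^-eps * V], applied to I and Q.
    q = 2.0 ** (-eps_bits) * v_range
    delta = (rng.uniform(-q, q, size=(n_antennas, L))
             + 1j * rng.uniform(-q, q, size=(n_antennas, L)))
    # y_i(n) = h_i e^{-j theta_i} x_hat(n) + Delta_i(n) + w_i(n)
    return (h * np.exp(-1j * theta))[:, None] * x_hat[None, :] + delta + w

# Toy emitter frame: unit-power QPSK symbols as a stand-in for x^(2) (illustrative only).
symbols = ((rng.integers(0, 2, 128) * 2 - 1)
           + 1j * (rng.integers(0, 2, 128) * 2 - 1)) / np.sqrt(2)
Y = receive_frame(symbols)
print(Y.shape)  # (8, 128)
```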
§ RFFI SCHEMES WITH MULTI-ANTENNA RECEIVER In this section, we propose three schemes to realize RFFI as summarized in Table I. These three schemes are applicable to scenarios with different numbers of receiving antennas. As shown in Table I, MIWS is first proposed in case the number of receiving antennas is small. Then, when the number of receiving antennas is sufficient to derive the statistical characteristics of the received signals, we propose DFS to filter out channel noise and receiver distortions. Finally, to address the issue of performance saturation that DFS encounters when the number of antennas is too large, GDFWS is proposed. Fig. <ref> depicts the framework of RFFI with these three schemes, where only one scheme is activated by the control signal according to the number of configured antennas, and the modules that we propose are highlighted in yellow. §.§ Mutual information weighting scheme The MI between the emitted signal and the received signal reflects their information similarity. In this paper, the larger the MI is, the less the emitter signal is affected by channel noise and receiver distortions. Based on this fact, we first estimate the MI between the emitter signal and the received signal at each antenna. Then the weight of the received signal at each antenna is set to be proportional to its corresponding MI. Finally, the RFFI result is obtained as the weighted voting of all classification results predicted from received signals. In this subsection, we calculate the MI between the emitter signal and the received signal by taking one receiving antenna as an illustration, which can be extended to other antennas easily. For simplicity, the subscript i of the ith antenna in the subsequent discussion is ignored. Based on the previous analysis, we know that the emitter signal x^(2)(t) undergoes channel fading h(t), AWGN w(t), and receiver distortions before completing the classification. To facilitate subsequent analysis, we define g(t)=h(t)x^(2)(t). Firstly, with the definition in (<ref>), we transform the signal in (<ref>) into the frequency domain and get Z(f)=G(f)+𝐖(f), where Z(f), 𝐆(f), and 𝐖(f) are the expressions in frequency domain of z(t), g(t), and w(t), respectively. According to <cit.>, the MI between 𝐆(f) and Z(f) is calculated as ℐ(Z(f);G(f)) =ℋ(Z(f))-ℋ(Z(f)|G(f)) =ℋ(Z(f))-ℋ(W(f)) =1/2ln(2π(σ_g^2+σ_w^2))-1/2ln(2πσ_w^2) =1/2ln(1+σ_g^2/σ_w^2) , where σ_g^2 and σ_w^2 are the variance of g(t) and w(t), respectively. The function ℋ(·) implements entropy calculation. Consider a special case of (<ref>), i.e., no receiver distortion d(t) exists. In this case, σ_y^2=σ_z^2, where σ_y^2 represents the variance of y(t). Then the special case of above equation is ℐ_s(Z(f);G(f)) =ℐ_s(Y(f);G(f)) =1/2ln(2πσ_y^2)-1/2ln(2πσ_w^2) =1/2ln(σ_y^2/σ_w^2) . Note that (<ref>) is only an ideal case, which does not exist in practical systems. To measure the quality of the received signal, we calculate the difference between (<ref>) and (<ref>) as follows, △ I =ℐ_s(Z(f);G(f))-ℐ(Z(f);G(f)) =1/2ln(σ_y^2/σ_w^2+σ_g^2) =1/2ln(σ_y^2/σ_w^2+σ_x^(2)^2σ_h^2) , where σ_h^2 and σ_x^(2)^2 indicate the variance of h(t) and x^(2)(t) in (<ref>), respectively. In practical applications, σ_w^2 and σ_h^2 can be derived by some Signal to Noise Ratio (SNR) estimation techniques <cit.>, and channel coefficient estimation techniques<cit.>. Additionally, σ_x^(2)^2 can also be estimated based on the received pilots. Obviously, a smaller △ I indicates fewer distortions at the receiver. 
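As a purely illustrative sketch, the MI difference △I can be evaluated per antenna as follows and then turned into the normalised voting weights that are defined next; the function and variable names are ours, and in practice the estimates of σ_w^2, σ_h^2, and σ_x^(2)^2 would come from the SNR and channel estimation techniques cited above.

```python
import numpy as np

def mi_difference(y_i, sigma_w2, sigma_h2, sigma_x2):
    """Delta I for one antenna: 0.5 * ln( var(y_i) / (sigma_w^2 + sigma_x^2 * sigma_h^2) ).

    Assumes var(y_i) >= sigma_w^2 + sigma_x^2 * sigma_h^2, i.e. the receiver
    distortions only add power on top of the ideal received signal.
    """
    sigma_y2 = np.var(y_i)  # for complex data NumPy returns E|y - E[y]|^2
    return 0.5 * np.log(sigma_y2 / (sigma_w2 + sigma_x2 * sigma_h2))

# Per-antenna Delta I for a received frame Y of shape (N, L); here unit signal and
# channel power and SNR = 15 dB are assumed for illustration:
# d = np.array([mi_difference(y, 10 ** (-15 / 10), 1.0, 1.0) for y in Y])
# Weighted voting with the inverse-Delta-I weights defined below
# (per_antenna_labels is a hypothetical vector of per-antenna classifier outputs):
# w = (1.0 / d) / np.sum(1.0 / d)
# label = np.argmax(np.bincount(per_antenna_labels, weights=w))
```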
Based on this fact, we define the weight of y_i(t) as ω_i=(1/△ I_i)/∑_j=1^N(1/△ I_j), where △ I_i denotes the MI difference at the ith receiving antenna in the form of (<ref>). We use s_i to denote the classification result obtained from y_i(t). Weighted voting over all results is then carried out with the weights in (<ref>), yielding the final RFFI result. §.§ Distortions filtering scheme Although the MIWS described in the previous subsection reduces the negative impact of channel noise and receiver distortions through diversity gain, a more direct approach is to eliminate these effects altogether. This subsection therefore proposes DFS to filter out the channel noise and receiver distortions when the number of receiving antennas is sufficiently large. Unless stated otherwise, the following derivations are given for a single-frame signal, so the frame index k is omitted. By defining ϕ_i=h_ie^-jθ_i, the overall system model in (<ref>) is rewritten as y_i(n)=ϕ_ix̂_m(n)+△_i(n)+ŵ_i(n). To improve the quality of y_i(n), DFS attempts to filter out the adverse factors, i.e., ϕ_i, △_i(n), and ŵ_i(n). First, by collecting all the sampling points of all antennas, (<ref>) is converted into the matrix form Y =Φx̂^T+Δ+Ŵ =Ξ+Δ+Ŵ, where Φ=[ ϕ_1 ϕ_2 ⋯ ϕ_N ]^T∈ℝ^N×1, x̂=[ x̂_m(1) x̂_m(2) ⋯ x̂_m(L) ]^T∈ℝ^L×1, Ξ=Φx̂^T∈ℝ^N× L, Δ∈ℝ^N× L with the nth element of the ith row being △_i(n), Ŵ∈ℝ^N× L with the nth element of the ith row being ŵ_i(n), and L is the number of sampling points in a frame. The basic idea of DFS is to recover the matrix Ξ from Y and then recover x̂ from its relationship with Ξ given in (<ref>). In doing so, the impacts of Φ, Δ, and Ŵ are expected to be eliminated. To better expose the statistical properties of the received signals, the matrix Y is rewritten as (<ref>), shown at the top of the next page, where v_ij=(ŵ_i(j)+△_i(j))/ϕ_i. Note that △_i(j) and ŵ_i(j) follow uniform and Gaussian distributions, respectively, both with zero mean, so the mean of v_ij is 0. Based on this property, taking the mean of each row of the matrix in (<ref>) gives ℰ(Y_i,.)=ϕ_iℰ(x̂^T+v_i,.)=ϕ_iℰ(x̂), where Y_i,. denotes the ith row of Y in (<ref>), v_i,.= [ v_i1 v_i2 ⋯ v_iL ], and ℰ(u) is the mean of the vector u. From (<ref>) it follows directly that ℰ(Y_i,.)/ℰ(Y_j,.)=ϕ_i/ϕ_j. Next, we reconstruct the lth row of Ξ from Y; the same procedure applies to the other rows of Ξ. Multiplying the jth row of (<ref>) by ϕ_l/ϕ_j for j=1,2,...,N, where these ratios are obtained from (<ref>) and (<ref>), turns the matrix in (<ref>) into (<ref>). Taking the column mean of the resulting Y^(l) in (<ref>) yields ℰ_c(Y^(l))=ϕ_l [ x̂_m(1) x̂_m(2) ⋯ x̂_m(L) ], where ℰ_c(U) denotes the column mean of the matrix U. Note that ℰ_c(Y^(l)) is exactly the lth row of Ξ, denoted Ξ_l,·. Repeating this procedure for l from 1 to N recovers all the rows of Ξ. Inspecting Ξ, we find that directly separating it into Φ and x̂ without any prior knowledge of x̂ is impossible. Fortunately, taking x̂_m(1) as the first symbol of the frame, we have Ξ=Φx̂^T=x̂_m(1)Φx^T, where x =x̂/x̂_m(1) = [ 1 x̂_m(2)/x̂_m(1) ⋯ x̂_m(L)/x̂_m(1) ], which can be estimated as x=1/N∑_i=1^NΞ_i,./Ξ_i,1. Clearly, x is highly correlated with x̂; a sketch of this reconstruction procedure is given below.
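The reconstruction just derived amounts to a few vectorised averaging steps. The following minimal NumPy sketch illustrates DFS under the assumptions of the derivation (zero-mean Δ and Ŵ, sufficiently large L and N); the function and variable names are ours and not taken from any existing library.

```python
import numpy as np

def dfs_reconstruct(Y):
    """Distortions Filtering Scheme: recover x (up to the first-symbol scale) from Y (N x L).

    Y follows Y = Phi x_hat^T + Delta + W_hat with zero-mean Delta and W_hat.
    Assumes the frame has a non-zero sample mean (e.g. it contains pilots),
    otherwise the row-mean ratios below are ill-conditioned.
    """
    N, L = Y.shape
    row_means = Y.mean(axis=1)                 # estimates phi_i * E(x_hat), one per antenna
    x_rows = np.empty((N, L), dtype=Y.dtype)
    for l in range(N):
        # Scale every row j by phi_l / phi_j (estimated from the row-mean ratios),
        # then average the columns: the noise/distortion terms average towards zero.
        scale = row_means[l] / row_means       # shape (N,)
        xi_l = (scale[:, None] * Y).mean(axis=0)   # estimate of Xi_{l,.} = phi_l * x_hat^T
        x_rows[l] = xi_l / xi_l[0]             # normalise by the first symbol of the frame
    return x_rows.mean(axis=0)                 # x = (1/N) * sum_i Xi_{i,.} / Xi_{i,1}

# Example with the toy frame from the reception sketch above:
# x_clean = dfs_reconstruct(Y)   # x_clean[0] == 1 by construction
```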
Since x retains all the information of x̂, it is reasonable to use x rather than x̂ for the subsequent RF feature extraction and classification without affecting RFFI performance. Note that (<ref>) and (<ref>) involve the calculation of a statistical mean, which in practice can only be approximated by averaging. To ensure that channel noise and receiver distortions are effectively filtered out, L and N must therefore be large enough. Since L is generally sufficiently large in practical applications, the filtering ability of DFS depends mainly on the value of N; this relationship is analyzed in detail in Section IV. §.§ Group-distortions filtering and weighting scheme The preceding analysis suggests that the larger N is, the smaller the difference between the averaging result and the true mean of 0. Once N is sufficiently large this difference becomes very small, and further increasing N yields no significant performance enhancement because the difference has already converged. We refer to this phenomenon as the performance saturation of DFS. To alleviate this problem when the number of receiving antennas is large, this subsection proposes GDFWS, which divides all antennas into several groups so that saturation does not occur. Fig. <ref> illustrates the overall structure of GDFWS, where the signals received by all antennas are divided into four groups for illustration; in this figure, N_1=N/2 and N_2=N/2+1. First, DFS is applied within each group to filter out channel noise and receiver distortions. The resulting signals x_i(t), i∈{1,2,3,4}, are then delivered to the feature extraction module and the weight calculation module: the former extracts RF features that are fed into the classification module to obtain one classification result per group, collected in s, while the latter calculates the corresponding weights ω. Finally, the classification results s and their weights ω are passed to the weighted voting module to obtain the final RFFI result. GDFWS thus enjoys the advantages of both MIWS and DFS in terms of diversity gain and the elimination of adverse factors. Moreover, this structure extends straightforwardly to an arbitrary number of groups, as long as the number of antennas per group remains below the level at which DFS saturates. § THEORETICAL ANALYSIS OF DFS AND APPLICABLE SCENARIO DISCUSSION In this section, we theoretically analyze the ability of DFS to filter out channel noise and receiver distortions for varying numbers of receiving antennas. The metrics we consider are the confidence level and the absolute accuracy. By revealing the relationship between N and these metrics, we determine which scheme is preferable for a given number of receiving antennas. We consider the asymptotic case L→ +∞ and study how closely the averaging operation approximates the true mean, which reveals its dependence on N. Averaging each column of (<ref>), the statistical average of the kth column is ℰ(Y^(l)_.,k) =ϕ_l·ℰ[ x̂_m(k)+v_1k ⋯ x̂_m(k)+v_Nk ]^T =ϕ_lx̂_m(k)+τ_k, where τ_k=ℰ[ ϕ_lv_1k ϕ_lv_2k ⋯ ϕ_lv_Nk ]. Here, τ_k represents the difference between the averaging result and the true mean of 0; the smaller τ_k is, the better DFS filters out the adverse factors, i.e., channel noise and receiver distortions. For simplicity, we define u_ik=ϕ_lv_ik, where i∈{1,2, ... , N}.
According to (<ref>), u_ik in the above equation can be rewritten as u_ik =ŵ_i(k)+△_i(k) =w_i(k)e^-jθ_i(k)+△_i(k), where each variable is defined as in Section II.B. To simplify the notation, we drop the index k in the following analysis, so that u_i=w_ie^-jθ_i+△_i. To obtain the distribution of τ, i.e., τ_k in (<ref>), we first analyze the distribution of u_i. From Section II.B, △_i is uniformly distributed on [-2^-ϵV,2^-ϵV]. Supposing the number of quantization bits is ϵ=16 and V=1, we have △_i∼ U[-2^-16,2^-16]; the following discussion extends easily to other values of V and ϵ. On the other hand, w_i obeys a Gaussian distribution with mean 0 and variance σ_w^2, and θ_i obeys a standard Gaussian distribution, so we readily obtain w_ie^-jθ_i∼𝒩(0,σ_w^2). According to (<ref>), σ_w^2=σ_x^(2)^2σ_h^2/10^SNR/10, where σ_x^(2)^2 and σ_h^2 are defined as in Section III.A. For illustration, we assume σ_x^(2)^2=1 and σ_h^2=1, so that the above expression simplifies to σ_w^2=10^-SNR/10. For 0 dB≤ SNR≤ 30 dB, it is clear that 2^-16≪ 10^-3≤σ_w^2 ≤1, and thus u_i=w_ie^-jθ_i+△_i≈ w_ie^-jθ_i∼𝒩(0,σ_w^2). Since the average of N samples drawn from u_i, i∈{1,2,...,N}, is also Gaussian, τ in (<ref>) obeys a Gaussian distribution; by the sampling properties of the sample mean <cit.>, τ∼𝒩(0,σ_w^2/N). Next, we discuss two performance metrics, the confidence level α and the absolute accuracy ξ. Requiring |τ| to be less than ξ with confidence level α gives P(|τ|<ξ)=∫_-ξ^ξ√(N)/√(2π)σ_we^-τ^2N/2σ_w^2 dτ=α. Substituting τ̃=√(N)τ/(√(2)σ_w), this becomes P(|τ̃|<a)=∫_-a^a1/√(π)e^-τ̃^2 dτ̃=erf(a)=α, where a=√(N)ξ/(√(2)σ_w). As a result, the relationship between the confidence level α, the absolute accuracy ξ, and the number of receiving antennas N is ξ^2 =2[erf^-1(α)]^2σ_w^2/N =2[erf^-1(α)]^2N^-110^-SNR/10. Clearly, a smaller ξ indicates that DFS filters the adverse factors more effectively. To quantify the advantage of DFS, we define the performance gain p=(ξ_1-ξ)/ξ_1=1-√(1/N), where ξ_1=√(2)erf^-1(α)σ_w is the value of ξ for N=1. We regard DFS as beneficial only when the gain exceeds a threshold p_0, i.e., p=1-√(1/N)>p_0. In this paper we set p_0=1/2, which yields N>4; hence, for N≤4 no obvious gain is obtained by employing DFS, and MIWS serves as an alternative in that scenario. Table II presents the relationship between N and ξ in (<ref>) for a confidence level α=0.95 and SNR=15dB. From this table, we note that the rate of decrease of ξ, i.e., △ξ, slows down as N increases; for N>128 it is much slower than for N ≤ 128. This agrees with the expectation in the previous section that the performance of DFS saturates once N is large enough. To avoid this saturation, at medium to high SNR we use the GDFWS scheme and divide the antenna set into groups of no more than 128 antennas each, which prevents DFS from saturating within each group. Table III gives the relationship between N and ξ at low SNR, with α=0.95 and SNR=5dB. Here the saturation of DFS appears only for N>2048, much later than in Table II, which means that DFS saturates at a smaller N when the SNR is higher.
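The dimensioning rule derived above is easy to evaluate numerically. The sketch below (the function names are ours; SciPy's erfinv implements erf^-1) reproduces the ξ–N relationship underlying Tables II and III and the minimum N for which the gain p exceeds the threshold p_0:

```python
import numpy as np
from scipy.special import erfinv

def dfs_absolute_accuracy(n_antennas, alpha=0.95, snr_db=15.0):
    """Absolute accuracy xi achieved by DFS with N antennas at confidence level alpha.

    xi^2 = 2 * [erfinv(alpha)]^2 * sigma_w^2 / N, with sigma_w^2 = 10^(-SNR/10)
    (unit signal and channel variances assumed, as in the analysis above).
    """
    sigma_w2 = 10.0 ** (-snr_db / 10.0)
    return np.sqrt(2.0 * erfinv(alpha) ** 2 * sigma_w2 / n_antennas)

def min_antennas_for_gain(p0=0.5):
    """Smallest integer N whose gain p = 1 - sqrt(1/N) exceeds the threshold p0."""
    n = 1
    while 1.0 - np.sqrt(1.0 / n) <= p0:
        n += 1
    return n

# xi versus N at alpha = 0.95 and SNR = 15 dB (cf. Table II):
for n in (4, 8, 16, 32, 64, 128, 256):
    print(n, dfs_absolute_accuracy(n, alpha=0.95, snr_db=15.0))
print(min_antennas_for_gain(0.5))  # -> 5, i.e. no obvious gain for N <= 4
```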
In practical applications, the number of receiving antennas is generally limited, i.e., it will not reach 2048, so at low SNR DFS remains preferable to GDFWS. § EXPERIMENT AND DISCUSSION In this section, experiments are provided to verify the effectiveness of the proposed schemes. §.§ Experimental setting Following the RFFI procedure, we describe the settings of the simulation experiments from five aspects in turn: emitters, channel, receiver, RF feature extraction methods, and classifiers, the details of which are as follows. 1) Settings of the emitters: The RF signal is generated according to the emitter distortion model described in Section II. Both the number of harmonic components in (<ref>) and the order of the Taylor series in (<ref>) are assumed to be 2. We consider 5 emitters with the distortion parameters provided in Table IV, where E and P are abbreviations for Emitters and Parameters, respectively, and T_1 to T_5 label emitter 1 to emitter 5. The RF signal is QPSK-modulated with an oversampling factor T_s/T=10, a symbol rate 1/T_s=1 MHz, and a signal center frequency of 1 GHz. A frame consists of 128 symbols, of which 32 carry pilots. 2) Settings of the channel: An AWGN channel is considered in our experiments, so the channel fading coefficients h_i(k) in (<ref>) are set to 1. Nonetheless, the following experimental conclusions also apply when the channel coefficients are random. 3) Settings of the receiver: The receiver distortions are generated according to the receiver distortion model described in Section II. The parameters of the sampling jitter and quantization error are set as δ(n)=0.003, V=1, and ϵ=16. The phase-noise parameter χ in (<ref>) is varied to show its influence on RFFI accuracy. 4) RF feature extraction methods: Two classical RF feature extraction methods, least mean square (LMS)-based feature extraction <cit.> and intrinsic time-scale decomposition (ITD)-based feature extraction <cit.>, are used in our experiments. The LMS-based method updates its filter weights recursively until convergence and then uses the converged weight vector as the feature; the ITD-based method first decomposes the signal by ITD and then uses the skewness and kurtosis of each decomposed component as the feature vector. 5) Classifiers: Many classifiers have been successfully applied to RFFI; however, since classifiers are not the focus of this paper, we choose the standard multi-class SVM for RFF classification. In the following experiments, the numbers of training and testing frames per emitter are 200 and 100, respectively, and each result is obtained by averaging over 1000 trials. We use ORS to denote the original scheme without distortion filtering or weighted voting, which serves as the benchmark for the proposed schemes. §.§ Experimental results of MIWS Fig. <ref> depicts the RFFI accuracy versus SNR for MIWS, with five emitters, the ITD-based feature extraction method, and a receiver equipped with four antennas whose χ values are 0.001, 0.01, 0.1, and 1, respectively. In this figure, UWS stands for the scheme that weights the signals received by all antennas equally, while ORS i denotes the scheme that directly uses the signal at the ith receiving antenna without any distortion filtering or weighting.
It is noteworthy that the ORS results in this figure show that the larger χ is, the lower the recognition accuracy, indicating that the ITD-based feature extraction method is sensitive to phase noise at the receiver. Furthermore, both MIWS and UWS outperform the ORS of every antenna, and MIWS performs better than UWS, which highlights the benefit of setting the weights according to the MI. Fig. <ref> depicts the RFFI accuracy versus SNR for MIWS when the LMS-based feature extraction method is employed, with the other settings the same as in Fig. <ref>. Unlike the widely differing ORS curves in Fig. <ref>, the ORS of each antenna maintains a similar RFFI accuracy as χ varies in this figure. This confirms that the ITD-based feature extraction method is sensitive to receiver phase noise, whereas the LMS-based method is robust to it; consequently, there is no significant difference between MIWS and UWS when the LMS-based feature extraction method is applied. We also note that both MIWS and UWS enjoy a roughly 10% accuracy gain over ORS, confirming the benefit of weighting when multiple received versions are available. §.§ Experimental results of DFS Fig. <ref> shows the RFFI results of DFS, where the LMS-based feature extraction method is adopted with M=5 and χ=0.01 for all receiver antennas. As can be seen from this figure, the superiority of DFS over ORS becomes increasingly apparent as N increases. The fact that DFS outperforms ORS in RFFI accuracy demonstrates that DFS effectively filters out channel noise and receiver distortions. Two further points are noteworthy: 1) when N=4, DFS performs worse than MIWS, suggesting that MIWS is more appropriate for N≤4; 2) at SNR=0dB the performance gain of DFS over ORS is not obvious, whereas for SNR>15dB the gain becomes significant for the different values of N, indicating that the performance gain of DFS with respect to ORS depends on the SNR. Both observations are consistent with the analysis in Section IV. Fig. <ref> and Fig. <ref> show the RFFI accuracy versus receiver phase noise when the ITD-based RF feature extraction method is used; the former varies the SNR with N=8, and the latter varies N with SNR=15dB. As seen from these two figures, the larger χ is, the worse the RFFI accuracy of ORS, whereas DFS remains stable and robust for different χ, indicating that DFS filters out receiver phase noise effectively. Fig. <ref> also shows that the performance of DFS improves with an increasing number of receiving antennas, consistent with the conclusion drawn from Fig. <ref>. §.§ Experimental results of GDFWS Figure <ref> compares the performance of GDFWS with that of DFS when the LMS-based RF feature extraction method is employed. The receiving antennas are evenly divided into four groups, and the other settings are consistent with those in Fig. <ref>. When SNR≥10 dB and N=256 or N=512, GDFWS with weighted voting outperforms DFS. However, when SNR<10 dB, performance saturation does not appear in DFS, so its performance is comparable to that of GDFWS. Moreover, when N=128, the overall performance of GDFWS is inferior to that of DFS, which suggests that GDFWS is unsuitable for scenarios with N≤128.
Overall, the results in this figure demonstrate that GDFWS with weighted voting is more effective than DFS when N>128 and SNR≥10 dB, in agreement with the theoretical analysis in Section IV. § CONCLUSION This paper investigates three RFFI schemes catering to different numbers of receiving antennas. When the number of antennas is small, we propose MIWS, which applies weighted voting to the intermediate per-antenna classification results. For a moderate number of receiving antennas, DFS is proposed, which uses statistical averaging to filter out channel noise and receiver distortions. When a large number of receiving antennas is available, GDFWS, which enjoys the advantages of both MIWS and DFS, is developed to resolve the performance saturation of DFS and further improve classification accuracy. We also study the impact of the number of receiving antennas on DFS performance and provide the following guidelines for selecting an appropriate scheme: 1) when N ≤ 4, MIWS is recommended; 2) when 4<N≤128, DFS is the best choice; 3) since the performance saturation of DFS typically occurs at high SNR, GDFWS is preferable when N > 128 and SNR≥10 dB.
http://arxiv.org/abs/2307.04805v1
20230710180052
The Dragon-II simulations -- I. Evolution of single and binary compact objects in star clusters with up to 1 million stars
[ "Manuel Arca Sedda", "Albrecht W. H. Kamlah", "Rainer Spurzem", "Mirek Giersz", "Peter Berczik", "Sara Rastello", "Giuliano Iorio", "Michela Mapelli", "Massimiliano Gatto", "Eva K. Grebel" ]
astro-ph.GA
[ "astro-ph.GA" ]
firstpage–lastpage Autonomous feedback stabilization of a cavity-coupled spin oscillator Dan M. Stamper-Kurn August 12, 2023 ===================================================================== We present the first results of the Dragon-II simulations, a suite of 19 N-body simulations of star clusters with up to 10^6 stars, with up to 33% of them initially paired in binaries. In this work, we describe the main evolution of the clusters and their compact objects (COs). All Dragon-II clusters form in their centre a black hole (BH) subsystem with a density 10-100 times larger than the stellar density, with the cluster core containing 50-80% of the whole BH population. In all models, the BH average mass steeply decreases as a consequence of BH burning, reaching values ⟨ m_ BH⟩ < 15 M_⊙ within 10-30 relaxation times. Generally, our clusters retain only BHs lighter than 30 M_⊙ over 30 relaxation times. Looser clusters retain a higher binary fraction, because in such environments binaries are less likely disrupted by dynamical encounters. We find that BH-main sequence star binaries have properties similar to recently observed systems. Double CO binaries (DCOBs) ejected from the cluster exhibit larger mass ratios and heavier primary masses than ejected binaries hosting a single CO (SCOBs). Ejected SCOBs have BH masses m_ BH = 3-20 M_⊙, definitely lower than those in DCOBs (m_ BH = 10-100 M_⊙). methods: numerical – galaxies: star clusters: general – stars: general, black holes § INTRODUCTION Massive star clusters in the range (10^4-10^6), like globular clusters or young massive clusters, represent galactic repositories of stellar compact objects, and are ideal laboratories to study the interplay of stellar evolution and dynamics. Several hundreds of stellar black holes (BHs), neutron stars (NSs), and white dwarfs (WDs) are expected to form in a typical massive cluster. In the last decade, it became clear that the fraction of BHs that massive clusters can retain is much larger than previously thought, as suggested by numerous theoretical and numerical works <cit.>, providing support to the crescent number of observations of stellar BH candidates in Galactic clusters <cit.>. The progress in stellar evolution of massive stars <cit.>, partly triggered by the discovery of gravitational-wave (GW) emission by merging BH and NS binaries <cit.>, has completely changed our understanding of BHs. Stellar models demonstrated that the evolution of single massive stars is significantly influenced by the possible development of so-called pair instability supernovae (PISN), which causes the complete disruption of stars that develop an He core with a mass of M_ He = 64-135, and pulsational pair instability supernovae (PPISN), a mechanism that leads to an enhanced mass-loss in stars with a He core mass of M_ He = 32-64. This leads to a maximum stellar BH mass in the range m_ BH, max = (40-60), depending on the theoretical model adopted and the stellar metallicity. Direct consequence of these two processes is the well known upper-mass gap of BHs, a region of the mass-spectrum where no remnants are expected <cit.>. The boundaries of the upper-mass gap are highly uncertain and depend on the adopted stellar evolution model and metallicity <cit.>. Only stars with a zero age main sequence mass beyond M_ ZAMS > (200-250) can avoid PISN and, depending on their metallicity, directly collapse to an intermediate-mass BH with little mass loss in the process <cit.>. 
Stellar collisions might lead to the formation of BHs in the upper-mass gap <cit.>, thus suggesting that star clusters could be perfect laboratories to form mass-gap BHs <cit.>, but it is unclear how the stellar merger frequency depends on the cluster initial properties <cit.> or the stellar conditions at merger <cit.>. More in general, the formation of a population of compact objects can significantly affect star cluster dynamics. Massive stars and BHs rapidly sink into the cluster centre via mass-segregation, possibly forming a massive subsystem on a core-collapse timescale <cit.> which can contract and determine the onset of runaway stellar collisions if the time does not exceed the stellar evolution timescale <cit.>. The runaway growth of a massive star can be hampered by the formation of tight binaries that supply energy to the cluster core, cause BH ejection, deplete the cluster's BH reservoir, and eventually kick each other out via super-elastic encounters <cit.>. The competing effect of binary energy supply and stellar collisions likely depends on the cluster mass, density, metallicity, the fraction of primordial binaries, the initial mass function and its boundaries, the natal kicks of BHs and NSs, and the compact object mass spectrum. Typically, the exploration of a tiny part of such parameter space is performed with numerical models capable of simultaneously accounting for stellar dynamics and evolution, either via direct N-body <cit.> or Monte Carlo techniques <cit.>. Direct N-body simulations offer most likely the highest level of accuracy in terms of stellar dynamics modelling, but their computational cost forced the vast majority of works in the literature to focus on star clusters with less than a few × 10^5 stars and/or with a relatively small fraction of primordial binaries <cit.>, with a few notable exceptions. For example, several works have explored the impact of a large primordial binary fraction, up to 100%, on the dynamics of isotropic <cit.> and anisotropic <cit.> low-mass star cluster models, i.e. with N < 20,000, with equal-mass stars, and recently in intermediate-mass GCs, i.e. N∼ 10^5 <cit.>. With regards to simulations tailored to represent massive globular clusters, the DRAGON simulations remain the only one that exploited 10^6 particles <cit.>. Since the development of such pioneering simulations, and especially after the discovery of GWs, numerical tools underwent major upgrades in terms of stellar evolution and treatment of relativistic binaries. In this work, we present the simulation database, a suite of 19 direct N-body simulations performed with the code[<https://github.com/nbody6ppgpu/Nbody6PPGPU-beijing>] representing star clusters with N=(0.12-1)× 10^6 stars, half-mass radius densities in the ρ_h = 1.3× 10^4 - 6.9 × 10^6 M_⊙ pc^-3 range, and a fraction f_ 2b = 0.10-0.33 of stars initially paired in primordial binaries. This work, which is the first one of a series, focuses on the evolution of single and binary BHs and compact objects in massive and dense star clusters, paying particular attention to the relation between the BH population (mass, average BH mass, density) and the cluster properties (mass, radius). Our models explore a portion of the parameter space still uncharted by direct N-body simulations, thus complementing previous works that either rely on Monte Carlo simulations or exploit star cluster models with old stellar evolution recipes or a significantly smaller number of stars. 
The paper is organised as follows: Section <ref> describes the main properties of the clusters and the improvements integrated in the code; Section <ref> presents our main results in terms of overall star cluster evolution (Section <ref>), main properties of single and binary compact objects (Sections <ref> - <ref>), and the possible implementation of N-body outputs into semi-analytic tools (Section <ref>); whilst Section <ref> is devoted to summarise the main outcomes of our work. § NUMERICAL METHODS All the models are carried out exploiting the code <cit.>, which represents the current state-of-the-art of direct N-body codes optimised to exploit GPU-accelerated high-performance supercomputing <cit.> altogether with several recently developed codes, like Petar <cit.> or Bifrost <cit.>. belongs to a long-standing family of direct N-body integrators initiated by Sverre Aarseth and developed for almost 50 years <cit.>. implements a 4th-order Hermite integrator with individual block-time steps <cit.> and sophisticated algorithms for close encounters and few-body dynamics, namely the Kustaanheimo-Stiefel (KS) regularisation <cit.>, the Ahmad-Cohen (AC) scheme for neighbours <cit.>, and algorithmic chain regularisation <cit.>, which enables us to closely follow the evolution of binaries with periods 10^-10 times smaller than the dynamical timescales of star clusters, which typically exceed O(10) Myr. In the last few years, the code underwent a series of major upgrades related to the treatment of relativistic compact objects <cit.>, the implementation of flexible stellar evolution recipes <cit.>, and the inclusion of a dedicated treatment for spins <cit.>. Here, we expand the possible choices for BH natal spin distribution and implement relativistic recoil for post-merger remnants. In the following, we briefly summarize the features of the code that are most relevant for this work, and discuss the newest upgrades that we implemented into the code and use here for the first time. §.§ Stellar evolution implements stellar evolution for single and binary stars via the and routines <cit.>, which we heavily updated to include up-to-date prescriptions for the evolution of massive stars. We refer the reader to <cit.> for a comprehensive discussion about the updated stellar evolution encoded in . In this work, we adopt the level-B of stellar evolution as defined in <cit.>. This implies that our models take into account the formation of electron-capture supernovae (ECSNe, following ), the delayed SN scheme <cit.>, and the development of pair-instability (PISN) and pulsational pair instability supernovae (PPISN) <cit.>. For the formation of compact objects, we adopt mass loss from <cit.> with additional metallicity-dependent correction factors taken from <cit.> and a dedicated treatment for mass loss of hot and massive H-rich O/B stars <cit.>. The adopted stellar evolution models imply that the maximum BH mass attainable by massive stars with zero-age main-sequence mass <150 is m_ BH, max = 40.5 <cit.>. The BHs falling in the so-called upper mass-gap can still form via stellar collisions, accretion of stellar material onto stellar BHs, and BH-BH mergers, as we discuss in our companion papers. Natal kicks for NSs forming via ECSNe, accretion induced collapse (AIC), and merger-induced collapse (MIC) are drawn from a Maxwellian distribution with dispersion 3 km/s <cit.>, whilst for all other NSs we adopt a Maxwellian distribution with dispersion 265 km/s <cit.>. 
This latter value is adopted also for BHs, but the kick amplitude is reduced by a factor that accounts for the amount of fallback material <cit.>. For binary stars, we model common envelope evolution via the parametrised α_ CE-λ_ CE scheme, according to which it is possible to regulate the fraction of orbital energy injected into the envelope (α_ CE) and to scale the binding energy of the envelope by a factor λ_ CE in a way similar, but not equal, to the one followed by <cit.> <cit.>. In this work, we adopt α_ CE = 3 <cit.>. §.§ Dynamics of compact objects In particularly dense clusters, stellar interactions can trigger collisions among stars and/or compact objects. The aftermath of such collisions is still a poorly understood process that can crucially affect the formation and evolution of stellar BHs. Whilst the outcome of stellar mergers is better understood, also thanks to recent detailed hydrodynamical simulations coupled with stellar evolution models <cit.>, it is still unclear how much mass a massive star can accrete onto a stellar BH. Several works have shown that in the case of a star with a mass ∼ (1-10) merging with a stellar BH, there is little accretion as most of the energy is radiated away via jets, although the mechanism is highly uncertain and likely depends on the star structure and evolutionary stage <cit.>. Hydrodynamical simulations of star–BH close encounters have shown that up to 70% of the star mass remains bound to the BH, but energy arguments suggest that even a tiny amount of accreted matter, O(10^-3-10^-2) would suffice to evaporate the accretion disk and halt the BH growth <cit.>. Nonetheless, recent simulations modelling the common envelope phase of a tight star–BH binary have shown that the BH accretes the stellar core and expels the envelope, a process accompanied by a SN-like transient and spin-up of the BH to nearly extreme values regardless of the initial spin <cit.>. In multiple main-sequence star collisions, the merger product is expected to have a compact core and a tenuous envelope with densities as low as 10^-10 g cm^-3 <cit.>. Therefore, if: a) most of the merger product mass is in the core <cit.>, and b) the core can efficiently feed the BH <cit.>, it is reasonable to assume that a BH would accrete a significant fraction of it. Given the aforementioned uncertainties, in we parametrise the outcome of star-BH collisions via the fraction of star mass accreted onto the BH, f_c <cit.>. Throughout this paper we adopt f_c = 0.5. Natal spins are another poorly known property of stellar BHs. implements the so-called “Geneva”, “MESA”, and “Fuller” models <cit.>, and four additional choices implemented in this work, namely: zero-spins, uniform spin distribution, Gaussian spin distribution with mean value χ = 0.5 and dispersion σ_χ = 0.2, and a Maxwellian distribution with dispersion σ_χ = 0.2. also features a treatment for compact binary mergers based on an orbit-averaged formalism <cit.>, which enables us to follow the formation and evolution of in-cluster compact binary mergers, a feature implemented in a number of recent works modelling young star clusters <cit.>. In this work, we present the implementation of three new features of the code: mass and spin of the merger remnant, calculated via numerical relativity fitting formulas <cit.>, and the recoil kick imparted by asymmetric GW emission promptly after merging events <cit.>. We follow the implementation depicted in our previous works <cit.>. 
The recoil velocity reads v⃗ = v_mê_⊥,1 + v_⊥(cosξ ê_⊥,1 + sinξ ê_⊥,2) + v_∥ê_∥, with v_m = Aη^2 √(1-4η) (1+Bη), v_⊥ = Hη^2/(1+q)(S_2,∥ - q S_1,∥), v_∥ = 16η^2/(1+q)[ V_11 + V_A Ξ_∥ + V_B Ξ_∥^2 + V_C Ξ_∥^3 ] × | S⃗_2,⊥ - qS⃗_1,⊥| cos(ϕ_Δ - ϕ_1). Here η≡ q/(1+q)^2 is the symmetric mass ratio, Ξ⃗≡ 2(S⃗_2 + q^2 S⃗_1) / (1 + q)^2, and the subscripts ⊥ and ∥ mark the perpendicular and parallel directions of the BH spin vector (S⃗) with respect to the direction of the binary angular momentum. We adopt A = 1.2 × 10^4 km s^-1, B = -0.93, H = 6.9× 10^3 km s^-1, ξ = 145^∘ <cit.>, V_11 = 3677.76 km s^-1, and V_A,B,C = (2.481, 1.793, 1.507)× 10^3 km s^-1. The quantity ϕ_Δ represents the angle between the direction of the infall at merger (which we randomly draw in the binary orbital plane) and the in-plane component of the quantity Δ⃗≡ (M_a+M_b)^2 (S⃗_b - qS⃗_a)/(1+q), while ϕ_1 = 0-2π is the phase of the binary, extracted randomly between the two limiting values. §.§ Massive star cluster models with up to one million stars We generate the 19 star clusters with the updated software <cit.>, as described in <cit.> and <cit.>. All star clusters are modelled via <cit.> dynamical models with a central dimensionless potential well W_0 = 6, and are characterised by three values of the half-mass radius, R_ = 0.47, 0.80, 1.75 pc, four values of the initial number of stars, N = (1.2, 3, 6, 10)× 10^5, and two values of the primordial binary fraction, as described below. All clusters have the same metallicity Z = 0.0005, a value typical of several clusters proposed to host a dense subsystem of stellar BHs, like NGC3201, or a central intermediate-mass black hole (IMBH), like NGC6254 <cit.>. All simulations were conducted on the Juwels BOOSTER supercomputer and the GRACE HPC workstation over a ∼ 2 yr timespan. Eventually, the whole database consists of almost 35 Tb of data. Stellar masses are drawn from the <cit.> initial mass function limited between m_* = 0.08-150, which implies an initial average stellar mass of ⟨ m_* ⟩≃ 0.59. The corresponding initial mass and density scales of the clusters are M_c = (0.7-5.9)× 10^5 and ρ_c ≃ 1.3× 10^4 - 6.9 × 10^6 pc^-3, respectively. All clusters move on a circular orbit at a distance of 13.3 kpc from the centre of a galaxy whose gravitational potential is modelled via a simple Keplerian potential assuming a total galaxy mass of M_g = 1.78× 10^11. As a consequence, our clusters initially have a tidal radius in the range R_ tid = 67-138 pc and can all be considered underfilling systems, so the external gravitational field has a smaller impact on the cluster evolution than internal dynamics, at least at the beginning. The clusters would underfill their Roche lobe even on a rather extremely eccentric orbit, e.g. e = 0.9. We assume that a fraction of the total number of stars is initially paired in a primordial binary system. Following in , we define the binary fraction as the ratio between the number of binaries and the sum of single stars and binaries, f_b = n_b/(n_s+n_b). We set f_b = 0.05-0.2 depending on the cluster model, as summarized in Table <ref>. Our simulation grid contains two sets that differ only in f_b, so their comparison could unveil effects triggered by primordial binary dynamics. Note also that our definition of f_b implies that the fraction of stars in binaries over the total is f_ 2b = 2f_b/(1+f_b)= 0.10-0.33.
Binaries are initialised assuming the same mass function of single stars and a uniform mass ratio distribution in the range q=0.1-1 for stars heavier than m_*>5 or random pairing for the lighter ones <cit.>. Following previous works on the same topics, we adopt a thermal distribution of the eccentricity and a semi-major axis distribution flat in logarithmic values, with an upper limit set to 50 AU and a lower limit set by the sum of the stars' radii <cit.>. In the majority of the cases, for each value of R_ and N we run two simulations with different random seeds to explore possible dependencies on the randomness of the star distribution. The only exception is the case R_ = 0.47 pc and N = 300k stars, which was limited to only one model because of the available computational time. The simulations are performed until either the average mass of stellar BHs falls below ⟨ m_⟩≲ 15, no BHs with a mass above 30 are retained in the cluster, or the simulated time exceeds at least one relaxation time <cit.>, which can be expressed in the form <cit.> T_ rlx = 282 Myr1/m_* lnγ_n N√(M_c/10^5)(R_/1 pc)^3/2, where γ_n = 0.11-0.4 for a monochromatic mass spectrum <cit.> but it can be as low as γ_n=0.02 for a multi-mass mass spectrum <cit.>. These choices result in a physical simulated time ranging between T_ sim∼ 0.1-2.3 Gyr and lead to an optimal balance between the computational cost of the simulations and the portion of parameter space that can be explored. Table <ref> summarizes the main properties of models. As sketched in Figure <ref>, in comparison to the most recent studies based on N-body <cit.> and Monte Carlo simulations <cit.>, the clusters occupy a region of the N-ρ_h plane mostly populated by Monte Carlo simulation grids. This, coupled with the fact that simulations with N>10^5 stars usually adopt a binary fraction <20%, makes our simulations an unprecedented grid of models that complements, and further expands, the phase space accessible with direct N-body models. § RESULTS §.§ Star cluster evolution The clusters were originally devised to explore compact object dynamics, compact binary mergers, and intermediate-mass black hole build-up in dense star clusters, thus they are not meant to be representative of any observed cluster. Nonetheless, it is interesting to compare in Figure <ref> the time evolution of the modelled mass and half-mass radius with relatively young, i.e. typical ages 0.1-1 Gyr, massive star clusters in the Milky Way (MW), the Small (SMC) and Large Magellanic Cloud (LMC), M31 <cit.>, the Henize 2-10 starburst dwarf galaxy <cit.>, and the M83 galaxy <cit.>. Over the simulated time, our models overlap with observed clusters, thus indicating that the adopted initial conditions lead to numerical models that can represent one possible evolutionary pathway of some observed clusters. We find that the mass and half-mass radius evolution is well described by the following relations: M_ cl(t) = M_ cl,0[1 + α_M(t/T_ rlx)^-β_M], R_(t) = R_,0[1+t/α_R T_ rlx]^β_R. The values of the fitting parameters, which are summarised in Table <ref>, are independent of the initial cluster mass, and weakly depend on the initial value of the half-mass radius. This owes to the fact that the mass-segregation time scales with M_c^1/2 R_^3/2, thus it is mostly affected by the choice of the half-mass radius. Figure <ref> shows the ratio between the final and initial values of R_ as a function of the simulated time, normalised to the initial relaxation time. 
The plot clearly highlights how the cluster expansion depends only on the dynamical age of the cluster, regardless of the initial cluster mass. By the end of the simulations, our clusters have typically lost ∼ 25-50% of their initial mass and their radius has expanded by a factor of 1.5-10, thus implying a reduction of the density at the half-mass radius by up to four orders of magnitude and a reduction of the velocity dispersion of around 1-1.5 times. The drop in density and velocity dispersion crucially affects the rates at which dynamical interactions take place. A thorough comparison among simulations and the models discussed in the past literature is made hard by the many different assumptions of previous works, like the use of equal-mass stars to represent the cluster, the different binary fraction, the properties of the primordial binary population, the lack of a dedicated treatment to deal with compact binaries, and the use of outdated prescriptions for the evolution of massive stars (m_ ZAMS > 50). In order to test the new features of the code, we have carried out an extensive comparison of the evolution of star clusters with 110,000 stars in N-body and Monte Carlo simulations in our companion paper <cit.>, where we have shown, among other things, that N-body models of the same clusters seem to evolve toward sparser configurations compared to Monte Carlo models with large tidal radii simulated with the MOCCA code. This difference is likely due to the different criteria used to identify escapers in the two methods, which can lead to an early removal of escaping stars in MOCCA simulations compared to . §.§ Stellar and compact object binaries Mass-segregation of the most massive stars enhances strong dynamical interactions, which can trigger the ejection of the tightest binaries, the ionisation of the loosest ones, and the formation and hardening of new binaries. In the clusters, the processes responsible for the formation and disruption of binaries counterbalance efficiently, determining a slow variation of the overall binary fraction. As shown in Figure <ref>, the binary fraction decreases by a small fraction, down to f_b,fin∼ 0.16-0.18 in models starting with f_b=0.2 and to f_b,fin=0.04-0.05 in models with f_b = 0.05. Interestingly, this variation in the binary fraction is similar, within the simulation time, to results obtained for lower-N cluster simulations <cit.>. The decrease of the binary fraction is mostly due to the disruption of the softest binaries in the cluster and, for a small fraction (< 5%), to hard binaries that are ejected in strong dynamical interactions. These binaries have typical semi-major axes broadly distributed in the 10^-2-5× 10^2 AU. For the sake of comparison, Figure <ref> shows the initial period-mass distribution and mass-ratio of the population of primordial binaries in our models. Figure <ref> shows the distribution of the ratio between the semi-major axis of ejected binaries and the hard-binary separation, both measured at the moment of the ejection, and the ejection velocity distribution for two different simulations. The plot makes clear that the vast majority of ejected binaries are hard and that this population is dominated mostly by binaries with a mass m_ bin < 2. The velocities of the ejected binaries generally remain in the range of 1-100 km s^-1, too small compared to the circular velocity of the Galaxy to permit the identification of these escapers as former cluster members. 
The upper panel of Figure <ref> shows the variation of the fraction of binaries normalised to the total number of stars in a given mass bin and at a given time. Initially, around 35-50% of all stars with a mass above 20 are initially binary members, with the maximum percentage achieved for stars heavier than 100. However, the population of heavy objects is rapidly depleted (note that t/T_ rlx = 0.22 corresponds in this case to t = 18.8 Myr) owing mostly to stellar/binary evolution, which causes a sharp drop in their number. The maximum stellar mass keeps decreasing over time, whilst a small population of binaries with components in the 5-100 develops – clearly owing to the formation of binaries with one or two BHs. The mass distribution of objects in binary systems, shown in the lower panel of Figure <ref>, highlights that the number of binaries with at least one component heavier than 10 is relatively small compared to the total number of objects in binaries. Assuming initially N=120,000 stars and f_b=0.2, we see that less than 1,000 binaries contain a component with a mass m_* > 10, most of them being former components of a primordial binary. The progenitors of compact objects, which are the most massive stars and stellar binaries in the cluster, have already sunk into the cluster centre when compact objects form. Therefore, to dissect the properties of compact binaries in clusters, we focus on binaries forming within the cluster half-mass radius, calculated along the cluster evolution. Figure <ref> shows the number of binaries with a WD, NS, or BH as a function of time for all models. The population of binaries containing at least one WD (dWDs), N_ dWD, depends on the half-mass radius and binary fraction. At fixed half-mass radius, the number of binaries with a WD significantly decreases at decreasing f_b, because most of these binaries are of a primordial origin. In fact, at fixed N stars and R_, the ratio between the number of dWDs is 4-5 times higher in models with f_b=0.2 compared to those with f_b=0.05, thus comparable to the ratio between the initial amount of primordial binaries in one case or the other. At fixed value of f_b, instead, the smaller the half-mass radius, the smaller is the number of dWDs. In general, by the end of the simulations we find N_ dWD≃ 200-700 dWDs per cluster. The amount of binaries with a WD monotonically increases over the simulated time, highlighting the competition between WD depletion via dynamical encounters and the formation of new WDs, mostly via binary stellar evolution <cit.>. The evolution of the number of binaries with a NS (dNS) shows two clear peaks at 20 and ∼ 100 Myr. These peaks correspond to the formation of NSs from stars in the high-end (the first) and low-end (the second) of the NS progenitor mass range. The drop after each of the peaks is due to NS natal kicks, which cause the ejection of a substantial fraction of NSs from the parent cluster. The width of the peaks is related to the time needed for the NS to leave the cluster, i.e. when their distance from the cluster centre exceeds twice the tidal radius. After the second peak, the number of binaries with a NS decreases in all simulations, regardless of the initial conditions. We find that the largest value of N_ dNS is reached in the case of R_=1.75 pc, f_b=0.2, and N=600k. At fixed value of R_ and N we find that a larger initial binary fraction leads to a more numerous population of binaries with a NS, around 50% more for models with f_b = 0.2. 
At fixed values of N and f_b, the number of binaries with a NS increases with increasing R_h, because in denser clusters it is more likely that massive stellar binaries either are ejected or merge before stellar evolution becomes dominant. The population of binaries with a BH (dBH), similarly to those with a NS, is characterised by two formation peaks, one at around 10 Myr, driven by stellar evolution, and another at later times driven by dynamics. The number of binaries with a BH, N_dBH, in the primary peak depends on the initial number of stars – the larger N_0, the larger N_dBH – whilst the number in the secondary peak depends on both the half-mass radius and binary fraction, although it is hard to discern the effects of different initial conditions in this case. §.§ Ejection of single and double compact objects Over the simulated time, all clusters lose around 20–70 single BHs, depending on the cluster initial conditions, and 10–70 binaries containing either one or two compact objects. Figure <ref> shows the mass distribution of ejected single BHs, which is characterised by two peaks, one at m_BH∼ 3 M_⊙ and another at m_BH∼ 25 M_⊙, and a tail that extends up to m_BH∼ 10^2 M_⊙. The first peak is due to the natal kicks of NSs and low-mass BHs, with masses in the range m_BH = 2.5-6 M_⊙, and develops in the first 10–50 Myr, whilst the secondary peak is due to dynamical interactions[In our simulations the minimum mass allowed for BHs is m_BH,min = 2.5 M_⊙]. The population of ejected binaries hardly depends on the cluster initial conditions. Therefore, for the sake of simplicity, we gather the ejected binaries from all simulations to obtain a statistically larger sample. In the following, we distinguish between binaries containing two compact objects, labelled as DCOB, and those containing one compact object and a star, labelled as SCOB. Figure <ref> shows the component mass, semi-major axis, and eccentricity distributions of the ejected binaries in all the clusters. Around 94% of the ejected binaries are primordial. A clear difference between double and single compact object binaries arises from these figures. In total, we find 229 ejected DCOBs of both dynamical (144) and primordial (85) origin. The DCOBs exhibit a similar mass distribution for the primary and the companion, characterised by a plateau in the range m_1,2 = 2-20 M_⊙ and a clear peak at m_1 ∼ 45 M_⊙ for the primary and m_2 ∼ 27 M_⊙ for the companion. The resulting mass-ratio distribution is quite peculiar, with a clear dominance of DCOBs with a mass ratio q>0.6, owing to the tendency of dynamical interactions to pair objects of comparable mass. The eccentricity distribution is dominated by a peak around 0, caused by a sub-population of primordial binaries that underwent the common envelope phase (64.7%), and a nearly flat distribution in the range e=0.5-1. Additionally, we find 375 ejected SCOBs, the vast majority of which come from primordial binaries (353), with a small contribution from dynamically assembled systems (22). The mass distribution of the compact objects in SCOBs peaks at m_CO∼ 2-4 M_⊙, in the range of NSs and small BHs, clearly smaller than the mass distribution of the stellar companions, which peaks at 10 M_⊙ with a secondary peak at ∼ 0.3-0.5 M_⊙. The binary mass-ratio distribution of SCOBs clearly differs from that of DCOBs, showing a peak at q∼ 0.2 and a decrease toward larger values.
The compact object in the SCOBs is mostly a low-mass BH (200) – typically with a mass m_ BH<10 (173) – or a NS (173), and in only two cases a ONeWD (2). The stellar companion is a main-sequence star in the vast majority of the cases (353), followed by core He burning stars (20) (all with a primary weighing <5), and 2 naked He main-sequence (MS) star. Stellar companions in the MS phase are relatively massive: 18 of them have a mass m_ MS < 1, 245 have a mass in the range 1<m_ MS/<10, 74 in the range 10<m_ MS<20, and just one with a mass m_ MS = 29. All stars in the CHeB phase have a mass in the m_ CHeB = 5-16 range and are paired with an object lighter than m_ CO < 5, all of them come from primordial binaries. Focusing on DCOBs, we find a few peculiar and interesting systems. Among all ejected BBHs only 5 merge within a Hubble time, because most BBHs were ejected when the density and velocity dispersion of the cluster had already dropped due to its expansion and mass loss. In two cases, the ejected BBH contains an IMBH with mass either M_ IMBH = 120 or 350. In five cases, instead, we find an ejected BBH with a merging time smaller than a Hubble time. Table <ref> summarises the number of ejected single and binary BHs, and of BBHs and BH-IMBH binaries that merge within a Hubble time. §.§ Black hole – main sequence star binaries The sample of known BH–MS star systems has significantly grown over the last few years <cit.>. Some of the BHs observed in a BH–MS binary appear to reside in star clusters both in the Milky Way <cit.> and the Large Magellanic Cloud <cit.>, whilst others appear to be in the Galactic disc <cit.>. It is an open question whether these BH–MS systems come from primordial or dynamically assembled binaries. In the case of a dynamical origin it is also unknown whether the stellar companion captured the BH or its progenitor. In these regards, the models offer us a novel way to look for BH–MS binaries in simulated clusters and identify possible properties of BH–MS binaries formed through different channels. Since the cluster database is relatively small and limited to a single metallicity, we cannot perform a comprehensive comparison between observed and simulated BH–MS binaries. Nonetheless, it is illustrative to qualitatively compare the properties of BH–MS binaries formed in models and the observed one. For example, models permit us to dissect the population of BH–MS binaries into those forming inside the cluster, some of which have a lifetime much shorter than the cluster life and are disrupted via interactions with other cluster members, or that have been ejected from the cluster. Figure <ref> shows the component masses, period, and eccentricity of in-cluster and ejected BH–MS binaries. We assume that in-cluster binaries are those forming at any time over the simulated time, therefore the same binary or one or both components can appear multiple times in the plot. We see that in-cluster binaries are markedly different from ejected binaries. The latter can be divided in two sub-classes. The first sub-class exhibits a short period (P<0.1 day) and an almost null eccentricity, e ∼ 0. Binaries in this sub-class are characterised by a BH with mass m_ BH < 10 and a MS star with a mass in the 2-10 range. They originate from binary evolution, and, in particular, underwent a common envelope phase that shrank the semi-major axis and damped the eccentricity of the binary. The ejection engine of these binaries is a SN explosion. 
The second sub-class, instead, comprises heavier BHs (m_ BH = 10-100) and lighter MS stars (m_ MS < 1), and is characterised by eccentricities in the range e = 0.2-1, indicating that these binaries come from dynamical interactions sufficiently strong to eject the binary from the cluster. In-cluster BH–MS binaries can contain BHs and MS stars significantly heavier than the ejected binaries and are characterised by longer periods (P>10 d) compared to ejected binaries. Most in-cluster binaries with a period P≲ 10^3 d have zero eccentricity, whilst practically all those with a longer period have eccentricity >0.1 and up to extreme values. From Figures <ref>, it is evident that in-cluster binaries exhibit a peculiar distribution in the m_ BH-m_ MS, which suggests the existence of two sub-classes. We find that the first class is characterised by a companion with a mass m_ MS/m_ BH = k (m_ BH/1)^-1/2, with k=2-10. Most binaries falling in this class have a period shorter than 100 d, whilst the second class involves binaries with m_ BH>10 and m_ MS<5. An even more clear distinction is shown in Figure <ref>, where the MS-to-BH mass ratio is shown against the orbital period and eccentricity. This plot highlights four interesting peculiarities of in-cluster BH–MS binaries: * the vast majority of binaries with e<0.1 are primordial. Most of them are characterised by m_ MS/m_ BH > 0.3, heavy MS stars m_ MS > 1 M_⊙, and periods below P < 100 d; * primordial binaries with e > 0.1 have larger periods (P = 10^2-10^6 d), and similar mass ratio and MS mass as circular primordial binaries; * the vast majority of dynamically formed binaries have e>0.1 and periods in the range (P=10^2-10^9 d). They are generally characterised by a mass ratio m_ MS/m_ BH < 0.3, MS stars with a mass m_ MS < 10 and a BH with mass m_ BH = (10-100); * only a handful dynamically formed binaries have e < 0.1, and are all characterised by a period P=1-10 d. As shown in Figure <ref>, we find that the longer is the orbital period the larger the binary eccentricity, and almost all binaries with eccentricity e>0.1 have a period P>100 d, with a handful exceptions. Most binaries with a period shorter than P<100 d, instead, are primordial and involve a MS star heavier than m_ MS > 1. The difference between primordial and dynamical BH–MS binaries is further highlighted in Figure <ref>, which shows the component masses of these two classes of binaries. From the plot, it is apparent that dynamically assembled binaries dominate the region of the plane with m_ BH > 10 and m_ MS < 10. The observed BH–MS binaries have orbital properties quite different from our ejected binaries, especially if we consider the observed period and eccentricity. However, only the quiescent BH candidates in NGC3201 are still associated with a star cluster, whilst the origin of the other binaries is unknown. Two of the six observed binaries <cit.> have component masses compatible with our primordial binaries, one of them <cit.> falls in a range where only dynamically assembled binaries are present, and the three sources observed in the Galactic globular cluster NGC3201 have component masses compatible with both in-cluster and ejected binaries. In our models, the vast majority of ejected binaries have a primordial origin and their small period (P < 0.01 d) owes to mass transfer episodes. The few ejected binaries formed dynamically are characterised by a period P<1 d, still much shorter than observed values. 
Wider, and more numerous, ejected binaries could form in substantially looser or lighter star clusters. On the one hand, decreasing the cluster mass or density would enlarge the hard-binary separation and possibly increase the semi-major axis of ejected binaries <cit.>. On the other hand, a smaller cluster mass would correspond to a lower escape velocity, thus making it more likely for binaries to escape the parent cluster. In principle, MS–MS binaries ejected in the earliest phase of the cluster life could further contribute to the population of BH–MS binaries, but these binaries are removed from our simulations before they can further evolve. Nonetheless, we find that only two ejected MS–MS binaries have at least one component with mass above the threshold for BH formation, i.e. ∼ 18 M_⊙, thus ensuring that ejected MS–MS binaries do not contribute to the population of ejected BH–MS binaries. Among all observed data, the binaries observed in NGC3201 are probably the ones best suited for a comparison with our models, given the metallicity and mass of NGC3201. From the central and bottom panels of Figure <ref>, it is apparent that our in-cluster binaries have periods, eccentricities, and BH masses compatible with those observed in NGC3201. The fact that our models do not match the companion masses well may be due to NGC3201's age. In fact, this cluster is relatively old <cit.>, thus its population of binaries has likely been heavily processed over time, and most of its stellar population with super-solar mass has already left the MS. Figure <ref> favours this interpretation. Note that the masses of both BHs and MS stars in dynamically formed BH–MS binaries tend to be smaller than in primordial binaries. As the BH-burning process proceeds, the average BH mass will keep decreasing, while stellar evolution processes will deplete the high-end tail of the MS mass distribution, possibly favouring the formation of BH–MS binaries in the region populated by NGC3201 sources. §.§ Black hole subsystem In all clusters, the segregation time is generally shorter than the stellar evolution timescale of massive stars, therefore massive stars sink to the cluster centre before evolving into BHs. This implies a possible enhancement of the probability for stellar mergers and star–BH collisions. Given the short segregation times, BHs dominate the dynamics in the cluster core already after a time t=20-40 Myr, making up 50-80% of the mass in the cluster core and around 10% of the mass within the half-mass radius, as shown in Figure <ref>. Given the amount of mass in BHs enclosed within the core radius, this length scale can be regarded as the BH subsystem scale radius <cit.>. A similar trend of the BH mass fraction inside R_h has also been found in recent simulations performed with the Monte Carlo code MOCCA <cit.> and the N-body code PeTar <cit.>, which both exploit similar stellar evolution recipes. Both primordial binary evolution and the onset of three-body and multiple gravitational scattering favour the formation of binaries containing at least one BH. Figure <ref> shows the BH formation efficiency, defined as the ratio between the number of BHs inside the cluster core radius and the initial cluster mass, i.e. ϵ_BH,BBH = N_BH,BBH(<R_c)/M_cl,0. We find that, regardless of the initial cluster mass, half-mass radius, or binary fraction, all models are characterised by ϵ_BH ≃ (0.8-2)× 10^-3 M_⊙^-1 for single BHs and ϵ_BBH ≃ (0.8-2)× 10^-4 M_⊙^-1 for binary BHs.
As shown in the right panel of Figure <ref>, the BH formation efficiency slightly increases with the simulation time, although it is unclear whether this quantity saturates already at t_sim/T_rlx ≳ 10. Note that our definition of ϵ_BBH implies that a cluster with initial mass 7× 10^4 M_⊙ (6× 10^5 M_⊙) contains around 7 (60) BHs in a binary system after 10 relaxation times. It might seem trivial that ϵ is independent of the cluster initial conditions, as it suggests that it is just a consequence of the adopted mass function. However, the BH-burning mechanism <cit.>, by which the most massive BHs pair in binaries that first eject the lighter BHs from the cluster and then get themselves ejected via super-elastic binary-single and binary-binary scatterings, could significantly affect the population of BHs. This does not seem to be the case in our models. The small spread observed in the BH binary formation efficiency is related to the initial cluster half-mass radius and binary fraction, whilst the weak increase of ϵ_BBH over time is the result of dynamically formed binaries. Figures <ref>-<ref> show the cluster and BH subsystem density profiles at different times for three cluster models with N = (0.3-1)× 10^6 and R_h = 0.47-1.75 pc. The central density of BH subsystems attains values around ρ_BHS ≃ (10^4-10^5) M_⊙ pc^-3, i.e. values 10–100 times larger than the density of stars, whilst their scale radius is roughly R_BHS ≃ (0.5-1) pc in all models, corresponding to the radius at which the density contributions from the BHs and stars equal each other. Looking at the different panels it is possible to identify the signatures of the whole BH-burning process as described in <cit.>. Firstly, BHs start forming and interacting, driving the formation of the BH subsystem and its subsequent expansion over a timescale t∼ T_rlx. Secondly, dynamical BH interactions cause the steepening of the BH subsystem density and the contraction of its structure, driven by BH ejections over a time 1<t/T_rlx<5. Thirdly, the BH subsystem rebounds and expands again, reaching a seemingly stable structure, at least within the simulated time. Figure <ref> shows the BH mass distribution at different times for a model with N=1.2× 10^5 stars, R_h = 1.75 pc, and f_b=0.2. This plot shows all BHs inside the cluster at a given time, regardless of whether they are components of a binary system or single BHs. For the sake of comparison, we include in the plots the overall BH mass distribution inferred by the LVC <cit.>. The plot highlights an initial phase in which the first BHs start to form, some of them falling in the upper mass gap, but as the evolution proceeds new, lighter BHs form while the most massive BHs are ejected via binary-single and binary-binary scatterings, as expected in the BH-burning scenario. Interestingly, our simulations suggest that the evolution of the cluster can naturally lead to the peak around 10 M_⊙ inferred from GW detections, mostly owing to stellar dynamics, which crucially sculpts the BH population. Nonetheless, any comparison between our data, which show all BHs in the cluster, and LVC observations, which are representative of BH mergers, must be taken with a grain of salt. There are other potential explanations for the 10 M_⊙ peak, such as isolated binary stellar evolution <cit.>, the impact of primordial binary evolution in star clusters <cit.>, or metal-rich star clusters <cit.>. Hopefully, the new data acquired during the forthcoming fourth LVC observing run could help pin down the impact of different processes on the BH mass distribution.
We find that almost all BHs heavier than 30 M_⊙ are ejected from the simulated clusters that reach more than ∼ 15 relaxation times. To further highlight the BH-burning process, we reconstruct the time evolution of the average BH mass, ⟨ m_BH⟩, for all BHs enclosed within the half-mass radius. As shown in Figure <ref>, ⟨ m_BH⟩ follows the same trend regardless of the cluster initial conditions, namely: i) the most massive BHs form first and the average mass settles close to the peak allowed by the adopted stellar evolution model (35-40 M_⊙); ii) more numerous, lighter BHs start to form, causing a rapid decrease of the average mass down to 15-20 M_⊙; iii) dynamical processes kick in and trigger BH ejection, leading to a secular decrease of the average BH mass down to ∼ 8-10 M_⊙ <cit.>. The similar ⟨ m_BH⟩ time evolution observed in different models supports the idea that the BH-burning process is substantially due to dynamics. This is further highlighted in Figure <ref>, which shows the average BH mass as a function of the time normalised to the cluster relaxation time. We find that at a time t > T_rlx the average BH mass is well described by a simple relation: ⟨ m_BH(t) ⟩ ≃ m_BH,rlx - 4 Log(t/T_rlx), where m_BH,rlx = 17.4±0.1 M_⊙. Although our models are not meant to be representative of any observed cluster, and although there are certainly many pathways leading to the same final cluster evolutionary stage, our results suggest that old Galactic globular clusters and massive clusters in the Small Magellanic Cloud could be harbouring a population of relatively light BHs (see Figure <ref>). This would explain why observations of BHs in binary systems are generally characterised by masses m_BH < 20 M_⊙, lighter than the typical value inferred for the population of merging BHs, i.e. m_BH,GW ≃ 30 M_⊙. §.§ Using scaling relations as input for semi-analytic codes It is well known that N-body simulations of star clusters require generous computational resources to enable an exploration of the phase space and to reach an appreciably long simulated time. Our simulations are no exception, as they required in total approximately 2.2 million core hours. To overcome this problem, in the last few years many works have proposed semi-analytic tools specifically devoted to studying the evolution of compact objects, and especially BH binary mergers <cit.>. One ingredient missing in some of these fast and accurate codes is a treatment of the co-evolution of the star cluster and the BH population, which may significantly affect the formation of merging compact objects <cit.>. Our models could provide important fitting formulas to implement the evolution of under-filling cluster models in such semi-analytic tools. The overall evolution of star clusters can be described by simple expressions (Equations <ref> and <ref>). If the cluster initial mass and half-mass radius are known, the aforementioned relations enable an accurate description of its evolution, at least in the case of under-filling star cluster models. Moreover, our models also offer insights into the internal evolution of the cluster, providing, for example, details about the mass distribution of ejected single and double compact objects, and the properties of the central black hole subsystem. These ingredients can be easily implemented in semi-analytic tools to obtain a fast and accurate description of compact object dynamics in clusters too massive to be modelled with direct N-body simulations.
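As a simple illustration of how the fitting relation above could be fed into a semi-analytic treatment, the short Python sketch below evaluates it directly (taking Log as log10; the function name and the cluster values in the example call are purely illustrative and not taken from our models):

import numpy as np

def mean_bh_mass(t, t_rlx, m_rlx=17.4, slope=4.0):
    # Average BH mass within the half-mass radius (in solar masses),
    # following <m_BH(t)> ~ m_BH,rlx - 4 Log(t/T_rlx), valid for t > T_rlx
    t = np.asarray(t, dtype=float)
    return m_rlx - slope * np.log10(t / t_rlx)

# Illustrative call: a cluster with T_rlx = 100 Myr, evaluated at 0.2, 1, and 10 Gyr
print(mean_bh_mass([200.0, 1000.0, 10000.0], t_rlx=100.0))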
A simple implementation of the cluster evolution has already been developed by <cit.> in their B-POP code, showing that the inclusion of cluster mass loss and expansion causes a critical decrease of the probability of high-generation mergers in dense and massive star clusters <cit.>. § SUMMARY AND CONCLUSIONS In this work, we have presented the first results from our star cluster simulations: a suite of 19 direct N-body simulations, performed with the code, modelling the evolution of star clusters with up to 1 million stars and up to 33% of stars initially in a binary, over a timescale of ∼ 0.5-2 Gyr. These simulations contain up-to-date stellar evolution models, and for the first time a series of recipes to treat relativistic binaries in terms of merger remnant mass, spin, and post-merger recoil. Our models represent clusters initially under-filling their Roche lobe, and therefore their evolution can be considered quasi-isolated. The models considerably expand the portion of parameter space covered with full N-body simulations, opening the possibility to compare with large-N Monte Carlo models. Clearly, there is a vast number of parameters whose impact on the simulation results remains unclear. For example, adopting a sufficiently large value of the metallicity would make it impossible to form IMBHs from stellar collapse. However, we expect that our main conclusions about the properties of the BH population should not be severely affected by cluster metallicity, as they appear to be driven mostly by dynamics. We find that the amount of primordial binaries seems to have little effect on the overall evolution of the cluster and on the evolution of the BH population; however, the adopted initial orbital properties could become important when comparing our data with observations, like in the case of BH–MS binaries. For example, a different assumption on the initial mass-ratio distribution could lead to primordial binaries with final BH–MS component masses more similar to the observed ones. However, discrepancies between observations and models could arise from a combination of different assumptions, making it hard to pinpoint the main source of uncertainty. Finally, our simulations model initially underfilling clusters, meaning that the impact of the Galactic field is almost negligible compared to clusters' internal dynamics. This choice enabled us to have a clean view of the impact of stellar interactions on the evolution of the whole cluster and its BH population, and incidentally led to star cluster models that resemble observed clusters in terms of mass and radius. Future simulations adopting filling or overfilling clusters may help clarify whether the evolution of BH subsystems is intrinsically linked to the overall evolution of the host cluster, for example in terms of mass loss and expansion. The main outcomes of our models can be summarised as follows. * the mass loss and expansion of clusters are mostly determined by internal dynamics and can be described by simple analytical expressions, with parameters that weakly depend on the initial conditions. The binary fraction varies mildly over the simulated time, within 10-15% of its initial value. Nonetheless, stellar evolution and dynamics cause a progressive drop of the fraction of stars in binary systems for primary masses m_1>2 M_⊙ [Figures <ref>-<ref>]; * over a Gyr timescale, clusters contain around 200–700 binaries with at least one WD, whilst the number of binaries with a NS or a BH generally remains below 1–10 and 5–40, respectively.
In general, binaries with at least one compact object are more numerous in clusters with a larger initial binary fraction, suggesting that most of these binaries have a primordial origin. Moreover, the denser the cluster, the smaller the number of binaries, owing to energetic dynamical interactions that disrupt binaries more efficiently [Figure <ref>]; * ejected binaries with one (SCOB) or two (DCOB) compact objects have different properties. DCOBs exhibit masses following a nearly flat distribution in the range 2-20 M_⊙ and a peak at m_BH = 45 M_⊙, a peculiar mass-ratio distribution that peaks around q≳ 0.6, and a flat eccentricity distribution in the range e=0.5-1. SCOBs, most of which formed from primordial binaries, typically involve low-mass BHs (m_BH = 3-10 M_⊙) and fairly massive MS stars (m_ST = 1-10 M_⊙) [Figure <ref>]; * we find a substantial population of BH–MS binaries in our models. Most BH–MS binaries forming inside the cluster have typical BH masses m_BH>10 M_⊙, a companion star with mass m_MS = 0.7-100 M_⊙, orbital periods >10 days, and span the entire eccentricity range. Ejected BH–MS binaries, instead, feature significantly smaller BH masses m_BH < 10 M_⊙, shorter periods (<10 days), and mostly originate from primordial binaries. We find that the properties of the modelled binaries are compatible with some features of observed BH–MS binaries, especially those observed in the globular cluster NGC3201 [Figures <ref>-<ref>]; * in all models, BHs form a long-lived subsystem in the cluster centre already after 0.5 relaxation times, with a typical density 10-100 times higher than that of stars. The cluster core radius represents a good proxy for the BH subsystem size, as BHs make up 50-80% of the mass enclosed within this radius. We find that the ratio between the number of BHs inside the core radius and the bound cluster mass, which we refer to as formation efficiency, attains values of ϵ_BH,BBH ≃ 10^-3 (10^-4) M_⊙^-1 for single and binary BHs, respectively. This quantity is only mildly dependent on the initial conditions, suggesting that dynamical processes have a relatively minor effect on the overall BH population over the simulation time [Figures <ref> - <ref>]; * dynamics in the BH subsystem critically affects the BH mass spectrum, owing to the BH-burning process. The peak of the mass distribution generally shifts from initial values m_BH,pk = 25 M_⊙ down to m_BH,pk = 5-15 M_⊙, and the average mass steadily decreases after one relaxation time, following an identical evolution regardless of cluster properties [Figures <ref>-<ref>]. Our simulations suggest that dynamically old star clusters harbour in their centre a population of BHs whose amount scales linearly with the cluster bound mass. The older the cluster is, the smaller the peak of the BH mass spectrum and the average BH mass. § ACKNOWLEDGEMENTS The authors thank the referee for their insightful feedback, which helped us improve our analysis. The authors warmly thank Agostino Leveque for their help and assistance in using their implementation of the code, and Vincenzo Ripepi for useful discussions and comments. This work benefited from the support of the Volkswagen Foundation Trilateral Partnership through project No. 97778 “Dynamical Mechanisms of Accretion in Galactic Nuclei” and the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Project-ID 138713538 – SFB 881 “The Milky Way System” (in particular subproject A08), and by the COST Action CA16104 “GWverse”.
The authors gratefully acknowledge the Gauss Centre for Supercomputing e.V. for funding this project by providing computing time through the John von Neumann Institute for Computing (NIC) on the GCS Supercomputer JUWELS Booster at Jülich Supercomputing Centre (JSC). MAS acknowledges funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 101025436 (project GRACE-BH, PI: Manuel Arca Sedda). AWHK is a fellow of the International Max Planck Research School for Astronomy and Cosmic Physics at the University of Heidelberg (IMPRS-HD). The work of PB was supported by the Volkswagen Foundation under the special stipend No. 9B870. PB acknowledges the support within the grant No. AP14869395 of the Science Committee of the Ministry of Science and Higher Education of Kazakhstan ("Triune model of Galactic center dynamical evolution on cosmological time scale"). The work of PB was also supported under the special program of the NRF of Ukraine Leading and Young Scientists Research Support - "Astrophysical Relativistic Galactic Objects (ARGO): life cycle of active nucleus", No. 2020.02/0346. RS thanks the Max Planck Institute for Astrophysics (Thorsten Naab) for hospitality during many visits. MG was partially supported by the Polish National Science Center (NCN) through the grant UMO-2021/41/B/ST9/01191. GI, MM, and SR acknowledge financial support from the European Research Council for the ERC Consolidator grant DEMOBLACK, under contract no. 770017. § DATA AVAILABILITY The data from the runs of these simulations and their initial models will be made available upon reasonable request to the corresponding author. The Nbody6++GPU code is publicly available[<https://github.com/nbody6ppgpu/Nbody6PPGPU-beijing>]. The McLuster version used in this work will soon be made available. A similar version is described in <cit.>.
http://arxiv.org/abs/2307.04613v1
20230710145555
Encapsulation Structure and Dynamics in Hypergraphs
[ "Timothy LaRock", "Renaud Lambiotte" ]
cs.SI
[ "cs.SI", "math.DS", "physics.soc-ph" ]
Mathematical Institute, University of Oxford, UK [email protected] Mathematical Institute, University of Oxford, UK Turing Institute, London, UK [email protected] June 2023 Hypergraphs have emerged as a powerful modeling framework to represent systems with multiway interactions, that is, systems where interactions may involve an arbitrary number of agents. Here we explore the properties of real-world hypergraphs, focusing on the encapsulation of their hyperedges, which is the extent to which smaller hyperedges are subsets of larger hyperedges. Building on the concept of line graphs, our measures quantify the relations existing between hyperedges of different sizes and, as a byproduct, the compatibility of the data with a simplicial complex representation – whose encapsulation would be maximal. We then turn to the impact of the observed structural patterns on diffusive dynamics, focusing on a variant of threshold models, called encapsulation dynamics, and demonstrate that non-random patterns can accelerate the spreading in the system. Keywords: Higher-order Networks, Hypergraphs § INTRODUCTION Networks provide a powerful language to model and analyze interconnected systems <cit.>. The building blocks of networks are pairwise edges, and these blocks can then be combined to form walks and paths, making it possible for systems to be globally connected yet sparse. Since the seminal work of Watts and Strogatz 25 years ago <cit.>, a key focus of network science has been to investigate the relationship between the structure of a network and the dynamics taking place on its nodes <cit.>. This program requires the design of metrics to capture significant, non-random structural properties of networks, e.g., the clustering coefficient, the degree distribution or modularity, as well as the specification of dynamical models, both linear and non-linear, for the diffusion between neighbouring nodes. An important observation is that the same structural property may affect different dynamical models in different ways, e.g., a high density of triangles tends to slow down simple diffusion, but to facilitate complex diffusion <cit.>. Finding the right modeling framework for interacting systems is a challenging task. While networks have the advantage of simplicity, it has been recognized that they may also neglect critical aspects of a system and even lead to a misleading representation. Driven by the availability of datasets with richer connectivity information in recent years, different frameworks have emerged to enrich the network representation, leading to different types of higher-order networks <cit.>. One branch of this research has extended pairwise graph-based models to multiway interaction frameworks, most notably as hypergraphs or simplicial complexes, to account for group interactions among arbitrary numbers of nodes <cit.>. Multiway interactions naturally appear in many systems, ranging from social interactions, where people interact in groups rather than in pairs <cit.>, to joint neuronal activity in brains <cit.> and cellular networks <cit.>. Different computational tools have been adapted to multiway systems, for instance for centrality measures <cit.> and community detection <cit.>. Researchers have also investigated how the structure of multiway interactions impacts dynamical processes <cit.>, especially the conditions under which dynamics on hypergraphs and simplicial complexes differ from those on networks <cit.>.
The objectives of this paper are twofold: to propose metrics that characterise the non-random patterns of encapsulation in multiway systems, and to explore dynamical models that may be affected positively or negatively by this type of hypergraph structure. These objectives are motivated by a well-known conceptual difference between the two main representations for multiway systems, hypergraphs and simplicial complexes. By definition, a simplicial complex on k nodes includes all of the subfaces of the complex. In contrast, a hyperedge on k nodes does not imply the existence of any of its subsets as hyperedges in the same hypergraph. We refer to this difference as the simplex assumption. For example, using a simplicial complex to represent the relationship between 3 nodes {a,b,c} assumes that the subfaces {a,b}, {a,c}, {b,c} all exist, along with the individual nodes. This is a strong assumption that is unlikely to hold, even approximately, in real data. A classic example is co-authorship, where a jointly authored paper between three co-authors does not imply that each pair of co-authors have also authored separate papers together, nor that each co-author has published a single-author paper. Recent work has investigated the relationship between these two representations <cit.>, and shown that the choice of higher-order representation does affect the outcome of dynamical processes <cit.>. Simplicial complexes and hypergraphs can be seen as poles on a spectrum of multiway interaction structure, and it is likely that real data falls somewhere in-between. In this work, we build on previous investigations of this spectrum of overlapping higher-order structures, as well as random models for hypergraphs and simplicial complexes <cit.>. Our approach builds on the notion of the line graph, which has been used in different contexts in network science, where nodes are the edges of the original graph and there is a link between two nodes if their corresponding edges have a node in common <cit.>. The interactions between hyperedges of arbitrary sizes make it possible to define a variety of different line graphs for hypergraphs. As each hyperedge can be seen as a set of nodes, this problem is equivalent to that of comparing two sets. There exist multiple ways to compare sets, which leads to multiple ways to build a line graph for a hypergraph <cit.>. We will focus in particular on what we refer to as an encapsulation graph, where two hyperedges are connected (by a directed edge from larger to smaller) if one is a subset of the other. We then analyze the properties of the resulting directed acyclic graphs built from real-world hypergraphs and from a synthetic hypergraph model called the Random Nested Hypergraph Model (RNHM) <cit.>, which allows for some control over the extent of nested structure through random rewiring of simplicial complexes. Finally, we define a process for the spread of a complex contagion on a hypergraph through its hyperedges, and show how varying levels of encapsulation structure impact the spread of the contagion in both synthetic and real hypergraphs.
In fact, for all of the empirical datasets we will examine, this static hypergraph is actually the result of aggregating interactions that happen over time. We will also make our hypergraphs simple, meaning that no edges are repeated, i.e., hyperedges are contained in a set, rather than a multiset. In the future, the techniques we develop here could be extended to study the relationships between hyperedges over time, extending, for example, work on simplicial closure <cit.> or temporal dynamics of group interactions <cit.>. Formally we represent the multiway interactions as a hypergraph H=(V, E) where V={1, 2, ..., n} is the set of n nodes and E={e_1, e_2, ..., e_m} is the set of m hyperedges representing interactions between the nodes in V, with the size of each interaction measured as the number of nodes and represented by ℓ_i = |e_i|. To understand the extent of nestedness in the structure of a hypergraph, we build a line graph where the nodes are hyperedges and where there is a directed link between two hyperedges if one is a subset of the other. These links represent what we call encapsulation relationships between hyperedges. More formally, given two hyperedges e_i and e_j such that ℓ_i > ℓ_j, we say e_j is encapsulated by e_i if e_j ⊂ e_i. The line graph representing encapsulation relationships is a Directed Acyclic Graph (DAG) D of H, where a directed edge from hyperedge e_i to hyperedge e_j means that e_i encapsulates e_j. Since for every connected pair e_i and e_j we know that ℓ_i > ℓ_j, a cycle in this graph would imply that a smaller hyperedge encapsulates a larger hyperedge, which is impossible, thus the graph is always a DAG. We refer to this DAG as the encapsulation DAG of a hypergraph. By construction, a hypergraph corresponding to a simplicial complex would have the maximum possible number of edges in the encapsulation DAG. The center panel of Figure <ref> shows an example of an encapsulation DAG. The number of edges in the encapsulation DAG is the number of encapsulation relationships present in the hypergraph. As we will show, these structures are useful for studying dynamical processes where the spreading occurs at the hyperedge level. The encapsulation DAG is closely related to a Hasse Diagram representing a partial ordering of a set of sets. However, a Hasse Diagram is transitively reduced by construction, meaning that an edge between two nodes is removed if there is an alternative path between the nodes. Hasse Diagrams with weights associated with their nodes have been used to define weighted simplicial complexes of hypergraphs, which were further used to predict the evolution and recurrence of small groups <cit.>. While we will examine transitively reduced encapsulation DAGs in Section <ref>, for consistency we will refer to the line graph with edges representing encapsulation relationships as an encapsulation DAG throughout. The encapsulation DAG is just one way to build a line graph from a hypergraph. Other objects can be defined by considering other relations between hyperedges. An important relation is the intersection between the hyperedges, which defines an overlap graph. Given two hyperedges e_i and e_j, an undirected edge exists between them if |e_i ∩ e_j| > 0, and the weight of the edge is the size of the overlap |e_i ∩ e_j| (or, alternatively, normalized as |e_i ∩ e_j| / min(ℓ_i, ℓ_j)).
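To make these two relations concrete, here is a minimal Python sketch (purely illustrative, not the code used in our experiments) of the pairwise checks underlying the encapsulation and overlap graphs, with hyperedges represented as sets:

def encapsulates(e_i, e_j):
    # True if e_i encapsulates e_j, i.e. e_j is a strict subset of e_i
    return len(e_j) < len(e_i) and e_j < e_i

def overlap_weight(e_i, e_j, normalized=False):
    # Size of the intersection; optionally normalized by the smaller hyperedge size
    w = len(e_i & e_j)
    return w / min(len(e_i), len(e_j)) if normalized else w

e1, e2 = frozenset("abce"), frozenset("be")
print(encapsulates(e1, e2), overlap_weight(e1, e2, normalized=True))  # True 1.0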
If we remove the edges between hyperedges of the same size and impose directionality on the remaining undirected edges, for example by directing edges from larger hyperedges to smaller ones, we obtain a DAG that we call an overlap DAG. The right graph in Figure <ref> shows an example of the intersection relation. We note that overlap graphs are also related to clique-graph representations of pairwise networks <cit.>. Let us make a short digression about dynamics here, a topic that we will cover in more detail in Section <ref>. The encapsulation and intersection relations capture different ways in which hyperedges may be related to each other, but they also have different implications for dynamical processes on the hypergraph. The intersection graph is compatible with dynamics centered on the nodes of the hypergraph. One can think here, for instance, of a threshold model where all of the nodes in a hyperedge become activated if a certain number (or fraction) of its nodes are already activated. The intersection graph then provides us with information on how the activation of one edge may spread into others. Take the hyperedge {e,f} in Figure <ref> for instance. In that case, activating the nodes in {e,f} may result, depending on the details of the dynamical model, in activation of the hyperedges {a,b,c,e} and {b,e}, and trigger a cascade of activations in the hypergraph. The picture is strikingly different in the encapsulation DAG where {e,f} is disconnected and therefore has no impact on future activations. Indeed, from that perspective, it is not the fact that node e is activated that matters, but instead that hyperedges encapsulated in others are activated. In other words, the encapsulation DAG is more naturally associated with dynamics where the states are defined on the hyperedges, in a way reminiscent of the Hodge Laplacian for diffusive processes <cit.>. A more thorough discussion of the interpretation of this type of dynamics, and its simulation on both synthetic and real-world hypergraphs, will be given in Section <ref>. Computationally, we construct the encapsulation and overlap graph structures using the following algorithms. For both algorithms, we first assign each hyperedge a unique label and construct a mapping between each node and the hyperedges it participates in. We then loop over each hyperedge α∈ E, and for each node u∈α we add edges from α to other hyperedges β∈ E based on the relation we are interested in. In the intersection graph, this means adding edges from hyperedge α to other hyperedges β∈ E with u∈β, with the weight defined above. For the encapsulation DAG, we only add edges to hyperedges β that are encapsulated by α, meaning we add edges where β⊂α. After repeating this loop for each node in α, the out-neighbors of α represent all of the hyperedges in E that have the relevant relationship with α. The complexity of this construction has two terms. We first loop over all hyperedges m=|E| to construct a mapping from hyperedges to labels, and a mapping from nodes to the hyperedges they are members of, which takes O(m·ℓ_max) time, where ℓ_max = max_e ∈ Eℓ_e is the maximum length of a hyperedge. Once the mapping is constructed, we again loop over all m hyperedges to find encapsulation and overlap relationships. The worst case time for a loop is the size of the largest hyperedge ℓ_max times the maximum node degree k_max = max_u ∈ V |{e | u ∈ e; e∈ E}|. This second term dominates the first and so the worst case running time is O(m·ℓ_max· k_max).
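A compact Python version of this construction (an illustrative sketch of the procedure described above, not the exact implementation used for the experiments) first builds the node-to-hyperedge map and then collects, for each hyperedge, the hyperedges it encapsulates:

from collections import defaultdict

def encapsulation_dag(hyperedges):
    # hyperedges: list of sets; returns a dict mapping each hyperedge index
    # to the set of indices of the hyperedges it encapsulates
    edges = [frozenset(e) for e in hyperedges]
    node_to_edges = defaultdict(set)
    for idx, e in enumerate(edges):
        for u in e:
            node_to_edges[u].add(idx)
    dag = defaultdict(set)
    for idx, e in enumerate(edges):
        for u in e:  # candidate sub-hyperedges must share at least one node with e
            for jdx in node_to_edges[u]:
                if len(edges[jdx]) < len(e) and edges[jdx] < e:
                    dag[idx].add(jdx)
    return dag

print(dict(encapsulation_dag([{"a","b","c","e"}, {"b","e"}, {"e","f"}, {"a","b"}])))  # {0: {1, 3}}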
§ ENCAPSULATION IN EMPIRICAL DATA In this section we introduce basic measurements of encapsulation relationships in some empirical hypergraph datasets, all of which were made available online with the publication of <cit.>. We focus in particular on coauthorship <cit.>, social contact <cit.>, and email communication datasets <cit.>. In Table <ref>, we show some statistics of the largest connected components of the hypergraphs. Following <cit.>, we exclude hyperedges of size greater than 25 nodes to keep some amount of consistency across the datasets. As mentioned above, we also ignore multiedges in the datasets and therefore consider the simple hypergraph representation of each. The coauthorship datasets, which include decades of published papers in multiple fields, contain numbers of nodes and edges that are multiple orders of magnitude larger than the face-to-face contact and email datasets. They are also orders of magnitude less dense in terms of the proportion of edges that exist in the projected graph, where an edge exists between two nodes if they occur in the same hyperedge at least once. §.§ Degree in the Encapsulation DAG For each hyperedge, we are interested in the extent to which it encapsulates other hyperedges present in E, or equivalently we are interested in its out-degree in the encapsulation DAG. In the top row of Figure <ref>, we report the total number of hyperedges of each size m that are encapsulated by hyperedges of larger sizes n>m. The total number of hyperedges of each size n is shown as a dotted line. For each m, the number of observed hyperedges encapsulated decreases with n, but so does the number of size-n hyperedges. To account for the distribution of hyperedge sizes, in the bottom row of Figure <ref> we report the same counts but divided by the number of size-n hyperedges, giving us the number of encapsulated size-m hyperedges per size-n hyperedge. We also show the same quantity in a randomization of the hypergraph which we call the “layer randomization”. The name comes from the fact that in this randomization procedure we view the sets of hyperedges of each size k as a layer, similar to the multiplex approach taken in <cit.>. The procedure then works as follows: for each layer of the hypergraph consisting of hyperedges of size k, we gather all of the hyperedges and the set of their constituent nodes, then shuffle the labels of the nodes. We repeat this procedure for every layer independently. The result is a hypergraph where the hyperedge size distribution and the unlabeled node degree distribution within each size layer are preserved, but the labeled node degree distributions within size layers, the node hyperdegree distribution, and, most importantly, the cross-size encapsulation and overlap relationships are randomized. In other words, we randomize the hypergraph across layers, but not inside layers. This is the reason why we opted for this randomization procedure, and not, for example, the configuration model for hypergraphs introduced by <cit.>. Future work could investigate the effect of other randomization procedures such as those discussed in <cit.>. In Figure <ref>, we show the proportion of encapsulation and overlap relationships destroyed by the layer randomization. Only the coauthorship datasets include hyperedges of size 1 (i.e., single-node hyperedges representing papers authored by a single individual). The number of encapsulations of 1-node hyperedges increases with n after accounting for the number of size-n hyperedges across all three datasets.
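For illustration, the layer randomization described above can be sketched in a few lines of Python (a simplified version of the procedure, assuming hyperedges are given as sets of node labels; not the exact code used for the analysis):

import random
from collections import defaultdict

def layer_randomization(hyperedges, seed=None):
    # Shuffle node labels independently within each hyperedge-size layer
    rng = random.Random(seed)
    layers = defaultdict(list)
    for e in hyperedges:
        layers[len(e)].append(e)
    randomized = []
    for size, layer in layers.items():
        nodes = sorted({u for e in layer for u in e})  # nodes appearing in this layer
        shuffled = nodes[:]
        rng.shuffle(shuffled)
        relabel = dict(zip(nodes, shuffled))           # random permutation of labels
        randomized.extend(frozenset(relabel[u] for u in e) for e in layer)
    return randomized

print(layer_randomization([{1, 2}, {2, 3}, {1, 2, 3}, {2, 3, 4}], seed=0))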
This increase indicates that authors who are part of large collaborations also publish single-author papers. However, the relationship is not as strong as would be expected under the simplex assumption. In that case, every node should appear as a 0-simplex and the number of encapsulations would grow exactly as y=n, since all n nodes would be encapsulated for every size-n hyperedge. Instead the encapsulation relationship for single nodes grows sublinearly, indicating that there are many nodes which appear in hyperedges of size larger than 1, but never appear alone. Note also that the layer randomization does not substantially reduce the number of encapsulations of 1-node hyperedges, since the shuffling of the 1-node layer has no effect, and shuffling at each higher layer still results in some encapsulations of 1-node hyperedges necessarily, since the set of nodes in each layer does not change. The relationship is even weaker for larger values of m. In this case, the simplex assumption would lead to the relationship y = C(n,m), the binomial coefficient, since for every size-n hyperedge all possible size-m hyperedges would have to exist. However, for all values, the number of encapsulations per size-n hyperedge stays well below 1, meaning that, on average, a size-n hyperedge encapsulates few smaller hyperedges relative to its maximum capacity. Notably, for all of the coauthorship datasets, encapsulation relationships tend to be destroyed among hyperedges of any size after the layer randomization is applied, as expected. The encapsulation structure of the face-to-face social contact hypergraphs appears to be more sparse than that of the rest of the datasets, partly due to the fact that there are fewer large interactions, with a maximum interaction size of only 5 nodes. However, even with this more sparse structure, there are substantial encapsulation relationships, especially for hyperedges with 2 and 3 nodes. The email communication hypergraphs show a substantially nested structure where large group emails are composed of groups with many smaller interactions in separate email chains, especially pairwise and 3-node interactions. This is consistent with an intuitive understanding of how email communication works within organisations: many small group email chains will naturally occur to facilitate day-to-day operations and side conversations, while large group emails will occur around big meetings, decisions, or announcements that involve larger proportions of the organisational structure. Interestingly, compared to the coauthorship data, the layer randomization keeps substantially more of the encapsulation relationships in the email communications. We hypothesize that this is due to the smaller number of nodes in the email datasets, which constrains the possible randomizations. In Figure <ref>, we show the distribution of encapsulation for each (n,m) pair; that is, one distribution for each point in Figure <ref> up to n=5. We compute for each α∈ E with ℓ_α = n the number of out-neighbors of α in the encapsulation DAG that are of size m. We then normalize this quantity by the maximum number of subsets of size m, which is C(n,m). Thus if a histogram is fully concentrated on 1, there is full encapsulation and the simplex assumption holds. The bottom row of Figure <ref> shows the same histograms computed on the layer randomized version of the hypergraph. As we observed in Figure <ref>, the number of encapsulations decreases for all of the coauthorship datasets when n increases.
The distributions in Figure <ref> show that the most common amount of encapsulation is exactly one subset (leftmost point of each line), and relatively few hyperedges fully encapsulate all of the possible subsets (rightmost point in each line). However, we observe the opposite pattern in the social contact and email datasets, where full encapsulation of 2-node hyperedges by 3-node and 4-node hyperedges is common in the observed data, and these relationships are destroyed by the layer randomization. §.§ Paths Through Encapsulation DAGs In this section we show how analysis of encapsulation DAGs can help understand the structure of encapsulation relationships. An encapsulation DAG encodes interaction structure in at least 3 ways. As shown above, we can use the out-degree of a hyperedge in the DAG to measure the extent to which subsets of that hyperedge also appear as hyperedges. Similarly, the in-degree of a hyperedge in the DAG indicates the extent to which the supersets of a hyperedge exist, e.g., how much a given hyperedge is encapsulated. Finally, and this is the purpose of this section, the length of paths in the DAG indicates the “depth" of encapsulation relationships. Here we analyze the height of rooted paths in the transitively reduced DAG, inspired by the approach taken in <cit.>. A rooted path is one that begins from a root node, which we define as a node in the DAG with zero in-degree and non-zero out-degree. We consider paths starting from root nodes because they indicate the maximum possible path lengths through the DAG. A transitively reduced DAG is one in which all edges representing shorter redundant paths are removed. For example, if we have the edges A-B, B-C, and A-C, in the transitively reduced DAG the edge A-C would be removed, since there would still be a path from A to C without that edge. Analyzing the DAG after removing these “shortcut” edges gives us a sense for the extent to which intermediate sized hyperedges are or are not present. The distribution of path lengths in the transitively reduced DAG indicates the depth of the encapsulation relationships in the hypergraph. If the distribution is skewed towards the maximum length (k-1 edges for a hyperedge on k nodes), this indicates a hierarchy of encapsulations in the sense that multiple intermediate hyperedges of different sizes are all encapsulated by the same larger hyperedge (the root). In contrast, if most path lengths are short, this indicates that encapsulation relationships in the hypergraph are concentrated between only two different sizes at a time, a kind of shallow encapsulation. Note that transitively reduced DAGs corresponding to two hypergraphs with very different encapsulation structures could have similar numbers of edges, but very different path length distributions. As we will discuss below, deeper and more hierarchical encapsulation relationships can have important implications for how a contagion can spread over the hyperedges of a hypergraph. In the top row of Figure <ref>, we show the distribution of heights in each dataset compared to the average over multiple layer randomizations. After randomization, the maximum path length through the transitively reduced DAG drops substantially in every dataset, and the number of paths of length 2 drops by multiple orders of magnitude in all of the coauthorship and contact datasets, but not in the email datasets. 
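In practice, both the transitive reduction and the rooted path heights can be computed with standard DAG routines; the following sketch (illustrative only, assuming the encapsulation DAG is available as a networkx DiGraph with edges pointing from larger to smaller hyperedges) returns the maximum height of each root:

import networkx as nx

def root_heights(dag):
    # Maximum path length (in edges) from each root of the transitively reduced DAG;
    # roots are nodes with in-degree 0 and out-degree > 0
    tr = nx.transitive_reduction(dag)
    height = {v: 0 for v in tr}
    for v in reversed(list(nx.topological_sort(tr))):  # process sinks first
        for u in tr.predecessors(v):
            height[u] = max(height[u], height[v] + 1)
    return {v: h for v, h in height.items()
            if tr.in_degree(v) == 0 and tr.out_degree(v) > 0}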
In the middle and bottom rows of Figure <ref>, we plot for each root hyperedge its degree in the DAG against its maximum height (path length) in the transitively reduced DAG. The middle row shows the relationship without normalization for the observed (left) and layer randomized (right) hypergraph. The DAG degree of a hyperedge and its maximum-length path in the transitively reduced DAG are positively correlated to varying extents across all of the datasets, but in the coauthorship datasets there are many hyperedges with high DAG degree that have relatively low maximum path lengths of only 2 or 3 edges. As mentioned previously, the maximum height is bounded by k-1, where k is the size of the hyperedge, since the maximum path length will pass through exactly one node (hyperedge) of each size 0 < k' < k, of which there are k-1. The bottom row of Figure <ref> again shows the relationship between DAG degree and maximum height, but with both quantities normalized by their maximums. As expected, when a root hyperedge has the maximum degree in the DAG, it also has the maximum path length (the opposite need not hold). The dark colored points in the top right of each normalized scatter plot indicate that only hyperedges with small absolute degrees attain the maximum normalized degree, meaning that they are also small hyperedges. §.§ Random Nested Hypergraph Model In this section we describe the Random Nested Hypergraph Model (RNHM) developed in <cit.>, which we will use as a starting point for analyzing the relationship between nested hypergraph structure and a hyperedge contagion process. The parameters of the model are: the number of nodes N; the maximum hyperedge size s_m; the number of hyperedges of size s_m, denoted H_s_m; and ϵ_s, the probability that a hyperedge of size s<s_m is preserved rather than rewired. Hypergraphs generated by this model are sampled by the following process. First, H_s_m hyperedges of the maximum size s_m are sampled, where the probability of a node being included in a hyperedge is uniform. Second, all of the subsets of those hyperedges (i.e., the powerset of every edge excluding sets with size less than 2) are added to the hypergraph. In some simulations, we also include all of the individual nodes as 1-node hyperedges. Finally, each of the encapsulated hyperedges with size 1<s<s_m is rewired with probability 1-ϵ_s, meaning that when ϵ_s is small, hyperedges of size s are more likely to be rewired. Rewiring a hyperedge involves (i) choosing a pivot node in the edge uniformly at random; (ii) deleting all other nodes from the edge; and (iii) replacing the deleted nodes with nodes chosen uniformly at random from outside of the hyperedges that are supersets of the original edge, ensuring that the new edge does not already exist in the hypergraph. Since this model will be used as a substrate for contagion dynamics in the next section, we further constrain the RNHM by rejecting hypergraphs that are not connected. In Figure <ref> we show DAG representations of random nested hypergraphs, where edges of the encapsulation DAG are drawn in black and edges from the overlap DAG are drawn in green. As ϵ_s decreases, so does the number of encapsulation relationships (DAG edges). When ϵ_s=1, no hyperedges are rewired, so all encapsulation relationships exist. As ϵ_s decreases, rewiring of hyperedges reduces the number of encapsulation relationships until, when ϵ_s=0, almost no encapsulation relationships between 4-node and s-node hyperedges exist.
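A minimal generator following this description might look as follows (an illustrative Python sketch that omits the duplicate-edge check, the optional 1-node hyperedges, and the connectedness rejection step):

import random
from itertools import combinations

def random_nested_hypergraph(n, s_m, h_sm, eps, seed=None):
    # eps[s]: probability that a size-s sub-hyperedge (1 < s < s_m) is kept (not rewired)
    rng = random.Random(seed)
    top = [frozenset(rng.sample(range(n), s_m)) for _ in range(h_sm)]
    edges = set(top)
    for e in top:
        for s in range(2, s_m):
            for sub in map(frozenset, combinations(e, s)):
                if rng.random() < eps.get(s, 1.0):
                    edges.add(sub)                       # keep the nested sub-hyperedge
                else:                                    # rewire around a random pivot node
                    pivot = rng.choice(sorted(sub))
                    supersets = [t for t in top if sub <= t]
                    outside = [u for u in range(n) if all(u not in t for t in supersets)]
                    edges.add(frozenset([pivot] + rng.sample(outside, s - 1)))
    return edges

print(len(random_nested_hypergraph(20, 4, 3, {2: 0.5, 3: 0.5}, seed=1)))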
However, since the s-node hyperedges were constructed based on the set of nodes that appeared in the 4-node hyperedges, some encapsulation relationships may randomly remain after rewiring. § THE ROLE OF ENCAPSULATION STRUCTURE IN DYNAMICS In this section we show that encapsulation plays a role in modulating the relationship between higher-order interactions and dynamical processes. We study a complex contagion process for which encapsulation and overlapping structures are vital to spreading. Our work builds on advances in the study of dynamical processes on higher-order structures, including the relationship between spreading dynamics on hypergraphs compared with simplicial complexes, where encapsulation relationships are implied <cit.>. It is important to emphasise that our analysis focuses on a purely higher-order effect, as the notion of encapsulation has no counterpart in classical networks. We study a hypergraph complex contagion process where in each discrete timestep, every node u∈ V and hyperedge α∈ E in the hypergraph is in a binary state, either inactive or active. We represent these states using two binary vectors, s_u for nodes and x_α for edges, which both take a value of 0 if the corresponding node or edge is inactive, and 1 if active. At each time step, an inactive hyperedge α, x_α=0 is activated if more than a threshold τ of hyperedges which it directly encapsulates, i.e., hyperedges of size |α|-1, are also active. Therefore activation can only spread in hypergraphs with an encapsulation structure that is tightly nested, with many encapsulation relationships between adjacent layers of the DAG. We refer to this class of contagion as encapsulation dynamics and focus on two variants depending on the influence we allow individual nodes to have on the dynamics.[Inspired by the language of topology, we may also call these dynamics subface dynamics, referring to the fact that a subface of a simplicial complex would need to be activated for a larger face to activate.] In the first variant, which we refer to as strict encapsulation dynamics, individual nodes can only have influence in the dynamics if they appear in the hypergraph as a 1-node hyperedge. These 1-node hyperedges only appear in the coauthorship datasets, meaning that in the other datasets, individual nodes have no influence on the spreading process and their being in an active or inactive state has no bearing on the process beyond their participation in an active hyperedge that is encapsulated. In contrast, in the non-strict variant we allow any individual node to influence pairwise interactions in which it participates. This corresponds to an assumption that all individual nodes are also 1-node hyperedges in the hypergraph and makes the state of individual nodes relevant to how the process can evolve. It also allows for exactly one kind of “backwards” activation, since activation of a large hyperedge will activate the individual nodes, while in general we do not allow activation of a large hyperedge to activate any of its subhyperedges in encapsulation dynamics. Instead, all activation flows upward through the encapsulation DAG from smaller to larger hyperedges. For a further discussion of the possible variants of encapsulation dynamics, see <ref>. Intuitively, encapsulation relationships are necessary to the spreading process in encapsulation dynamics, since larger hyperedges can only be activated if they encapsulate smaller hyperedges, which in turn must encapsulate still smaller hyperedges. 
We make an analogy between this process and building a campfire, where the smallest hyperedges correspond to dry leaves and twigs, medium hyperedges correspond to kindling, and the largest hyperedges correspond to the logs. Thus the “goal” of the encapsulation dynamical process we have defined is to catch the logs on fire by first lighting the fuel. The encapsulation dynamics can be seen as a generalisation of threshold models, which have been studied systematically in the context of opinion dynamics on graphs <cit.> and hypergraphs <cit.>. An important difference is that only activated nodes that are all connected by an active hyperedge can activate a larger hyperedge. From an opinion dynamics perspective, for instance, this could be interpreted as follows: a set of nodes that is part of a larger set may change the collective behavior only if nodes in the smaller set form an interacting unit, which allows them to coordinate their action. In the illustration of Figure <ref>, for instance, if we assume nodes a and b are activated in both hypergraphs, then their impact on node c would be identical in the case of threshold models. The encapsulation dynamics distinguishes the two configurations, and the activation of node c via the hyperedge {a,b,c} is only possible when nodes a and b can coordinate their action via the encapsulated hyperedge {a,b}. In the non-strict encapsulation dynamics setting, where individual nodes are assumed to exist and have encapsulation relationships only with 2-node hyperedges, activation of node b would also activate {a,b} and lead to the activation of {a,b,c}. Thus we can view the non-strict variant of the dynamics as falling between the strict dynamics and node-based threshold models, where the existence and structural patterns of 2-node hyperedges are key to determining whether the non-strict dynamics behave more like strict or node-based threshold dynamics.[We also report simulations using more traditional threshold contagion dynamics based on node activations in  <ref>.] We simulate encapsulation dynamics by constructing the encapsulation DAG, but only keeping edges between hyperedges at adjacent layers, i.e., where the difference in size is 1. In our simulations, we first place a given number of seed-activated hyperedges using one of the strategies described below. We then count for each hyperedge how many of its encapsulated hyperedges are seeds and deterministically simulate the dynamics forward. After each iteration, for every hyperedge α with size ℓ_α nodes we update the number of its encapsulated ℓ_α-1 hyperedges that became activated. In practice, it is more efficient to update these counts by maintaining a reverse adjacency list of the encapsulation DAG so that we need only loop over the newly activated hyperedges and update the counts for the inactive hyperedges that they are encapsulated by. We consider 4 different strategies for choosing seed hyperedges: * Uniform: Choose hyperedges uniformly at random. * Size Biased: Choose hyperedges with probability proportional to their size (i.e., choose the largest hyperedges first). * Inverse Size Biased: Choose hyperedges with probability proportional to their inverse size (i.e., choose the smallest hyperedges first). * Smallest First: Explicitly choose the smallest hyperedges first. Practically, arrange the hyperedges in a vector ordered by increasing size, with hyperedges of the same size in random order. Choose seed hyperedges starting from the beginning of this vector. 
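To make the simulation procedure concrete, the following is a minimal sketch of the deterministic update loop (our own simplified Python, not the released implementation; hyperedges are frozensets, and seeds is produced by one of the strategies above, e.g. sorted(edges, key=len)[:num_seeds] with ties pre-shuffled for the smallest-first strategy):

from itertools import combinations

def simulate_encapsulation(hyperedges, seeds, tau=0, steps=25):
    """Deterministic encapsulation dynamics on the adjacent-layer DAG.
    An inactive hyperedge activates once more than tau of the hyperedges
    of size one smaller that it directly encapsulates are active."""
    edges = set(map(frozenset, hyperedges))
    # reverse adjacency list: for each hyperedge, the present hyperedges
    # one size larger that directly encapsulate it
    encapsulated_by = {e: [] for e in edges}
    for e in edges:
        if len(e) < 2:
            continue
        for sub in combinations(e, len(e) - 1):
            sub = frozenset(sub)
            if sub in encapsulated_by:
                encapsulated_by[sub].append(e)
    active = set(seeds) & edges
    counts = {e: 0 for e in edges}
    newly = list(active)
    for _ in range(steps):
        frontier = []
        for a in newly:                  # only loop over newly activated edges
            for sup in encapsulated_by[a]:
                if sup in active:
                    continue
                counts[sup] += 1
                if counts[sup] > tau:
                    active.add(sup)
                    frontier.append(sup)
        if not frontier:
            break
        newly = frontier
    return active

Under these assumptions, the strict and non-strict variants differ only in whether individual nodes are included as 1-node hyperedges in the input before this loop runs.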
We expect that in a hypergraph with deep encapsulation relationships the smallest first seeding strategy will be the most effective for strict encapsulation dynamics, since the small hyperedges must be activated or the dynamics will never reach the entire structure. In contrast, in non-strict encpsulation dynamics it may be the case that activating the largest hyperedges first will activate the most nodes that will in turn activate many pairwise hyperedges, potentially leading to more activation overall. §.§ Simulations on the Random Nested Hypergraph Model In Figure <ref> we compare the encapsulation dynamics on random nested hypergraphs with varying combinations of ϵ_s for RNHM parameters N=20, s_m=4, H_s_m=5. In these simulations, we also include all of the individual nodes in the hypergraph. We show results using both uniform (top row) and smallest first (bottom row) seeding strategy, with number of seeds the same as the number of nodes N. Each point is an average over 50 realizations of the hypergraph and 50 simulations per realization. The smallest first strategy is more effective for all parameters, consistent with the “campfire” intuition of lighting the fuel to burn the logs. In the smallest first simulations, all hyperedges are activated consistently when there is no rewiring of any hyperedges (ϵ_2=ϵ_3=1, red line), as expected. Interestingly, the dynamics are qualitatively different when either the 2- or 3-node hyperedges are rewired, but the other is left alone. More hyperedges are activated when only 3-node hyperedges are rewired (ϵ_2=1, ϵ_3=0, green line) compared to when only 2-node hyperedges are rewired (ϵ_2=0, ϵ_3=1, orange line). However, it is not the case that the most rewiring leads to the slowest activation dynamics. We attribute this to a combination between the stochasticity of the rewiring process and the relatively small number of nodes N, which can lead to situations where rewired hyperededges encapsulate each other randomly (see the encapsulation DAG in black for ϵ_2=0, ϵ_3=0 in Figure <ref>, for example). We also note that the smallest first seeding strategy as used in this setting would make node-based threshold dynamics trivial, since every node is activated in the seeding process. This illustrates the key conceptual difference between node-based and encapsulation-based dynamics: the latter requires explicit higher-order coordination among activated nodes, as well as encapsulation in the hypergraph structure. Figure <ref> shows the average outcome of simulations on RNHMs with an increasing number of seed hyperedges again chosen with either uniform or smallest first strategy (25 realizations, 100 simulations per realization). In both cases there appear to be two distinct trends in the encapsulation dynamics results depending on whether ϵ_2 is zero, meaning all 2-node hyperedges are rewired. Activation spreads to a larger number of hyperedges when ϵ_2 > 0, consistent with the result from Figure <ref>. When 2-node hyperedges are fully rewired, even with 50% of edges being activated as seeds, only about 75% of the total edges are activated by the end of the process in the best case. §.§ Simulations on Empirical Data We also simulated the encapsulation dynamics on the same empirical datasets described in Table <ref> and their randomizations. 
In the top rows of Figures <ref> and <ref>, we show the proportion of non-seed hyperedges activated after 25 steps across all datasets with varying seed strategies and increasing number of initially active seed hyperedges.[Since the dynamics are deterministic once the seed hyperedges are chosen, usually only a small number of simulation steps are needed before the spreading stops. 25 steps is more than necessary for all of these datasets.] In the bottom rows of each figure, we show the difference between the observed and randomized outcomes. In strict encapsulation dynamics (Figure <ref>), where pairwise edges can only be activated if one of their constituent nodes is present as a hyperedge, no further hyperedges are activated on average for small numbers of seeds across the coauthorship and face-to-face contact datasets. In the email datasets, the dynamics already take off with just 10 seed hyperedges and the smallest hyperedges first strategy clearly has an advantage in both the observed and randomized datasets. In fact, across all of the datasets the smallest first strategy is the most effective, and it also tends to be the strategy with largest difference in final activations between the observed and layer randomized hypergraphs. In general, activations on the layer randomization are much lower than in the observed hypergraphs, which is as expcted since the observed data contains many more encapsulation relationships. In the non-strict encapsulation dynamics (Figure <ref>), we again see that more non-seed edges are activated in the observed hypergraph with more encapsulation relationships. In the face-to-face social contact datasets, a single seed is enough to activate the entire observed hypergraph. Similarly, in the email datasets the final number of activations is consistent regardless of the number of seeds, until falling off at high numbers of seeds, likely due to the smaller proportion of available hyperedges to activate. However, in the layer-randomized hypergraphs, in the face-to-face contact and email datasets there appears to be a limit on the amount of non-seed hyperedges that can become activated. We also note that in non-strict encapsulation dynamics, there is not a clearly best hyperedge seed placement strategy across the datasets. It is intuitive that the size biased strategies work well in non-strict dynamics with small numbers of seeds, since this strategy will by definition activate the most nodes, and these nodes can in turn activate pairwise edges they participate in, essentially translating into more seeds. § CONCLUSION Higher-order networks have emerged, in recent years, as a promising approach to represent and model interacting systems. Among this broad family of models, approaches based on hypergraphs help to characterise the global structure and collective dynamics when interactions involve more than two agents. In this work, we have proposed novel ways to quantify the relations between hyperedges in real-world datasets. Based on the notions of overlap and encapsulation, we propose two alternative ways to represent a hypergraph as a graph where the nodes are the original hyperedges. In this line graph representation, edges may be directed to encode the encapsulation of a hyperedge in another, or undirected to encode the number of nodes in common between them. 
We have focused in detail on the structure induced by encapsulation, proposing a randomization strategy to erase encapsulation relations between hyperedges, while preserving other structural patterns, and quantifying how different real-world data are from what would be expected in a simplicial complex representation. As a second step, we turned to dynamics. In contrast with works focusing on the difference exhibited by a dynamical process on a hypergraph and on its corresponding projection on a graph, we explore the impact of encapsulation on spreading and compare the dynamics taking place on real-world hypergraphs and their randomization. To do so, we focus on a dynamical process specifically designed for hypergraphs – the encapsulation dynamics is trivial on graphs – and demonstrate that encapsulation facilitates spreading in situations when smaller hyperedges fuel the activation of larger hyperedges. Our work contributes to the recent efforts to understand how hypergraph structure impacts dynamics. Future research directions include a more thorough focus on the importance of overlap, but also testing our metrics to study other dynamical models, e.g. for synchronisation. There remain many potential avenues for future work in this area. We have focused on a simple, size-layer-based approach to randomizing hypergraphs, but there exist in the literature other ways of randomizing hyperedges, including the configuration model approach introduced in <cit.> and the multiplex approach in <cit.>. In contrast to our randomization, which preserves the size distribution of hyperedges and the unlabeled within-layer node degree distributions, both of these models preserve more general notions of degree, including the overall hyperdegree and the detailed within-layer degree of each node. Another potential research direction concerns the encapsulation dynamics, which was kept as simple as possible for the purpose of this work, but could be defined in different variants, as we allude to in <ref>, in the same way that different types of threshold dynamical models have been explored in the literature. Finally, as we noted, the intersection and encapsulation relations are just two out of the several ways in which the relation between hyperedges can be measured. A combined analysis of the multiple line graphs that can be associated to the same hypergraph is also a promising research direction. In this work, we ignored the temporal aspect of hypergraphs; however, in the future the ideas introduced here could be extended to understand encapsulation patterns in temporal or dynamic hypergraphs, following work such as <cit.>. Our work could also be integrated with existing literature on higher-order motifs in hypergraphs <cit.>. Further research could also be done on analyzing the DAG structures we investigated here using recent work on the cyclic analysis of DAGs <cit.>. § AVAILABILITY OF CODE AND DATA Code implementing the measurements and simulations shown in this paper will be made available at <https://github.com/tlarock/encapsulation-dynamics/> <cit.>. All of the empirical data was made available with the publication of <cit.> and can be found online at <https://www.cs.cornell.edu/~arb/data/>. § ACKNOWLEDGMENT The authors acknowledge support from the EPSRC Grant EP/V03474X/1. TL acknowledges the use of open source code made available by the developers of many projects including NumPy <cit.>, SciPy <cit.>, NetworkX <cit.>, MatPlotLib <cit.>, and compleX Group Interactions (XGI) <cit.>.
§ DATA Table <ref> shows the same statistics as Table <ref>, but for the whole hypergraph, rather than just the largest connected component. § DISCUSSION OF ALTERNATIVE DYNAMICS Due to the multidimensionality inherent to hypergraphs, there are numerous valid choices for specifying a spreading process of the type we study here, each of which have their own conceptual and practical advantages and pitfalls. In this Appendix we discuss some of the potential alternatives that could be investigated in the future. We focus specifically on the specification of spreading over hyperedges - for a brief discussion of node-based threshold models on hypergraphs, see <ref>. The first and most important choice in specifying the dynamics is deciding which hyperedges can influence one another. In the main text, we presented a model where only hyperedges at adjacent levels in the encapsulation DAG can influence each other, e.g. one in which only hyperedges of size k-1 can influence a hyperedge of size k. These are in some sense the most directly applicable to the “ideal” encapsulation DAG, since the dynamics directly spread over the DAG structure. However, we are also interested in how our spreading process unfolds on empirical hypergraphs, and we cannot know in advance whether the DAG connectivity will be suitable for spreading. With this limitation in mind, we can also specify a version of encapsulation dynamics where we relax the condition from requiring immediately adjacent hyperedges to empirically adjacent hyperedges, meaning that a hyperedge α can be influenced by hyperedges it encapsulates that are of the maximum size k < |α| existing in the hypergraph. For example, if a hyperedge on 4 nodes does not encapsulate any hyperedges on 3 nodes, but does encapsulate a hyperedge on 2 nodes, we allow this smaller hyperedge to influence the larger. The encapsulation dynamics presented in the main text are the most true to the spirit of the encapsulation relation, since they require that the encapsulation DAG has a specific structure. The empirical encapsulation relaxation is more flexible and compatible with the variety of structures we expect to see in empirical data, but the cost of this flexibility is that in some cases very small hyperedges can “punch above their weight” by activating much larger hyperedges just by virtue of being the only observed encapsulated edge. We can address this issue in a few ways. In the first place, we could set the threshold τ to be at least the number of individual nodes in the hyperedge. With this threshold, it would only be possible for single nodes to activate a larger hyperedge if all of them were activated. However, this “global” threshold could have the effect of making it impossible to activate some hyperedges, for example a hyperedge with only one encapsulation, but where that encapsulation is of size k-1, which would also be counter-intuitive. Instead, size-specific threshold models could be given, such that a different number of different sized hyperedges could be necessary to activate a hyperedge. Finally, there is the question of whether activation should go in only one direction, from smaller hyperedges up to larger hyperedges, or in both directions. In this work we have only allowed activation to flow from smaller to larger hyperedges, but it would be equally reasonable to assume that once a larger hyperedge has been activated, all or some of its subsets also become active. We leave investigation of this style of model for future work. 
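As a small illustration of the empirical-adjacency relaxation discussed earlier in this Appendix (a sketch with our own, hypothetical names; edges_by_size maps each size to the set of hyperedges of that size), the sub-hyperedges allowed to influence a given hyperedge can be found by scanning downward from size one below the hyperedge and stopping at the first size for which an encapsulated hyperedge actually exists:

from itertools import combinations

def empirical_influencers(edge, edges_by_size):
    """Encapsulated sub-hyperedges of `edge` of the largest size that is
    actually present in the hypergraph (empirical-adjacency relaxation)."""
    for size in range(len(edge) - 1, 0, -1):
        present = [frozenset(c) for c in combinations(edge, size)
                   if frozenset(c) in edges_by_size.get(size, set())]
        if present:
            return present
    return []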
§ THRESHOLD CONTAGION MODEL In this Appendix, we show some results on a traditional node-based threshold contagion model on a hypergraph to contrast with the encapsulation dynamics we introduced in the main text. Just as in encapsulation dynamics, in our threshold model every node u∈ V and hyperedge α∈ E in the hypergraph is in a binary state, either inactive or active, in each discrete timestep. At each step, an inactive hyperedge α, x_α=0 is activated if the number of already-activated nodes within the hyperedge is larger than a threshold. When a hyperedge is activated, all of its member nodes u ∈α are also activated. We define the threshold based on the size of the hyperedge, specifically |α|-τ. An inactive hyperedge α will be activated if ∑_u ∈α s_u ≥ |α| - τ, that is, if the number of activated nodes is at least the size of the hyperedge minus the threshold. These dynamics could still be sensitive to encapsulation structure in a hypergraph; however, the overlap structure of the hypergraph can play an equally important role, since there is no requirement that smaller hyperedges are activated first to activate enough nodes to finally activate larger hyperedges. We run simulations on empirical datasets using two threshold values, τ=0 and τ=1, and present the results in Figure <ref>. When τ=1 (top plot), meaning that an inactive hyperedge α becomes active when the number of inactive nodes remaining in α is at most 1, a single seed activates the entire hypergraph for both the face-to-face contact and email datasets. In the coauthorship datasets, full activation is never achieved in either observed or randomized datasets. When τ=0, meaning all nodes must be activated for a hyperedge to become active, a sort of unanimity condition, the outcomes are dependent on the dataset. Starting with the email-Eu dataset, we see that as the number of seed hyperedges increases, choosing the largest hyperedges first is the most effective strategy on the observed data until the number of seeds increases past 10^3, where all of the methods converge. In the email-Enron dataset there is a similar pattern, but the difference between the outcome on the observed hypergraph and the random hypergraph is smaller across the simulations. The two contact datasets show similar patterns across all of the seeding strategies and in both observed and randomized hypergraphs, with full activation being achieved for the largest numbers of seeds. Finally, in the coauthorship datasets almost no activation occurs until more than 10^4 hyperedges are activated as seeds, and choosing seeds proportional to their size is the best strategy.
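For completeness, one step of the node-based threshold rule defined at the start of this Appendix can be sketched as follows (again our own simplified code rather than the released implementation):

def threshold_step(edges, node_active, edge_active, tau):
    """One synchronous update of the node-based threshold model: an inactive
    hyperedge activates when at least |edge| - tau of its nodes are active,
    and an activated hyperedge activates all of its member nodes."""
    newly = [e for e in edges
             if not edge_active[e]
             and sum(node_active[u] for u in e) >= len(e) - tau]
    for e in newly:
        edge_active[e] = True
        for u in e:
            node_active[u] = True
    return newly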
http://arxiv.org/abs/2307.05042v1
20230711065042
Direct sampling of short paths for contiguous partitioning
[ "Wesley Pegden", "Anish Sevekari" ]
math.PR
[ "math.PR", "cs.DS", "math.CO" ]
Stationary striations in plasma, created by a short microwave pulse in a waveguide filled with a neutral gas Ya.E. Krasik August 12, 2023 ============================================================================================================= In this paper, we provide a family of dynamic programming based algorithms to sample nearly-shortest self avoiding walks between two points of the integer lattice ^2. We show that if the shortest path between two points has length n, then we can sample paths (self-avoiding walks) of length n+O(n^1-δ) in polynomial time. As an example of an application, we will show that the Glauber dynamics Markov chain for partitions of the Aztec Diamonds in ^2 into two contiguous regions with nearly tight perimeter constraints has exponential mixing time, while the algorithm provided in this paper can be used to uniformly (and exactly) sample such partitions efficiently. § INTRODUCTION Analysis of political redistrictings has created a significant impetus for the problem of random sampling of graph partitions into connected pieces—e.g., into districtings. The most common approach to this problem in practice is to use a Markov Chain; e.g., Glauber dynamics, or chains based on cutting spanning trees (e.g., <cit.>). Rigorous understanding of mixing behavior is the exception rather than the rule; for example, <cit.> established rapid mixing of a Markov chain for the special case where both partition classes are unions of horizontal bars, which in each case meet a common side. No rigorous approach is known, for example, which can approximately uniformly sample from contiguous 2-partitions even of lattice graphs like the n× n grid in polynomial time. In this paper we consider a direct approach, where instead of leveraging a Markov chain with unknown mixing time to generate approximate uniform samples, we use a dynamic programming algorithm and rejection sampling to exactly sample from self-avoiding walks in the lattice ^2 (which correspond to partition boundaries) in polynomial expected time. Counting self-avoiding lattice walks is a significant long-standing challenge; the connective constant—the base of the exponent in the asymptotic formula for the number of such walks—is not even known for ^2. But we will be interested in sampling nearly-shortest self avoiding walks, motivated by districting constraints which discourage the use of large district perimeters relative to area. In particular, we will prove: For any C and ε>0 and for any n_1,n_2, and n=n_1+n_2, there is a randomized algorithm which runs w.h.p. in polynomial time, and produces a uniform sample from the set of self-avoiding walks in ^2 from (0,0) to (n_1,n_2) of length at most n+Cn^1-ε. A variant of this algorithm can be used to sample from contiguous 2-partitions of the Aztec diamond with restricted partition-class perimeter, by sampling short paths between nearly-antipodal points on the dual of the Aztec diamond. These paths are in bijection with the contiguous 2-partitions of the Aztec diamond, by mapping a partition to its boundary, which gives us a path. This approach generates samples in polynomial time w.h.p. In contrast, we show that the traditional approach using Markov chains is inefficient: For any C and ε>0, Glauber dynamics has exponential mixing time on contiguous 2-partitions of the Aztec diamond A_k when constrained by perimeter slack Ck^1-ε.
Organization of the Paper: The paper is organized in the following manner: <Ref> describes a dynamic programming algorithm (<Ref>) to sample walks without short cycles and proves its correctness. <Ref> shows that the algorithm actually returns a self-avoiding path from (0,0) to (n_1,n_2) in the unbounded lattice graph ^2 in polynomial time with high probability, enabling the random sampling of paths via rejection sampling. <Ref> provides the same result for wide subgraphs of the lattice; the notion of a wide subgraph is also defined in that section. The last section, <Ref>, is dedicated to proving <Ref>, and showing that the Aztec diamond is a wide subgraph of the lattice. Notation: For the rest of the paper, we will typically use letters A,B, … for denoting paths from O = (0,0) to P = (n_1,n_2). We will use letters Q,R, … to denote points on the grid. Each path A from O to P of length n + 2k has two representations: we can describe A by the sequence of moves a_1, …, a_n+2k where a_i ∈{L,R,U,D} denotes the direction of the next step in the path. On the other hand, we can also denote path A by the sequence of points that it visits, namely, O = A_0, …, A_n+2k = P. Typically, we will also use B to denote a shortest path, and A to denote a larger path. We will further let P_k, W_k, W_k^l denote the number of paths (self-avoiding walks), number of walks, and number of walks without cycles smaller than 2l from O to P of length n + 2k respectively. § DYNAMIC PROGRAMMING ALGORITHM In this section, we will describe the dynamic programming algorithm that counts W_k^l, the number of walks of length n+2k without short cycles, that is, without cycles of length smaller than 2l, from O = (0,0) to P = (n_1,n_2) in a subgraph S of the grid ^2. The algorithm memoizes the number of such walks from every point Q ∈ S to P, along with the previous 2l steps, which are given by a walk w of length 2l ending at Q. Let Φ_l(Q) denote the set of paths ending at Q of length at most 2l. Once we have the number of these walks, we can sample a walk of length n+2k without cycles of length smaller than 2l by starting at O and sampling points in the walk with the correct probability using memoized values obtained by <ref>. Since there are at most 4^2l paths of length 2l, |Φ_l(Q)| ≤∑_i=0^l 16^i ≤ 2 · 16^l for any point Q. Therefore, the size of the DP table in <Ref> is O(|S|· 16^l), and each entry in this table takes O(l) time to compute, since the degree of each vertex in S is at most 4. Therefore, <Ref> takes O(|S|· l · 16^l) = O(|S|) time for constant l. Note that these paths are restricted to the set of points ℛ = {Q : O - (k,k) ≤ Q ≤ P + (k,k)}. Thus, for large S (in particular for S = ^2), we can restrict the algorithm to S' = ℛ∩ S. Further, once the DP table is computed, <Ref> runs in O(n+2k) time. We will prove in <Ref> that for k ≤ Cn^1 - ε and S = ^2, <Ref> actually returns a path with probability 1 - o(1) for l > 1/ε. This implies that <Ref> runs in O(n+2k) time with high probability, completing the proof of <Ref>. We will provide a sufficient condition for subgraphs S ⊆^2 in <Ref> which implies the same probability bound for these specific subgraphs S. § NUMBER OF PATHS IN A GRID This section focuses on getting bounds on the number of paths from O = (0,0) to P = (n_1, n_2) in the grid. Recall that paths are in fact self-avoiding walks. Let n = n_1 + n_2 be the length of a shortest path from O to P. We will provide some upper and lower bounds on the number of paths of length n + 2k from O to P in terms of the number of shortest paths from O to P.
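Before turning to these bounds, we record for concreteness a sketch of the counting and sampling procedure of the previous section together with the rejection step (a simplified top-down version in Python; the original pseudocode in <Ref> and <Ref> tabulates the counts bottom-up, and the names, the window-based state, and the explicit remaining-length index below are our own choices, not the paper's). In practice in_S should also restrict points to the box ℛ described above, which keeps the table within the size bound discussed there.

from functools import lru_cache
import random

MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def make_sampler(P, length, l, in_S=lambda q: True):
    """Uniform sampling of self-avoiding walks of a given length from (0,0)
    to P inside the vertex set described by in_S: count walks with no cycle
    shorter than 2l, sample one proportionally, and reject non-paths."""

    @lru_cache(maxsize=None)
    def count(q, window, r):
        # walks of length r from q to P that never step onto one of the last
        # 2l-1 visited vertices (window lists them, most recent first, incl. q)
        if r == 0:
            return 1 if q == P else 0
        total = 0
        for dx, dy in MOVES:
            v = (q[0] + dx, q[1] + dy)
            if in_S(v) and v not in window:
                total += count(v, ((v,) + window)[:2 * l - 1], r - 1)
        return total

    def sample_walk():
        q, window, walk = (0, 0), ((0, 0),), [(0, 0)]
        for r in range(length, 0, -1):
            opts, weights = [], []
            for dx, dy in MOVES:
                v = (q[0] + dx, q[1] + dy)
                if in_S(v) and v not in window:
                    w = ((v,) + window)[:2 * l - 1]
                    c = count(v, w, r - 1)
                    if c > 0:
                        opts.append((v, w))
                        weights.append(c)
            q, window = random.choices(opts, weights=weights)[0]
            walk.append(q)
        return walk

    def sample_path():
        while True:                      # rejection sampling: keep only paths
            walk = sample_walk()
            if len(set(walk)) == len(walk):
                return walk

    return sample_path

For example, under these assumptions make_sampler(P=(7, 5), length=16, l=3)() would return a uniformly random path of length n + 2k = 16 with k = 2.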
These upper and lower bounds are based on constructing extensions of shortest paths. In general, we will associate a shortest base path to every path from O to P. This association is described in <Ref>. We will also provide procedures for extending shortest paths to larger paths, which respect the base path mapping. Then the lower bound on the number of paths of length n+2k will follow by bounding the number of extensions of each shortest path, and the upper bound will follow from bounding the number of paths of length n+2k that have a specific given path as the associated base path. Let a shortest path B be described by a sequence of moves b_1, …, b_n where n = n_1 + n_2, and each b_i ∈{U,R} describes the direction of the move at the i^th step. Then we have the following procedure to extend the path B to a path A from O to P of length n + 2k. Given a shortest path B represented by b_1, …, b_n from O = (0,0) to P = (n_1,n_2) where n=n_1+n_2, and a set M = {i_1, …, i_k} of indices, we define the extended path A = 𝒜(B,M) obtained by performing the following replacements for all j = 1, …, k: * If b_i_j = R, replace it by DRU. * If b_i_j = U, replace it by LUR. For an edge b_i, we will also refer to the operation above as bumping the edge. Further, we will say that an edge b_i can be bumped if bumping the edge b_i gives us a path. <Ref> illustrates how <Ref> behaves when extending shortest paths. It is not true that for all choices of M the map 𝒜(B,M) is a path. But, we will show that for a large choice of set M, it is a path. For any choice of M such that b_i_j - 1 = b_i_j for all j, the map 𝒜(B,M) gives us a path. Let path B go through the points O = B_0 … B_n = P. Then for any point B_i = (x_i, y_i), if the point X = (x_i - 1, y_i) is also in the path B, then X must be connected to B_i, and hence B_i-1 = X, since otherwise there is a subpath from (x_i,y_i) to (x_i - 1, y_i + 1) (or the other way around) in B, which implies that B is not a shortest path. In particular, if b_i-1 = b_i = U then the points to the left of B_i-1 and B_i, that is, the points (x_i-1 - 1, y_i-1) and (x_i - 1, y_i), are not in B. Therefore, if we replace b_i by LUR, we change the portion of the path from B_i-1 to B_i to look like B_i-1 = (x_i-1, y_i-1) → (x_i-1 - 1, y_i-1) → (x_i-1 - 1, y_i-1 + 1) = (x_i - 1, y_i) → (x_i, y_i) = B_i which is a path since the newly added points were not in B initially. A similar argument works for DRU modifications. The modifications of type U → LUR and R → DRU happen on opposite sides of the path B, and hence don't intersect. Further, all the modifications of type U → LUR don't intersect unless they are adjacent to each other. Therefore, if the set M contains non-adjacent indices, then we can perform all the modifications simultaneously without creating any loops. Further, observe that these modifications do not intersect each other if the set M contains non-adjacent indices, and can be performed simultaneously. We will use the procedure described in <Ref> to generate a family of paths of length n + 2k. To ensure that there are a lot of choices for M, we need to argue that most shortest paths from O to P have n/2 - o(n) many places where the hypothesis of <Ref> is satisfied. This is formalized in the next lemma. For any point P = (n_1, n_2) with n_1 + n_2 = n, a shortest path from O = (0,0) to P drawn uniformly at random has at least n/2 - O(√(n log (1/ε))) places with two consecutive moves in the same direction with probability 1 - ε. Let B be a shortest path from O to P.
B can be denoted as a sequence of exactly n_1 right moves and exactly n_2 up moves. Let us denote this path by b_1, …, b_n where b_i ∈R,U. We can draw a path uniformly at random by picking uniformly at random from a bag with n_1 R symbols and n_2 U symbols without replacement. Let X_i be the indicator random variable for the event that b_i = b_i+1. Now, we first observe that X_i b_1, …, b_i-1 = p(p-1)/r(r-1) + q(q-1)/r(r-1) = p^2 + q^2 - r/r(r-1)≥1/2 - 1/2(r-1) where p is number of U symbols left in the bag, q is number of R symbols left in the bag and r = p + q. Now, we will show that X_i X_1, …, X_i-1≥1/2 - 1/n- i - 1. It suffices to show that X_i = 1 b_1, …, b_i-2, X_i-1≥1/2 - 1/n - i - 1. We will show this by doing two cases: X_i-1 = 0 and X_i-1 = 1. In the first case, X_i-1 = 0, X_i = 1 b_1, …, b_i-2, X_i-1 = 0 = X_i = 1, X_i-1 = 0 b_1, …, b_i-2/X_i-1 = 0 b_1, …, b_i-2 = p q (q-1) + q p (p-1)/r(r-1)(r-2)/pq + qp/r(r-1) = pq(p + q - 2)/2pq(r-2) = 1/2 where p is number of U symbols left, q is the symbol of R symbol left, and r = p + q. In the second case, using the same notation, we have X_i = 1 b_1, …, b_i-2, X_i-1 = 1 = X_i = 1, X_i-1 = 1 b_1, …, b_i-2/X_i-1 = 1 b_1, …, b_i-2 = p(p-1)(p-2) + q(q-1)(q-2)/r(r-1)(r-2)/p(p-1) + q(q-1)/r(r-1) = p(p-1)(p-2) + q(q-1)(q-2)/(p(p-1) + q(q-1))(r-2) = p^3 + q^3 - 3(p^2 + q^2) + 2 (p+q)/(p^2 + q^2 - (p + q))(r-2) = r^3 - 3 pqr - 3(r^2 - 2pq) + 2r/(r^2 - 2pq - r)(r-2) = r^3 - 3r^2 + 2r - 3pq(r-2)/(r^2 - r - 2pq)(r-2) = r(r-1)(r-2) - 3pq(r-2)/(r^2 - r - 2pq)(r-2) = r(r-1) - 3pq/r(r-1) - 2pq Note that this term is maximized when pq is minimized, and is minimized when pq is maximized. Constrained to the fact that p+q = r and p,q ≥ 0, we get 1 ≥r(r-1) - 3pq/r(r-1) - 2pq≥4r^2 - 4r - 3r^2/4r^2 - 4r - 2r^2 = r-4/2(r-2) = 1/2 - 1/r-2 Therefore, in both cases, we have X_i = 1 X_1, …, X_i-1≥1/2 - 1/n-i-1 Now, we couple variables X_i with variables e_i, drawn independently such that e_i = 1 = 1/2 - 1/n-i-1. To begin with, we draw b_1 with correct probabilities. Then for each i, we draw f_i uniformly at random from [0,1]. We set e_i = 1 if f_i ≤e_i = 1 and we set e_i = 0 otherwise. Further, if f_i ≤X_i = 1 X_1,X_2,…,X_i-1, then we set a_i+1 such that X_i = 1, otherwise we set a_i+1 such that X_i = 0; note that the status of X_i+1 uniquely determines the choice of a_i+1. Therefore, e_i = 1 X_i = 1, and hence ∑_i=1^n-1 e_i ≤∑_i=1^n-1 X_i. Notice that e_i are still independent random variables. Therefore, ∑ X_i ≤∑ e_i - t≤∑ e_i ≤∑ e_i - t≤exp-2t^2/n Where the last inequality follows from Hoeffding's inequality. Note that ∑ e_i = ∑1/2 - 1/n-i+1≥n/2 - 2 log n Given any ε > 0, and t = √(- n logε), we get that ∑ X_i ≤n/2 - 2 log n - √(-n logε)≤ε This proves the required result. This allows us to lower bound the number of paths of length n + 2k from O = (0,0) to P = (n_1, n_2) where n_1 + n_2 = n. Recall that P_k denotes the number of these paths. For any k ≤ 0.1 n and 1 > ε≥ 0, we have the lower bound P_k ≥ (1 - ε) P_0 t - 2kk where t = n2 - 2 log n - √(n log (1/ε)). Further, there is n_0 = n_0(ε), such that for all n ≥ n_0, P_k ≥ (1 - ε) P_0 (0.49)^k n^k/k!exp-O k^2/n Consider a path B of length n from O to P. Let B be represented by b_1, …, b_n where b_i ∈R,U. Then using <Ref>, we can extend B to a path A = 𝒜(B,M) of lenght n+2k if we choose M to be a set such that there are no adjacent indices in M and further, for each i ∈ M, b_i-1 = b_i. There are at least t = n/2 - 2 log n - √(n log (1/ε)) such indices, for at least (1 - ε)P_0 many paths. 
For each of these paths, we need to choose a set of k non-adjacent indices. This can be done in at least t(t-3)(t-6)⋯(t-3(k-1))/k! ≥ (t-2k)(t-2k-1)⋯(t-3k+1)/k! = \binom{t-2k}{k} many ways, since after picking the first index, we lose 3 possible choices for the rest of the indices. Further, observe that any longer path A that is obtained in this way corresponds to exactly one shortest path B. We can find this path B by looking at patterns LUR and DRU and replacing them by U and R respectively. If M is chosen satisfying the conditions of <Ref>, then it is clear that every L in the extended path A is followed by UR and every D in A is followed by RU. Hence, these replacements can be made unambiguously. Since we can do this for all (1-ε)P_0 paths, we get the lower bound P_k ≥ (1-ε)P_0 \binom{t-2k}{k}. Since 2 log n + √(n log(1/ε)) = o(n), there is n = n(ε) such that for all n ≥ n(ε), 2 log n + √(n log (1/ε))≤ 0.01 n, and hence t ≥ 0.49 n. This gives us the lower bound P_k ≥ (1-ε)P_0 \binom{0.49n-2k}{k}. Using <Ref>, we have P_k ≥ (1-ε) P_0 (0.49)^k n^k/k! exp(-(4k^2 - k^2 + k)/(0.49 n) - 2k(2k+k)/(0.49 n)) ≥ (1-ε) P_0 (0.49)^k n^k/k! exp(-25k^2/n), and hence P_k ≥ (1-ε) P_0 (0.49)^k n^k/k! exp(-O(k^2/n)), completing the proof of the lemma. The next task is to extend this result to get similar bounds for extending paths of length n+2k to paths of length n+2k+2l. We will prove the following: For any k, l ≤ 0.1 n and 1 > ε≥ 0, there is n_0 = n_0(ε) such that for all n ≥ n_0(ε), P_k+l ≥ (1 - ε) P_k \binom{t - 8k - 3l}{l}\binom{k+l}{l}^{-1} where t = n/2 - 2 log n - √(n log (1/ε) + 2 k n log n + 30k^2). Further, there is n_1 = n_1(ε), such that for all n ≥ n_1, P_k+l ≥ (1 - ε) P_k (0.49)^l n^l k!/(k+l)! exp(-O(k(k+l)/n)). The outline of the proof of this lemma will be similar to <Ref>. Consider a path A of length n+2k from O to P. We want to show that for a large number of sets M = {i_1, …, i_l}, we can construct the extended path C = 𝒜(A,M). To ensure we can find a large number of candidates for M, we will associate a shortest path to each path A. We define a map ℬ in <Ref> such that ℬ(A) gives us such a shortest path. We further associate the edges of B = ℬ(A) to some of the edges of A; we call these the good edges of A and all other edges of A the bad edges of A. This mapping is defined in <Ref>. We claim that the set of indices where we cannot do modifications in the extension procedure defined in <Ref> corresponds to either a corner of B or a bad edge of A. Then we can bound the number of corners and bad edges to get the bound required. We begin the proof by defining lattice boxes to make notation easier, and then use those to define the map ℬ. Given points P_1, P_2 ∈^2, such that P_1 ≤ P_2, we define the lattice box ℛ(P_1, P_2) with left bottom corner P_1 and right top corner P_2 to be the rectangle with sides parallel to the axes with P_1 and P_2 as diagonally opposite corners. To be precise, ℛ(P_1, P_2) = {x ∈^2 : P_1 ≤ x ≤ P_2}. We further define the boundary of a lattice box (and more generally of any set S ⊆^2) to be the set of vertices v ∈ S such that v has at least one neighbor outside S in the infinite grid graph. We define the map ℬ as follows. Consider a path A given by points O = A_0, …, A_n+2k = P from O = (0,0) to P = (n_1,n_2) with n_1,n_2 ≥ 0 and n = n_1 + n_2. We will build ℬ(A) = B inductively, starting at O = (0,0). We will do this by constructing a sequence of points R_i which will all lie in the intersection A ∩ B. Let R_0 = O. Suppose we have constructed R_0, …, R_i. * Construct a box ℛ_i = ℛ(R_i,P) with R_i as the bottom left corner and P as the top right corner.
* Find the next point R_i+1 on A, after R_i, such that R_i+1∈ℛ_i. * Extend B to R_i+1 using the shortest path along the boundary of ℛ_i if R_i+1≠ P. * If R_i+1 = P, then let A̅ be the part of A between R_i=(R_i(x),R_i(y)) and P. * If A̅ intersects y = n_2 before x = n_1, define R̅ = (R_i(x), n_2). * Otherwise define R̅ = (n_1, R_i(y)). Extend B from R_i to R̅ to P. The map ℬ in <Ref> is well defined. Given R_i ≠ P, we can always find R_i+1 since P ∈ℛ_i and P ∈ A, so A eventually intersects ℛ_i. Therefore, steps (1,2) in <Ref> are well defined. For step (3), observe that P is the only point on the boundary of ℛ_i that has two shortest paths from R_i along the boundary. Therefore, (3) is well defined as long as R_i+1≠ P. For step (4), observe that if ℛ_i is degenerate, then there is a unique path from R_i to P, and this step is well defined. Suppose ℛ_i is non-degenerate, that is, R_i and P differ in both x and y coordinates. In this case, A̅ cannot intersect both the lines y = n_2 and x = n_1 simultaneously, and it must intersect both of them eventually. Hence, step (4) is well defined as well. We now define the good edge mapping. First, we will start by making a few notational definitions. Given a path A from O to P with points O = A_0, …, A_n+2k = P, we can represent it as a sequence of moves, a_1 … a_n+2k, where each move is one of the four directions (U,D,L,R). We say that the i^th point (A_i) on this path is a corner if a_i ≠ a_i+1. We further include O and P as corner points. We define the last corner point to be the corner point Q ≠ P with highest index. We will also refer to O as the starting point and to P as the ending point. Let A be a path of length n + 2k from O = (0,0) to P = (n_1, n_2), where n = n_1 + n_2. Let A be given by points O = A_0, …, A_n+2k = P. Then, we divide the edges of A into two categories. Any edge going in the directions D or L will be referred to as a reverse edge, and any edge going in the directions U and R will be referred to as a forward edge. In the setting described in the previous definition, let B = ℬ(A), where ℬ is defined in <Ref>. Let B be given by O = B_0, …, B_n = P. We define a good edge mapping to be any function ℱ_A: ℤ_[0,n-1]→ℤ_[0,n+2k-1], where ℤ_[0,t] = ℤ∩ [0,t], satisfying * ℱ_A is injective. * For i < j, ℱ_A(i) < ℱ_A(j). * The edges A_ℱ_A(i) A_ℱ_A(i) + 1 and B_i B_i+1 are super-parallel, that is * If edge B_i B_i+1 = (x,y) → (x,y+1), the edge A_ℱ_A(i) A_ℱ_A(i) + 1 = (x̅,y) → (x̅,y+1) for some x̅. * If edge B_i B_i+1 = (x,y) → (x+1,y), the edge A_ℱ_A(i) A_ℱ_A(i) + 1 = (x,y̅) → (x+1,y̅) for some y̅. Given such a mapping ℱ, we will refer to any edge of the form A_ℱ(i) A_ℱ(i) + 1 as a good forward edge, and any edge that is not a good forward edge as a bad forward edge. <Ref> illustrates the definitions above. We show that such a mapping exists in the lemma below. Given a path A of length n + 2k, let B = ℬ(A). Using the notation in <Ref>, there exists a good edge mapping ℱ satisfying the conditions in <Ref>. First, it immediately follows from <ref> that all the corners of path B are contained in the set {O = R_0, R_1, …, R_m = P, R̅}, since the portions of B in between these points are straight lines. Now, we define the mapping ℱ = ℱ_A for parts of B between R_i and R_i+1 for 0 ≤ i ≤ m-2: for each edge B_j B_j+1 between R_i and R_i+1 in B, we define ℱ(j) = k to be the least index such that A_k A_k+1 and B_j B_j+1 are super-parallel, that is, they satisfy condition (3) in <Ref>.
We claim that this is strictly monotonic for each i. Suppose not; then there is an index j such that ℱ(j+1) ≤ℱ(j). If ℱ(j+1) = ℱ(j), then edges B_jB_j+1 and B_j+1B_j+2 are super-parallel, which is a contradiction. Without loss of generality, let the points R_i, B_j, B_j+1, R_i+1 share the same x coordinate, that is, let R_i = (x_0,y_0), B_j = (x_0,y_1), B_j+1 = (x_0,y_1+1) and R_i+1 = (x_0, y_2). Then A_ℱ(j+1) = (x_1, y_1+1) for some x_1. Then the path from R_i = (x_0,y_0) to (x_1, y_1+1) must have an edge of the form (x_2, y_1) → (x_2,y_1+1) since y_0 ≤ y_1. Therefore, there is an index k < ℱ(j+1) such that A_k A_k+1 is super-parallel to the edge B_j B_j+1, which implies ℱ(j) < ℱ(j+1), a contradiction! If the path between R_m-1 and R_m = P is a straight line, we can extend the definition above when i = m-1. Otherwise, the point R̅ is well defined. Let A̅ be the portion of A between R_m-1 and P. Without loss of generality, let A̅ intersect the line y = n_2 before the line x = n_1 at a point Q. Suppose Q = (x_0,n_2); then x_0 < n_1, otherwise the path from R_m-1 to Q will intersect the line x = n_1. Since Q is also outside ℛ(R_m-1,P), it follows that x_0 < x_1 where R_m-1 = (x_1,y_1). Now, for all B_j between R_m-1 and R̅, we define ℱ(j) = k where k is the smallest index such that A_k is between R_m-1 and Q and B_jB_j+1 and A_kA_k+1 are super-parallel, and for all B_j between R̅ and P, we define ℱ(j) = k where k is the smallest index such that A_k is between Q and P and B_jB_j+1 and A_kA_k+1 are super-parallel. This map is well defined and monotonic since A̅ must go from y = y_1 to y = n_2, and then from x = x_0 to x = n_1, and hence edges super-parallel to B_jB_j+1 exist for all B_j between R_m-1 and P. Further, the map is strictly monotonic by an argument earlier in the proof. This gives us the good edge mapping that we want. The next lemma proves that a large number of good edges can be bumped. Consider a path A of length n + 2k. Let B = ℬ(A) be the base path associated with it. Suppose B has c corners. Then there is a set G of indices of at least n - c - 8k good edges in A which can be bumped. Note that A has exactly n good forward edges, k bad forward edges and k reverse edges. Now, we traverse A, and for each good forward edge, we check if we can bump the good forward edge. To be precise, consider a good forward edge S_1S_2. Without loss of generality, we will assume that the edge goes in the U direction, and is given by (x_0, y_0) → (x_0,y_0 + 1). Suppose S_1S_2 is a good forward edge that cannot be bumped. We will associate either * a reverse edge * a bad forward edge * or a corner of B as the reason why bumping at S_1 is blocked. Since S_1S_2 cannot be bumped, either S_3 = (x_0 - 1, y_0) is in A or S_4 = (x_0 - 1, y_0 + 1) is in A. First, consider the case when S_3 is contained in A. Look at the edge e going out of S_3 in A. We have the following cases: * If there is no such edge, then S_3 = P. In this case, we say that P blocks bumping at S_1. * If the edge e is either a reverse edge or a bad forward edge, then we say that this edge blocks bumping at S_1. * If the edge e is going in the U direction and is a good forward edge, then there is a unique edge f ∈ B that is obtained by moving e and S_1S_2 perpendicular to their respective directions. This contradicts the definition of ℱ. * If the edge is going in the R direction and is a good forward edge, S_3 S_1 S_2 are consecutive in A. Let j be such that A_j = S_3, A_j+1 = S_1 and A_j+2 = S_2.
Since these are good forward edges, there is i such that ℱ(i) = j. Since ℱ is strictly monotonic, ℱ(i+1) = j+1. Therefore, B_i+1 is a corner point in B. In this case, we say that the corner point B_i+1 is blocking the bumping at S_1. Now, suppose S_4 is contained in A. Look at the edge e going into S_4 in A. We again have 4 cases: * If there is no such edge, then S_4 = O. In this case, we say that O is blocking bumping at S_1. * If the edge e is either a reverse edge or a bad forward edge, then we say that this edge is blocking the bump at S_1. * If the edge e is going in the U direction and is a good forward edge, then it is exactly the same edge as the one considered in case (3) above. * If the edge e is going in the R direction, then both e and S_1S_2 end at S_2, which cannot happen as A is a path. Each reverse edge or bad forward edge can block at most 4 good forward edges from bumping, two in each direction, one where it is blocking S_3 and one where it is blocking S_4. On the other hand, each corner including O and P can block at most one edge. Therefore, there are at least n - c - 8k good forward edges which can be bumped, completing the proof. The bound can be improved using a slightly more careful argument while deciding the blocking edges. Some casework can potentially lead to a one-to-one association between blocking edges and good edges that cannot be bumped. In order to finish the proof of <Ref>, we need a bound on the number of paths A of length n+2k such that the base path B = ℬ(A) has a large number of corners. We will do this by bounding the number of paths A such that ℬ(A) = B, and then using <Ref> to bound the number of paths B with a large number of corners. We will give a rather trivial bound that suffices. Given a shortest path B and k ≤ 0.1n, the number of paths A of length n + 2k such that ℬ(A) = B is at most 2 · 3^2k\binom{n + 2k}{2k}. First, we express B as a sequence of directions of length n. Now, from n + 2k positions, we choose 2k positions, and fill up the rest with the sequence of directions used in B. For the remaining 2k places, we have at most 3 choices each since we cannot leave in the direction we came from, unless we are picking the starting direction, in which case we might have 4 choices. This gives an upper bound of 3^2k-1· 4 ·\binom{n + 2k - 1}{2k - 1} + 3^2k\binom{n+2k-1}{2k} = 3^2k\binom{n+2k}{2k} + 3^2k-1\binom{n+2k-1}{2k-1}. Since \binom{n+2k-1}{2k-1} = (2k/(n + 2k))\binom{n+2k}{2k}≤ 3 \binom{n+2k}{2k} for k ≤ 0.1n, we get the result. This bound can potentially be improved, for example, by looking at the number of corners of B. Now we are in a position to finish the proof of <Ref>. Recall that by <Ref>, there is n_0 = n_0(ε) such that for all n ≥ n_0, P_k ≥ 1/2 P_0 (0.49)^k n^k/k! exp(-25k^2/n). On the other hand, for any given ε_1, we have that the number of paths A such that the base path B = ℬ(A) has at least n/2 + 2 log n + √(n log (1/ε_1)) corners is upper bounded by 2 ε_1 P_0 3^2k\binom{n+2k}{2k}≤ 2 ε_1 P_0 3^2k n^2k/(2k)! exp((8k^2 - 4k^2 + 2k)/n) ≤ 2 ε_1 P_0 3^2k n^2k/(2k)! exp(5k^2/n) = T. Hence, if we choose ε_1 such that ε_1 ≤ (ε/4)·(0.49)^k (2k)!/(3^2k n^k k!) exp(-30k^2/n), or equivalently, if log (1/ε_1) ≥ log (1/ε) + log 4 + k log n + 4k log 3 - k log k + 30k^2/n, it follows that there are at most ε P_k paths A of length n+2k such that B has at least n/2 + 2 log n + √(n log (1/ε) + 2nk log n + 30k^2) corners, when k ≤ 0.1n and n ≥ 81.
Therefore, in this setting, every path A has at least t - 8k good edges which can be bumped where t = n/2 - 2 log n - √(n log (1/ε) + 2kn log n + 30k^2) Note that every edge that is bumped can prevent at most 3 new edges from being bumped. For example, if we bump and edge that looks like (x_0,y_0) → (x_0,y_0+1) it can stop the edges (x_0,y_0-1) → (x_0,y_0), (x_0,y_0+1) → (x_0,y_0+2) and (x_0-2, y_0+2) → (x_0-1,y_0+2) from bumping, which it initially did not. Therefore, we can choose set M of l edges which can be bumped simultaneously in t(t-4)(t-8)⋯(t-4(l-1))/l!≥(t-3l) ⋯ (t-4l+1)/l! = t-3ll many ways. Further, each path of length n+2k+2l can have k+l bumps, and can potentially be obtained in k+lk many different paths of length l. This gives us the lower bound P_k+l≥ (1 - ε) P_k t - 8k - 3llk+lk^-1 as required. Note that for k ≤n(log n)^2, there exists n(ε) such that for all n ≥ n(ε), t ≥ 0.49n. Using <Ref>, we get the simplified lower bound: P_k+l ≥ (1-ε) P_k (0.49)^l n^l k!/(k+l)!exp-2(8k+3l)l - l^2 + l/0.49n - 2l(8k+3l)/0.49n ≥ (1-ε) P_k (0.49)^l n^l k!/(k+l)!exp-32(kl+l^2)/0.49n ≥ (1-ε) P_k (0.49)^l n^l k!/(k+l)!exp-70(kl+l^2)/n ≥ (1-ε) P_k (0.49)^l n^l k!/(k+l)!exp-O(kl+l^2)/n § NUMBER OF LOW GIRTH WALKS IN THE GRID In this section, we will use the bounds obtained in the section above to compare the number of paths from O = (0,0) to P = (n_1,n_2) to the number of walks from O to P that do not have cycles of length less than 2l. For the sake of notation, let W_k^l denote the number of walks from O to P that do not have cycles of length less than 2l. Then we have the following: Given constants C,δ,α≥ 0, there exists n(C,δ,α), such that for all n ≥ n(C,δ,α), and for all k,l such that k ≤ Cn^1-δ and lδ > 1 + 2α, P_k ≤ W_k^l ≤1 + 16 n^-α P_k. We will show this by induction on k. Note that result holds for 0 ≤ k < l since in this setting, W_k^l = P_k. Suppose by induction hypothesis, W_k̅^l ≤1 + 8n^-α P_k̅ for 0 ≤k̅ < k. Since every walk of length n+2k with no cycles of length smaller than 2l is either or path or can be decomposed into a cycle of length t≥ 2ℓ and a walk of length n+2k-t with no cycles of length smaller than 2l, we get the following bound: W_k^l ≤ P_k + ∑_t=0^k-l W_t^l 16^k-t (n+2t) ≤ P_k ∑_t=0^k-l1 + 8n^-α· 2 · P_t 16^k-t n. Here 16^t is a simple upper bound on the number of cycles of length 16^t through a fixed point. Note that for t ≤ Cn^1-δ, n+2t ≤ 2n. Now, using <Ref> with ε = 0.5, we have P_t 16^k-t n/P_k ≤ 2 ·k!/t!·16^k-tn/n^k-t(0.49)^k-texp70(k-t)(k-t+t)/n ≤ 2 exp(k-t)(log k + log 16 - log n - log(0.49)) + 70(k-t)k/n + log n ≤ 2 exp(k-t)((1-δ) log n + log C - log n + log 40) + 70k(k-t)/n + log n. Let k-t = l+r, and let l be an integer constant such that lδ > 1, then we can upper bound the summation as below: W_k^l/P_k ≤ 1 + ∑_r=0^k-l1 + 8n^-α· 4 ·exp(1 - lδ)log n + C_1 l + - r δlog n + C_1 r + 50k(l+r)/n ≤ 1 + 4 1 + 8n^-αexp[](1 - lδ) log n + C_1 l + 50lCn^-δ∑_r=0^k-lexpr-δlog n + C_1 + 50Cn^-δ ≤ 1 + 4 1 + 8n^-αexp[]- αlog n∑_r=0^∞exp-r C_2 log n, where these equations hold with constants C_1 = log 40C and C_2 = δ2 for n ≥ n_1(C,δ). Simplifying, we get the upper bound: W_k^l/P_k ≤ 1 + 4 1 + 8n^-α n^-α1/1 - n^-C_2 ≤ 1 + 16 n^-α, where the last inequality holds for n ≥ n_2(α), so that 8n^-α, n^-C_2≤ 0.5. Therefore, for n ≥ n(C,δ,α) = max(n_1(C,δ), n_2(α)), we get the result. § SUBGRAPHS OF THE LATTICE In this section, we do the same analysis for number of paths in induced subgraphs of the lattice ^2. 
To ensure that the sampling procedure works efficiently, we will prove the analogues of <Ref> where we restrict ourselves to paths bounded in some set S ⊆^2. First, let us setup some notation: For this section, let S ⊆^2 be an induced subset of lattice. Let O,P be two points in S. Without loss of generality, we will assume that O = (0,0) and P = (n_1,n_2) ≥ O. Let n = n_1 + n_2 denote the length of shortest path from n_1 to n_2 in ^2. Let P_k denote the number of paths (self avoiding walk) from O to P of length n+2k that are contained in S Let W_k^l denote the number of walks from O to P of length n+2k that do not have cycles of length smaller than l and are contained in S. Now, we make a few definitions which are helpful in the analysis Given set S ⊆^2, we define the boundary of S, denoted by ∂ S as the set of points Q ∈ S such that at least on neighbor of Q is outside S. Given an induced subgraph S ⊆^2 and points O,P ∈ S, we say that S is (k,s,β)-wide if at least (1-β) fraction of paths of length n+2k from O to P contained in S intersect the boundary ∂ S of S in at most s points. To give some trivial examples, every set S is (k,s,1) wide for all k,s and on the other hand, every set S is (k,n+2k,β)-wide for all k,β. We are now ready to state and prove variants of <Ref> that hold for bounded subgraphs of the lattice ^2. Given an induced subgraph S ⊆^2 and points O,P in S such that S is (0,s,β)-wide, and numbers k ∈ and ε∈, ε,k > 0, we have the lower bound on number of paths from O to P contained in S: P_k ≥ (1 - ε - β) t - 2s - 2kk where t = n/2 - 2 log n - √(n log (1/ε)). Further, there is n_0 = n_0(ε) such that for all n ≥ n_0, P_k ≥ (1 - ε-β) P_0 (0.49)^k n^k/k!exp-O k(k+s)/n The proof is almost the same as <Ref>, except one major change, we need to ensure that the constructed paths 𝒜(B,M) using <Ref> stays inside set S. We can bump a path B at index i if the point B_i and B_i+1 are not on the boundary ∂ S. Further, there are at least (1 - ε - β)P_0 shortest paths that have at most n/2 + 2 log n + √(n log (1/ε)) corners and at most s points that are on the boundary. For these paths, there are at least n/2 - 2 log n - √(n log (1/ε)) - 2s indices which can be bumped while keeping the path inside set S. Using <Ref>, we get the lower bound: P_k ≥ ( 1 - ε - β ) t - 2s - 2kk for t = n2 - 2 log n - √(n log (1 / ε)). Since t = n2 - o(n) there is n_0 = n_0(ε) such that for all n ≥ n_0, t ≥ 0.49n. This gives us the lower bound, due to computation similar to <Ref>. P_k ≥ (1-ε-β) P_0 (0.49)^k n^k/k!exp-2(2s+2k)k - k^2 + k/0.49 n - 2k(2k+k+2s)/0.49 n ≥ (1-ε) P_0 (0.49)^k n^k/k!exp-25k(k+s)/n P_k ≥ (1-ε) P_0 (0.49)^k n^k/k!exp-Ok(k+s)/n completing the proof of the lemma. Given an induced subgraph S ⊆^2 and points O,P in S such that S is (k,s,β)-wide and (0,s,β)-wide, and numbers k ∈ and ε∈, ε,k > 0, then there is n_0 = n_0(ε) such that we have the lower bound on number of paths from O to P contained in S for n ≥ n_0(ε): P_k+l≥ (1 - ε - β) t - 2s - 8k - 3ll where t = n2 - 2 log n - √(n log (1/ε) + 2 k n log n + 30k^2). Further, if k,s ≤n(log n)^2, there is n_1 = n_1(ε) such that for all n ≥ n_1, P_k+1≥ (1 - ε - β) P_k (0.49)^l n^l k!/(k+l)!exp-Ol(k+s+l)/n The proof of this lemma is similar to <Ref>, and we will only mention the key differenecs. First, observe that if B = ℬ(A) has c corners, then there are at least n - c - 8k indices in A that can be bumped. Among these, there are at most 2s indices where the points A_i or A_i+1 are on boundary. 
Further, choice of ε_1 in the proof of <Ref> changes to satsify log (1/ε_1) ≥log (1/ε) + log 4 + k log n + 4k log 3 - k log k + 30k(k+s)/n Therefore, there are at most ε P_k paths A of length n+2k such that B has at most n2 + 2 log n + √(n log (1/ε) + 2nk log n + 30k(k+s)) corners, there are at most β P_k paths A of length n + 2k that may have more that s points on the boundary ∂ S. This gives us that at least (1 - ε - β)P_k paths of length n+2k can be bumped at t - 2s - 8k positions for t = n2 - 2 log n - √(n log(1/ε) + 2nk log n + 30k(k+s)) For k,s ≤n(log n)^2, t = n2 - o(n), implying that there is n_1 = n_1(ε) such that t ≥ 0.49n. Using <Ref> and computations similar to <Ref>, we get the lower bound: P_k+l ≥ (1-ε-β) P_k (0.49)^l n^l k!/(k+l)!exp-2(8k+3l+2s)l - l^2 + l/0.49n - 2l(8k+3l+2s)/0.49n ≥ (1-ε-β) P_k (0.49)^l n^l k!/(k+l)!exp-70l(k+l+s)/n This gives us the proposed bound, finishing the proof. Next step is to prove that variant of <Ref> holds for induced subgraph S of the lattice provided that the set S is satisfies certain properties. Given constants C,δ,α≥ 0, a subgraph S ⊆^2, and a function s=s(k) such that S is (k,s(k),β)-wide where β≤ 0.25 and s(k) ≤ Cn^(1-δ) for all k ≤ Cn^(1-δ), there exists n_0=n_0(C,δ,α) such that for all n ≥ n_0, k ≤ Cn^(1-δ) and l ≥ 0 such that lδ > 1 + 2 α, P_k ≤ W_k^l ≤ (1 + 32n^-α) P_k The proof is similar to the proof of <Ref>. The recursive bound still holds, that is, W_k^l ≤ P_k + ∑_t=0^k-l W_t^l 16^k-t (n+2t) ≤ P_k ∑_t=0^k-l1 + 8n^-α· 2 · P_t 16^k-t n since we are restricting all the paths and walks to be restricted to set S. Using <Ref> with ε = 0.5, we get P_t 16^k-t n/P_k≤ 4 ·k!/t!·16^k-tn/n^k-t(0.49)^k-texp70(k-t)(k-t+t+s)/n ≤ 4 exp((k-t)((1-δ) log n + log C - log n + log 40) + 70(k+s)(k-t)/n + log n), which follows from computations in <Ref>. The last expression holds for C_1 = log 40C and C_2 = δ2 for n ≥ n_1(C,δ). Following the steps in <Ref> to evaluate the summation, we get the upper bound W_k^l ≤ 1 + 8 1 + 32n^-α n^-α1/1 - n^-C_2≤ 1 + 32n^-α, where the last inequality holds for n ≥ n_2(α) chosen such that 16n^-α, n^-C_2≤ 0.5. Therefore, for n ≥ n(C,δ,α) = max(n_1(C,δ),n_2(α)), we get the result. § THE AZTEC DIAMOND We let denote the planar graph of the integer lattice ^2 and let be its planar dual, with vertices using half-integer coordinates. We define the Aztec Diamond graph A_k to be the subgraph of induced by the set V(A_k)={(x,y)∈^2+( 1 2, 1 2)| |x|+|y|≤ k}, and define A_k' to be the subgraph of induced by the set V(A_k')={(x,y)∈^2| |x|+|y|≤ k}; see Figure <ref>. We define the boundary ∂ A_k' to be those vertices of A_k' (x,y) with |x|+|y|=k. We consider as a toy example the problem of randomly dividing the Aztec diamond into two contiguous pieces S_1,S_2, whose boundaries are both nearly as small as possible. Here we use the edge-boundary of S_i, which is the number of edges between S_i and ∖ S_i. Note that this is the same has the length of the closed walk in enclosing S_i. We collect the following simple observations about these sets and their boundaries: A_k has 8k boundary edges. Every shortest path in A_k' between antipodal points on ∂ A_k' has length 2k. For x≥ 0, the (unique) shortest path between points (x,y_1) and (x,-y_1) of ∂ A_k' has length 2k-2x. In particular, there is no partition of A_k into two contiguous partition classes such that both have boundary size less than 6k. 
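The observations above are easy to verify mechanically for small k. A minimal Python sketch (the choice k = 4, the antipodal pair, and the value x = 1 are arbitrary illustrative choices):

from collections import deque

def aztec_dual_vertices(k):
    """Vertices of A_k, encoded as (i, j) for the half-integer point (i + 1/2, j + 1/2)."""
    return {(i, j) for i in range(-k - 1, k + 1) for j in range(-k - 1, k + 1)
            if abs(2 * i + 1) + abs(2 * j + 1) <= 2 * k}

def aztec_primal_vertices(k):
    """Vertices of A_k': integer points (x, y) with |x| + |y| <= k."""
    return {(x, y) for x in range(-k, k + 1) for y in range(-k, k + 1) if abs(x) + abs(y) <= k}

def neighbors(v):
    x, y = v
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def boundary_edge_count(vertices):
    """Edges with exactly one endpoint inside `vertices`."""
    return sum(1 for v in vertices for w in neighbors(v) if w not in vertices)

def bfs_distance(vertices, src, dst):
    dist, q = {src: 0}, deque([src])
    while q:
        v = q.popleft()
        if v == dst:
            return dist[v]
        for w in neighbors(v):
            if w in vertices and w not in dist:
                dist[w] = dist[v] + 1
                q.append(w)
    return None

k, x = 4, 1
A_k, A_kp = aztec_dual_vertices(k), aztec_primal_vertices(k)
print("boundary edges of A_k:", boundary_edge_count(A_k), " expected 8k =", 8 * k)
print("antipodal distance in A_k':", bfs_distance(A_kp, (k, 0), (-k, 0)), " expected 2k =", 2 * k)
print("dist((x, k-x), (x, -(k-x))):", bfs_distance(A_kp, (x, k - x), (x, -(k - x))),
      " expected 2k - 2x =", 2 * k - 2 * x)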
With this motivation, we define Ω=Ω_C,ε,k to be the partitions of A_k into two contiguous pieces, each with boundary sizes at most 6k+Ck^1-ε, and consider the problem of uniform sampling from Ω. We will show that this problem can be solved in polynomial time with our approach, but also that Glauber dynamics on this state space has exponential mixing time. Observe that we can equivalently view Ω as set of paths in A_k' between points of ∂ A_k', and for any partition ω∈Ω we write _ω for this corresponding path. Writing ω∼ω' for ω,ω'∈Ω whenever (viewed as partitions) ω,ω' agree except on a single vertex of A_k, we define the Glauber dynamics for Ω to be the Markov chain which transitions from ω to a uniformly randomly chosen neighbor ω'. Recall that we define the conductance by Φ=min_π(S)≤1/2Q(S,S̅)/π(S) where Q(S,S̅)=∑_ω∈ S ω'∈S̅π(ω)P(ω,ω')≤π(∂ S), where ∂ S is the set of all ω∈ S for which there exists an ω'∈S̅ for which P(ω,ω')>0. The mixing time t_mix of the Markov chain with transition matrix P is defined as the minimum t such that the total variation distance between vP^t and the stationary distribution π is ≤1/4, for all initial probability vectors v. With these definitions we have t_mix≥1/4Φ (e.g. see <cit.>, Chapter 7) and so to show the mixing time is exponentially large it suffices to show that the conductance Φ is exponentially small. To this end, we define S⊆Ω to be the set of ω for which the endpoints (x_1,y_1) and (x_2,y_2) of _ω satisfy x_1≤ x_2 y_1≤ y_2. Our goal is now to show that |S| is large while |∂ S| is small. For simplicity we consider the case where k is even but the odd case can be analyzed similarly. To bound S from below it will suffice to consider just the partitions whose boundary path in A_k' is a shortest path from the point (- k 2,- k 2) to the point ( k 2, k 2); note that such a path for the case where k=4 is shown in Figure <ref> There are 2kk such paths and so we have lower bound |S|≥2kk=Ω(2^2k/√(k)). To bound |∂ S| from above We will make use of the following count of walks in the lattice: For any point P = (n_1, n_2) such that n_1 + n_2 = n, the number of walks from O = (0,0) to P of length n + 2t is given by n + 2ttn+2tn_1 + t Let W_t denote number of such walks. Note that any such path can be denoted as a sequence of symbols U,D,L,R which denote moves in the corresponding directions. For a direction Z ∈U,D,L,R, let n_Z denote number of symbols signifying the direction that appear in the walk; then the walks from O to P are in bijection with the sequences over {U,D,L,R} of length n+2t for which n_U - n_D = n_1 and n_R - n_L = n_2. Note then that n_L+n_D=t, n_U+n_L=n_1+t, and n_R+n_D=n_2+t. There is a bijection from the set of these sequences s to pairs of subsets (X_s,Y_s)⊆ [n+2t] where |X_s|=t and |Y_s|=n_1+t as follows. Given such a sequence s, we can let X_s be the set of indices with symbols L or D, while Y_s is the set of indices with symbols U or L. The sequence s is recovered from the sets X_s and Y_s by assigning the symbol U to indices in X_s∖ Y_s, the symbol L to indices in X_s∩ Y_s, the symbol D to those in Y_s∖ X_s, and the symbol R to indices in neither X_s nor Y_s. Now the boundary ∂ S of S thus consists of paths which satisfy either x_1=x_2 or y_1=y_2. Observation <ref>, together with the condition that the total length of a closed walk enclosing each partition class is at most 6k+O(k^1-ε), implies that in these cases, we must have |y_i|=O(k^1-ε) in the case where x_1=x_2 or |x_i|=O(k^1-ε) in the case where y_1=y_2. 
In particular, we have without loss of generality that x_1=x_2, and y_2=y_1+2k-O(k^1-ε). In particular, letting ℓ_ω=y_2-y_1, we have that the path _ω has length ℓ_ω + O(ℓ_ω^1-ε). Now by Lemma <ref>, the number of choices for such walks (for fixed x_i,y_i, for which there are only polynomially many choices) is changing ℓ to ℓ_ω ℓ_ω+O(ℓ_ω^1-ε)O(ℓ_ω^1-ε)^2≤ 2^O(ℓ_ω^1-ε) for 0≤ε≤ 1. Together, (<ref>) and (<ref>) imply that Φ= 2^O(ℓ_ω^1-ε)/2^2k/√(k)≲1/4/2^ε k, and so the mixing time t_mix satisfies t_mix≥ 2^ε k, with respect to the fixed parameter ε>0. This gives the following theorem: Glauber Dynamics on contiguous 2-partitions of A_k with boundary of length at most 6k + Ck^(1-ϵ) has exponential mixing time. On the other hand, we claim that we can sample the partitions ω∈Ω efficiently using <Ref>, by applying it to each pair of points on the boundary A_k', to generate the path P_ω. To show this, we will argue that the set A_k' has the correct width property with endpoints of P_ω. Formally, Let ω∈Ω be a partition of A_k. Let P_ω be corresponding path in A_k' with endpoints P_1,P_2. Then A_k' is (ℓ, 16 ℓ + 4Ck^(1-ϵ), 0)-wide with respect to points P_1,P_2 for all ℓ. Let P_i = (x_i,y_i) for i = 1,2. Without loss of generality, let (x_2, y_2) ≥ (0,0). Let Q = (-x_2,-y_2) be the point anti-podal to P_2 in ∂ A_k'. We will break the proof into three cases, based on which quadrant P_1 is in. Suppose P_1 is in third quadrant. Then the distance between P_1 and P_2 is exactly 2k. Therefore, P_2 is at most Ck^(1-ϵ) distance from Q. The lattice box ℛ(P_1,P_2) has at most 2 x_1 + x_2 + 2 y_1 + y_2 points in ∂ A_k'. This is exactly the distance between P_1 and Q. Therefore, a shortest path from P_1 to P_2 can intersect ∂ A_k' at at most 2Ck^(1-ϵ) many points. It follows that a path of length 2k + 2ℓ is contained in ℛ(P_1-(ℓ,ℓ), P_2+(ℓ,ℓ)), which contains at most 16 ℓ + 2Ck^(1-ϵ) points in ∂ A_k', implying that any path of length 2k + 2ℓ can intersect ∂ A_k' in at most 16 ℓ + 4 C k^(1-ϵ). Suppose P_1 is in the second quadrant. Then the distance between P_1 and P_2 is x_2 - x_1 + y_2 - y_1 = x_2 - x_1 + max(y_1, y_2) - min(y_1, y_2) ≥ 2k - 2 min(y_1,y_2) Further, length of the lower boundary of A_k between P_1 and P_2 is at least 4k + y_1 + y_2, and hence boundary of the lower partition is at least 6k + y_2 - y_1, which implies that y_2 - y_1≤ Ck^(1-ϵ) The lattice box ℛ(P_1,P_2) contains at most 2y_2 - y_1 + 4 points on the boundary ∂ A_k'. By similar argument to above, we can conclude that any path of length 2ℓ larger than the shortest path is contained in a slightly bigger lattice box, and can intersect the boundary ∂ A_k' in at most 2y_2 - y_1 + 16ℓ + 4 ≤ 16ℓ + 4Ck^(1-ϵ) points. The case when P_1 is in the fourth quadrant is handled similarly to the case when P_1 is in the second quadrant. This proves that in all cases, the Aztec Diamond is (ℓ, 16ℓ + 4Ck^(1-ϵ), 0)-wide. This lemma implies that for ℓ≤ Ck^(1-ϵ), and s(l) = 20Ck^(1-ϵ), the set A_k' satisfies the hypothesis of <Ref> for all points P_1,P_2 that are endpoints of P_ω for some ω∈Ω. Hence, for each pair of points P_1, P_2 ∈∂ A_k', we can compute W_ℓ^λ(P_1,P_2) for all ℓ≤ Ck^(1-ϵ), where λϵ > 1. This allows us to uniformly sample P_ω, for ω∈Ω, with rejection sampling, using the following algorithm: * plainnat § BOUNDS ON BINOMIAL COEFFICIENTS We first recall some exponential bounds on 1+x. 
We have the standard upper bound:
e^x ≥ 1 + x  for all x ∈ ℝ.
On the other hand, we have the lower bound:
e^{x/(1+x)} ≤ 1 + x ≤ e^x  for all x > -1.
This follows since 1 - t ≤ e^{-t}; taking t = x/(1+x) gives
1 - x/(1+x) ≤ e^{-x/(1+x)},  i.e.  1/(1+x) ≤ e^{-x/(1+x)},
and we get <Ref> from this by taking reciprocals whenever 1/(1+x) > 0. Further, <Ref> implies that
e^{x/2} ≤ 1 + x ≤ e^x  for all 0 ≤ x ≤ 1.
We also recall Stirling's approximation; a non-asymptotic version of Stirling's approximation is given in <cit.> as
√(2π n) (n/e)^n exp(1/(12n+1)) ≤ n! ≤ √(2π n) (n/e)^n exp(1/(12n)).
We can use these exponential bounds on (1+x) to bound the binomial coefficients. In particular, we are interested in bounding the binomial coefficient \binom{n+x}{k} in the case where x, k ≤ n/10. Recall, from the definition of binomial coefficients,
\binom{n+x}{k} = (1/k!) ∏_{i=0}^{k-1} (n + x - i) = (n^k/k!) ∏_{i=0}^{k-1} (1 + (x-i)/n).
Using <Ref> we get the following upper bound:
\binom{n+x}{k} ≤ (n^k/k!) exp(∑_{i=0}^{k-1} (x-i)/n) ≤ (n^k/k!) exp((2kx - k^2 + k)/(2n)).
Using <Ref> we get the following lower bound when n + x - k ≥ max(x, k):
\binom{n+x}{k} ≥ (n^k/k!) exp(∑_{i=0}^{k-1} ((x-i)/n)/(1 + (x-i)/n)) = (n^k/k!) exp(∑_{i=0}^{k-1} (x-i)/(n+x-i))
= (n^k/k!) exp(∑_{i=0}^{k-1} [(x-i)/n + ((x-i)/(n+x-i) - (x-i)/n)]) = (n^k/k!) exp(∑_{i=0}^{k-1} [(x-i)/n - (x-i)^2/(n(n+x-i))])
≥ (n^k/k!) exp((2kx - k^2 + k)/(2n) - 2k(x + k)/n),
where the last inequality follows since (x-i)^2 ≤ 2x^2 + 2i^2 ≤ 2x^2 + 2k^2 ≤ 2(x + k)(n+x-i), assuming that n + x - i ≥ max(x, k). Together, we get the following upper and lower bounds on the binomial coefficients:
(n^k/k!) exp((2kx - k^2 + k)/(2n) - (2kx + 2k^2)/n) ≤ \binom{n+x}{k} ≤ (n^k/k!) exp((2kx - k^2 + k)/(2n)).
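As a quick numerical sanity check of the bounds collected in this appendix, the following Python sketch evaluates Stirling's approximation and the two binomial-coefficient bounds for a few parameter values; the test values of (n, x, k) are arbitrary choices that respect the stated conditions x, k ≤ n/10 and n + x - k ≥ max(x, k).

from math import comb, exp, factorial, pi, sqrt

def stirling_bounds(n):
    base = sqrt(2 * pi * n) * (n / exp(1)) ** n
    return base * exp(1 / (12 * n + 1)), base * exp(1 / (12 * n))

def binom_bounds(n, x, k):
    prefactor = n ** k / factorial(k)
    common = (2 * k * x - k ** 2 + k) / (2 * n)
    lower = prefactor * exp(common - (2 * k * x + 2 * k ** 2) / n)
    upper = prefactor * exp(common)
    return lower, upper

for n in (10, 50, 150):                      # kept moderate to avoid floating-point overflow
    lo, hi = stirling_bounds(n)
    assert lo <= factorial(n) <= hi

for (n, x, k) in [(1000, 20, 10), (500, 0, 5), (2000, 100, 30)]:
    lo, hi = binom_bounds(n, x, k)
    value = comb(n + x, k)
    assert lo <= value <= hi, (n, x, k)
    print(f"n={n}, x={x}, k={k}:  {lo:.3e} <= C(n+x, k) = {float(value):.3e} <= {hi:.3e}")
print("Stirling and binomial bounds verified on all test cases")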
http://arxiv.org/abs/2307.04267v2
20230709214158
Phase transitions in sampling and error correction in local Brownian circuits
[ "Subhayan Sahu", "Shao-Kai Jian" ]
quant-ph
[ "quant-ph", "cond-mat.stat-mech" ]
quantikz *theoremTheorem [ [email protected] Institute for Theoretical Physics, Waterloo, Ontario N2L 2Y5, [email protected] of Physics and Engineering Physics, Tulane University, New Orleans, Louisiana, 70118, USA We study the emergence of anticoncentration and approximate unitary design behavior in local Brownian circuits. The dynamics of circuit averaged moments of the probability distribution and entropies of the output state can be represented as imaginary time evolution with an effective local Hamiltonian in the replica space. This facilitates large scale numerical simulation of the dynamics in 1+1d of such circuit-averaged quantities using tensor network tools, as well as identifying the various regimes of the Brownian circuit as distinct thermodynamic phases. In particular, we identify the emergence of anticoncentration as a sharp transition in the collision probability at log N timescale, where N is the number of qubits. We also show that a specific classical approximation algorithm has a computational hardness transition at the same timescale. In the presence of noise, we show there is a noise-induced first order phase transition in the linear cross entropy benchmark when the noise rate is scaled down as 1/N. At longer times, the Brownian circuits approximate a unitary 2-design in O(N) time. We directly probe the feasibility of quantum error correction by such circuits, and identify a first order transition at O(N) timescales. The scaling behaviors for all these phase transitions are obtained from the large scale numerics, and corroborated by analyzing the spectrum of the effective replica Hamiltonian. Phase transitions in sampling and error correction in local Brownian circuits Shao-Kai Jian August 12, 2023 ============================================================================= § INTRODUCTION Random quantum circuits (RQC) play a pivotal role in both quantum dynamics theory and quantum information theory, offering insights into fundamental aspects such as quantum chaos, out-of-time correlation functions, and entanglement entropy <cit.>. Closely related to RQC, random tensor network serves as a valuable tool for investigating the AdS/CFT correspondence, a theory aiming to understand quantum gravity through quantum entanglement <cit.>. Additionally, random quantum circuits find extensive applications in quantum information theory, including quantum advantage <cit.>, quantum error correction <cit.>, etc. Random circuits are expected to be a toy model capturing the following properties of generic quantum circuits: they output states of high complexity and generate maximal entanglement between initially disconnected regions. An important question is characterizing the time (depth) required for achieving the high complexity. How do we characterize the complexity of random circuits? In this work, we focus on two distinct features: anticoncentration and unitary design. Consider a circuit C acts on an initial simple state (the product state of |0⟩ on all qubits) and the output state is measured in the computational basis to obtain a distribution over measurement outcomes, p_C(s) = |⟨s|C|0⟩|^2. Anticoncentration is the property that p_C(s) is well spread over all bitstrings s. Certifying that the circuit is anticoncentrated is crucial in guaranteeing that the RQC simulation is classically hard, and is a promising route towards demonstrating quantum advantage <cit.>. At long enough depths, RQC has a stronger notion of complexity: it becomes an approximate unitary design. 
A unitary ensemble is said to be k-design if it approximates a global Haar random unitary in its first k moments. In particular, ensuring that a RQC has achieved the 2-design property is enough for the RQC to be maximally decoupling. Consider a system A, initially maximally entangled with a reference R, is subjected to a circuit C, before being coupled to an environment E. The initial encoding via C is said to have the decoupling property if the joint density matrix on R∪ E is approximately factorizable ρ_RE≈ρ_R⊗ρ_E. This can also be associated with the RQC dynamically generating a quantum error correcting code <cit.>. Several avenues of research on RQC have established that anticoncentration and unitary design occur at parametrically distinct timescales. Suppose we consider circuits with spatial local connectivity in d dimensions. Past research has shown that ensembles of RQC with Haar random local gates achieve anticoncentration and unitary design in O(log N) <cit.> and O(N^1/d) <cit.> timescales, respectively, where N denotes the number of qubit. Note that both anti-concentration and 2-design property are diagnosed by non-linear properties of the quantum state generated by the circuits. This makes numerically simulating these properties for local Haar random circuits hard and limited to modest system sizes and for short times. Hence, much of the research on RQC has depended on proving analytical bounds, classically simulable Clifford circuits, and perturbations around semi-classical limits, such as large local Hilbert space dimensions. In this work, we provide a minimal model which allows us to do efficient and guaranteed numerical simulation of the quantum informational quantities probing anticoncentration and 2-design property of large-sized random circuits using tensor network technology. We take the approach of directly representing the informational quantities averaged over the circuit ensemble as a linear observable in a replicated Hilbert space. Here, replicas are simply exact copies of the original system, and the informational observables probe the correlation between different replicas. We study a particular ensemble of RQC, namely local Brownian circuits <cit.>. These Brownian qubit models can be defined in any graph where each vertex hosts a qubit, with nearest neighbor Brownian interaction generating the unitary evolution. Remarkably, the real-time evolution of circuit averaged non-linear observables of the density matrix can now be realized as imaginary time evolution in the replica space. The Hilbert space for k replicas is simply the combination of a forward contour and a backward contour for real-time evolution for each replica; so the local Hilbert space encompasses 2k spins. After averaging of Brownian couplings, the quantum dynamics reduces to a Hermitian replica qubit Hamiltonian with the same locality properties as the initial interaction graph. This model not only establishes a clear mapping between various quantum information quantities and those of a quantum spin model, but also transforms the problem of quantum dynamics into a thermodynamic problem. Furthermore, imaginary time evolution with local Hamiltonians is guaranteed to be efficient in 1d using simple matrix product state and Time Evolving Block Decimation (TEBD) algorithms <cit.>. This allows us to perform large-scale simulations of these informational quantities in 1+1 dimensional circuit. 
As an example, we can simulate the averaged Rényi-2 entanglement properties of a Brownian circuit on N∼ O(100) qubits for t∼ O(N) depths in a few minutes on a standard laptop. The effective Hamiltonian approach also provides a statistical mechanical description of different regimes of RQC as distinct `phases', separated by phase transitions. These phases can be described within a generalized Landau framework involving multiple replicas, where the relevant symmetry is the replica permutation symmetry <cit.> (when we introduce multiple identical copies of the system, they can be permuted amongst each other without changing the effective description). Specifically, in the two replica scenario that we focus on in this work, the effective Hamiltonian has a ℤ_2 symmetry corresponding to a relative swap between the two real-time contours, which turns out to be the relevant symmetry for non-linear observables of the density matrix. This effective Hamiltonian is essentially a ℤ_2 Ising model in the replica space, and the phases of quantum information and their phase transitions are associated with the various phases and critical properties of this Ising model. Using large scale numerics of the Brownian circuit model we can directly probe the dynamical properties of the quantum informational quantities, and identify the saturation to anticoncentration (at ∼log N depth) and 2-design property (at ∼ N depth) of the RQC as sharp transitions, confirmed by careful finite-size scaling of the numerical data. This can be understood analytically by investigating the spectral properties of the effective Hamiltonian. The anticoncentration transition can also be directly associated with a transition in the computational hardness of classically simulating the output distribution. To show this, we show that a specific algorithm for simulating the output distribution <cit.> undergoes a hardness transition in ∼log N depth. In order to study the 2-design transition, we focus on investigating the feasibility of the Brownian circuit as a quantum error-correcting code, by directly simulating a quantity akin to the mutual information between the reference and environment in the decoupling setup, named `Mutual Purity' <cit.>. The mutual purity is a 2-replica quantity, and has recently been shown to provide a bound for the error correction capabilities of RQC in <cit.>. We show that the mutual purity undergoes a first order transition in O(N) time, after which the Brownian circuit approximates the global Haar random unitary for coding purposes. This coding transition is a first order pinning transition, driven by boundary conditions determining the mutual purity, akin to <cit.>. Furthermore, the mutual purity contributes to a bound for the failure probability for correcting errors after the encoding by the RQC. By numerically computing the mutual purity for different error models after the 2-design transition, we can also find a first order threshold transition for the code distance. As mentioned earlier, sampling of RQC outcome states is one of the most promising routes towards demonstration of quantum advantage in near term quantum devices <cit.>. However, real quantum devices suffer from noise. In order to benchmark that the noisy quantum device, an estimate of the fidelity of the output state is desirable. One proposal for an efficient estimate for the fidelity is the linear cross-entropy benchmark χ_XEB, and a high score in this benchmark suggests that the RQC simulation is classically hard <cit.>. 
However, it has recently been understood that with local noise models, there is a noise-induced phase transition (NIPT) in the linear cross entropy benchmarking <cit.>. In the weak noise regime, χ_XEB provides a reliable estimate of fidelity, and in the strong noise regime, it fails to accurately reflect fidelity. Furthermore, this implies that in the strong noise regime, classical simulation can yield a high score in the cross-entropy benchmark <cit.>, without necessarily solving the sampling task. The noise model can be incorporated in our Brownian circuit setup, where the noise serves as an explicit replica-permutation symmetry breaking field <cit.>. Using a combination of numerical and analytical tools, we characterize the NIPT in benchmarking by identifying it as a first order phase transition in the effective Hamiltonian picture. §.§ Main results and outline of paper We first briefly summarize the results of the paper. The main results of the paper are represented in Fig. <ref> and Table <ref>. * Anticoncentration: We probe anticoncentration in the 1+1d Brownian circuit U by computing the `collision probability' <cit.>, defined as the circuit averaged probability that two independent samples of the RQC (acting on the |0^⊗ N⟩ state of N qubits) produce the same result, defined as Z = 𝔼∑_x|⟨x|U|0^⊗ N⟩|^4, where the averaging ∼𝔼 is done over all realizations of the circuit. In the context of the effective Ising Hamiltonian (H_eff) description, we demonstrate that Z equates to the transition probability between an imaginary-time evolved state from the initial state and a quantum paramagnetic state (defined in a later section). The imaginary time evolution gradually projects the initial state onto the ground state of H_eff, which corresponds to Z∼ 2^-N. However, in finite time t, excited states contributions result in Z = 2^-N + S_Δ e^- Δ t, where Δ (S_Δ) denotes the energy gap (entropy) of the excitation [In a one-dimensional chain with local couplings, the elementary excitation manifests as a domain wall, with a finite gap independent of the system size and an entropy proportional to the system size]. The anticoncentration transition, thus occurs at t = 1/Δlog S_Δ∼log N, representing a depth-induced computational transition. The log N results arise from the nature of the elementary excited states of the Ising model, and can be confirmed by direct large scale simulation of the imaginary time evolution. * Computational Hardness transition: We probe the computational hardness of classically simulating the probability distribution in the measurement outcome in the earlier setup, i.e. p_U(x) = |⟨x|U|0^⊗ N⟩|^2. By studying the Rényi-2 version of conditional mutual information (CMI) of p_U(x) using numerics of the imaginary time evolution, we probe the hardness of a specific classical algorithm (`Patching algorithm') for approximately simulating the output distribution as introduced in <cit.>. We find that the CMI undergoes a phase transition at O(log N) time, with the same scaling behavior as the collision probability, signalling a computational hardness phase transition at the same depth. * Phase transition in cross-entropy benchmarking of noisy Brownian circuits: Here we consider the following setup of two copies of the Brownian circuit, one that is affected by noise (denoted by the noisy channel 𝒩), and the other copy undergoes the noise-free Brownian circuit. We can now update the effective Hamiltonian with explicit noise in one of the replicas, H_eff→ H_eff^'. 
We can compute the fidelity F = [𝒩(ρ)ρ] of the noisy simulation by doing imaginary time evolution with H_eff^' (with local noise models, H_eff^' remains local). We also compute the linear cross entropy benchmark, defined as χ_XEB = 2^N ∑_x p(x) q(x) - 1, where p(x) and q(x) represent the output distribution in the noise-free and the noisy cases respectively <cit.>. In H_eff^', noise explicitly breaks the Ising symmetry and subsequently pins the Ising spins. Consider a local (unital) noise model, with λ strength for each qubit (to be explicitly defined later). Noise generically undermines the ferromagnetic phase that leads to anticoncentration, and leads to erosion of the quantum advantage. This holds true for constant rate noise λ∼ O(1). Through the mapping to the quantum Ising model, we discover that noise behaves as a relevant perturbation with a scaling dimension of one. Therefore, when the noise rate scales inversely with respect to the size of the chain, λ∼ 1/N, we get a noise-induced computational transition at some critical λ^*∼ O(1/N). This transition essentially resembles a field-induced first-order transition and conforms to finite size scaling with ν = 1/2, which we confirm numerically. Moreover, if the rate scales less (greater) than 1/N, the noise is deemed irrelevant (relevant). This result is consistent with recent results on Noise-induced phase transitions in cross entropy benchmarking <cit.>. We also study whether this transition signals a transition in the computational hardness in the simulation of noisy Brownian circuits. By studying the Rényi-2 CMI of p_𝒩(U)(x), we find that it does not undergo a hardness transition with depth for large enough depths, and actually exponentially decays with time. This suggests that the 1+1d noisy random circuits are efficiently simulable in the long-time limit, even in the presence of infinitesimal scaled noise. * Coding transitions: We encode some local information (a reference qubit R) in the entire system A using the Brownian circuit, and probe the effectiveness of this encoding as a quantum error correcting code. After encoding, the state on A is affected by noise, which can be identified as a unitary coupling with the environment E. Mutual purity ℱ_RE <cit.> is a two replica quantity which upper bounds the trace distance between the initial encoded state and the error affected encoded state after error correction using a recovery channel <cit.>. From the effective Hamiltonian perspective, the mutual purity can be represented as a transition probability between two ferromagnetic states. In particular, we find that at short times the mutual purity decays exponentially, which can be identified with domain wall configurations pinned between the initial and final states; while after t ∼ O(N) time the domain walls get depinned, resulting in the saturation of the Mutual Purity to a global Haar value, i.e. realizes an approximate 2-design. Using large-scale numerics, we are able to directly probe this transition to a 2-design as a first order depinning transition. Furthermore, since the mutual purity determines the feasibility of error correction after the application of noise, we find a first order threshold transition in the fraction of qubits which are affected by noise. The critical fraction can be identified as a lower bound for the `code distance' of the Brownian circuit as a quantum error correcting code. The paper is organized as follows. 
Section <ref> presents an introduction to the Brownian circuit model and a derivation of the effective Hamiltonian for k=2 replicas. In section <ref>, we describe the symmetries of the effective Hamiltonian, and provide heuristic description of the phase diagrams. In section <ref> we discuss the anticoncentration and computational hardness transition. In section <ref> we investigate the noise-induced phase transition in benchmarking noisy Brownian circuits. In section <ref> we study the error-correcting properties of the Brownian circuit and probe the transition to an approximate 2-design. We conclude by discussing the implications of this work and future directions in section <ref>. § LOCAL BROWNIAN CIRCUITS We consider a Brownian circuit on N qubits in a chain, with the Hamiltonian H_t = ∑_⟨ i,j⟩^N∑_α,β J_t,ij^αβ σ_i,ασ_j,β, where α, β label the Pauli indices of the local Pauli matrices σ_i, interacting between nearest neighbor pairs ⟨ i,j ⟩. J_t,αβ is a normal random variable uncorrelated in time, defined via the following properties 𝔼[J_t,ij^αβ] = 0 𝔼[J_t,ij^αβJ_t^',ij^α^'β^'] = Jδ_tt^'/δ tδ_αα^'δ_ββ^'. 𝔼 denotes the average according to the distribution. §.§ Effective Hamiltonian description We integrate over the random couplings to get an effective Hamiltonian in the replica space. To this end, let's first consider a unitary evolution of a density matrix, ρ' = U ρ U^†. Explicitly writing out the indices, it is ρ'_a'b' = ∑_a,b U_a' aρ_ab U^†_bb' = ∑_a,b U_a' a U^∗_b'bρ_ab. In the second term, we transpose U^†, and use the fact that (U^†)^T = U^∗. Viewed as a tensor, the time evolution can be expressed by an operator U⊗ U^∗ acting on a state ∑_abρ_ab |a⟩⊗ |b ⟩. This is essentially the Choi–Jamiołkowski isomorphism (the operator-state mapping) <cit.>. Now we can extend this to two replicas. Since most of our discussion is focused on two replicas, we derive an effective Hamiltonian for two replicas. Notice that it is straightforward to generalize the derivation to k replicas and to arbitrary number of qubits on each node <cit.>. Because the random couplings at different time are uncorrelated, the central quantity is the instantaneous time evolution (for a small time interval δ t) operator for the four contours, U_1(δ t) ⊗ U_2(δ t)^∗⊗ U_3(δ t) ⊗ U_4(δ t)^∗, where U_a, a=1,2,3,4 denotes the unitary evolution operator generated by the Brownian spin Hamiltonian U_a(δ t) = e^-i δ t H_t,a acting on the four Hilbert spaces. It includes two replicas, each of which contains a forward contour a=1,3 and a backward contour a=2,4. The complex conjugate is due to the Choi–Jamiołkowski isomorphism, as demonstrated above. The average over the random coupling reads 𝔼[ U_1(δ t) ⊗ U_2(δ t)^∗⊗ U_3(δ t) ⊗ U_4(δ t)^∗] = ∫ DJ P[J] exp( ∑_a (- i)^aδ t ∑_⟨ i,j⟩∑_α,β J_t,ij^αβτ_i,a^ατ_j,a^β), where τ_i,1^α = τ_i,3^α = σ_i^α, and τ_i,2^α = τ_i,4^α = (σ_i^α)^∗, α = 1,2,3. Here σ^α denotes the Pauli matrix, and the complex conjugate for the a=2,4 contour is due to the backward evolution. DJ = ∏_⟨ i, j ⟩∏_α, β dJ_t,ij^αβ, and P[J] denotes the Gaussian distribution specified by (<ref>). Integrating over the random couplings results in an effective Hamiltonian, 𝔼[ U_1(δ t) ⊗ U_2(δ t)^∗⊗ U_3(δ t) ⊗ U_4(δ t)^∗] = e^-δ t H_eff, with H_eff = J/2∑_⟨ i,j⟩∑_a,b (-1)^a+b (τ⃗_i,a·τ⃗_i,b) (τ⃗_j,a·τ⃗_j,b). This Hamiltonian describes a spin chain with four spins per site, denoted by τ⃗_i,a, a=1,2,3,4. 
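The Gaussian averaging that produces H_eff can be checked directly on a single bond, where all replicated operators fit in memory as dense matrices. The following numpy sketch is our own illustration (two sites, J = 1, an arbitrary contour ordering, and modest Monte-Carlo settings): it builds the bond Hamiltonian as a sum of squares, confirms that it is positive semidefinite, and compares the Brownian average of one replicated time step against exp(-δt H_eff). The last check is statistical and only accurate at the few-percent level with these settings; it improves with more samples and smaller δt.

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
paulis = [np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]], dtype=complex),
          np.array([[1, 0], [0, -1]], dtype=complex)]

def kron_all(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def embed(op, contour, site):
    # 4 contours x 2 sites = 8 qubit factors, ordered contour-major
    ops = [np.eye(2, dtype=complex)] * 8
    ops[2 * contour + site] = op
    return kron_all(ops)

# effective Hamiltonian for a single bond (J = 1), written as a sum of squares
J = 1.0
H_eff = np.zeros((256, 256), dtype=complex)
for al in range(3):
    for be in range(3):
        M = np.zeros((256, 256), dtype=complex)
        for a in range(4):                      # contours; even index = forward, odd = backward (conjugated)
            s = paulis[al] if a % 2 == 0 else paulis[al].conj()
            t = paulis[be] if a % 2 == 0 else paulis[be].conj()
            M += (-1) ** a * embed(s, a, 0) @ embed(t, a, 1)
        H_eff += (J / 2) * (M @ M)

# check 1: H_eff is positive semidefinite (sum of squares), with zero modes
print("minimum eigenvalue of H_eff:", round(np.linalg.eigvalsh(H_eff).min(), 8))

# check 2: the Brownian average of one replicated time step reproduces exp(-dt H_eff)
dt, nsamples = 0.05, 4000
acc = np.zeros((256, 256), dtype=complex)
for _ in range(nsamples):
    Jc = rng.normal(0.0, np.sqrt(J / dt), size=(3, 3))
    Hstep = sum(Jc[a, b] * np.kron(paulis[a], paulis[b]) for a in range(3) for b in range(3))
    U = expm(-1j * dt * Hstep)
    acc += kron_all([U, U.conj(), U, U.conj()])
acc /= nsamples
target = expm(-dt * H_eff)
print("relative deviation of E[U x U* x U x U*] from exp(-dt H_eff):",
      round(np.linalg.norm(acc - target) / np.linalg.norm(target), 3))

In the actual calculations the same imaginary-time evolution is of course carried out with TEBD on matrix product states, which avoids the exponential cost of this dense construction.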
In the following, we will see that this Hamiltonian can describe various information phases and phase transitions, such as dynamical computational transition, error correcting transition, etc. Similarly, for a finite time evolution, we have 𝕌≡𝔼[ U_t⊗ U_t^∗⊗ U_t⊗ U_t^∗] = e^-H_eff t. Here we use U_t = e^-∫ dt H_t to denote the unitary generated by the Brownian circuit for a time interval t. §.§.§ Numerical Implementation We simulate imaginary time evolution in the replica Hilbert space using the TEBD algorithm. The local Hilbert space is ℂ_2^⊗ 4, reflecting the two replicas and two time contours per replica. To simulate exp(- t H_eff) we now need to Trotterise the TEBD evolution with Δ t as the time step, we take the energy-scale J = 1/Δ t. This ensures that Δ t · H_eff is dimensionless with the energy scale set to 1, and with the evolved time t as non-negative integers. All calculations are performed using the TeNPy Library <cit.>. §.§ Replica permutation symmetry The Hamiltonian Eq. <ref> is invariant under replica re-labelings, and has the symmetry group, (S_2× S_2)⋊ℤ_2, where S_2× S_2 is the permutation group on the two replica labels. The outer ℤ_2 arises from the symmetry of shuffling between the two time-conjugated copies after taking the complex conjugation [Note that since the variance of coupling is independent of α = x,y,z, the resulted Hamiltonian also enjoys a SU(2) symmetry for each site. But our results do not rely on this symmetry.]. Put simply, each of the S_2 transformation swaps τ_i,1^α↔τ_i,3^α or τ_i,2^α↔τ_i,4^α, whereas the ℤ_2 exchanges τ_i,1^α↔τ_i,2^α and τ_i,3^α↔τ_i,4^α simultaneously. It is easy to see that the Hamiltonian can be brought into a sum of squares, H_eff = J/2∑_⟨ i, j ⟩∑_α,β( ∑_a (-1)^a τ_i,a^ατ_j,a^β)^2. Therefore, the eigenvalues are no less than zero. Two ground states are |id⟩⟩^⊗ N and |swap⟩⟩^⊗ N, where |id⟩⟩ = 1/2(|0000 ⟩ + |0011 ⟩ + |1100 ⟩ + |1111 ⟩ ), |swap⟩⟩ = 1/2(|0000 ⟩ + |1001 ⟩ + |0110 ⟩ + |1111 ⟩ ). Here, we use |0> and |1> to denote ± eigenstates of the σ_z Pauli operator. The name of the state indicates that | id> ⟩ is a product of an EPR state of the first and second spins and an EPR state of the third and fourth spins, and | swap> ⟩ is a product of an EPR state of the first and fourth spins and an EPR state of the second and third spins. Using the properties of EPR pairs, namely, τ_1 =τ_2, τ_3 =τ_4, τ_1 =τ_4, τ_2 =τ_3 we can see that every square in the Hamiltonian vanishes, so that these two states are ground states with zero energy. The permutation symmetry is spontaneously broken by the ground state, and when the low-energy physics is concerned, our model is essentially equivalent to an Ising model. Notice that we can organize the permutation transformation such that one of them permutes the second and the fourth spins (we denote this by S_2^r: τ_i,2^α↔τ_i,4^α), while the other permutes both the first and the third spins as well as the second and the fourth spins. Then S_2^r can transform one ground state to the other, and only S_2^r is spontaneously broken. Our model Eq. <ref> transforms the real-time evolution along the four contours into an imaginary-time evolution that progressively projects onto the ground state subspace of the Hamiltonian described in Eq. <ref>. This imaginary-time evolution allows us to capture the dynamics of several important quantum information quantities. 
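Before turning to those quantities, the EPR-pair identities invoked in the argument above can be confirmed directly at the level of a single replica site. A minimal numpy sketch (our own illustration, independent of the TeNPy implementation):

import numpy as np

paulis = [np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]], dtype=complex),
          np.array([[1, 0], [0, -1]], dtype=complex)]

# |id>> pairs contours (1,2) and (3,4); |swap>> pairs contours (1,4) and (2,3)
state_id = np.zeros(16, dtype=complex)
state_swap = np.zeros(16, dtype=complex)
for q1 in range(2):
    for q2 in range(2):
        for q3 in range(2):
            for q4 in range(2):
                idx = 8 * q1 + 4 * q2 + 2 * q3 + q4
                state_id[idx] = 0.5 if (q1 == q2 and q3 == q4) else 0.0
                state_swap[idx] = 0.5 if (q1 == q4 and q2 == q3) else 0.0

def tau(alpha, contour):
    """tau^alpha on one replica site (4 contour qubits); contours 2 and 4 carry sigma*."""
    op = paulis[alpha] if contour in (1, 3) else paulis[alpha].conj()
    ops = [np.eye(2, dtype=complex)] * 4
    ops[contour - 1] = op
    out = np.array([[1.0 + 0j]])
    for o in ops:
        out = np.kron(out, o)
    return out

# EPR identities used in the text: on |id>>, tau_1 = tau_2 and tau_3 = tau_4;
# on |swap>>, tau_1 = tau_4 and tau_2 = tau_3 (as operators acting on the state).
for alpha in range(3):
    assert np.allclose(tau(alpha, 1) @ state_id, tau(alpha, 2) @ state_id)
    assert np.allclose(tau(alpha, 3) @ state_id, tau(alpha, 4) @ state_id)
    assert np.allclose(tau(alpha, 1) @ state_swap, tau(alpha, 4) @ state_swap)
    assert np.allclose(tau(alpha, 2) @ state_swap, tau(alpha, 3) @ state_swap)
print("EPR identities verified; every square in H_eff therefore annihilates |id>>^N and |swap>>^N")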
One such quantity is the collision probability, which measures the degree of anticoncentration and corresponds in the replica model to the overlap between the time-evolved state and a final state (to be specified later). The magnitude of this overlap is determined by the excitation gap present in the Hamiltonian Eq. <ref>. In one-dimensional systems, the elementary excitation takes the form of domain walls, which possess a finite energy gap and exhibit logarithmic entropy. As a result, the process of anticoncentration requires a timescale proportional to log N, where N represents the system size. §.§ Effective Hamiltonian with local noise Since we would also like to investigate the effect of quantum noise, we now consider imperfect time evolution due to the presence of quantum errors. The unitary time evolution operators are replaced by a quantum channel. A local depolarization channel is given by ρ→ (1-λ) ρ + (λ/3) ∑_{α=1,2,3} σ_i^α ρ σ_i^α, where 0 ≤ λ < 3/4 for complete positivity. Using the operator-state mapping, this can be mapped to 𝒩_i^depol(λ) = (1-λ) I^⊗ 2 + (λ/3) ∑_{α=1,2,3} σ_i^α ⊗ (σ_i^α)^∗, where I denotes the identity operator. The noise can induce a transition of random circuit sampling <cit.>. An observable of such a transition is the cross-entropy benchmarking (XEB), which we will describe in detail later. For now, let us just mention that the XEB involves two distributions: one from a noiseless quantum circuit, and the other from a noisy quantum circuit. Therefore, we are again concerned with only two replicas. Without loss of generality, we assume the noisy replica is described by the first two contours a = 1, 2 and the noiseless replica is described by the last two contours a = 3, 4. We promote the quantum channel to 𝒩_i^depol(λ) → 𝒩_i^depol(λ) ⊗ I ⊗ I. The identity operators on the last two copies of the Hilbert space appear because the second replica is noiseless; thus, the noise acts on the first replica, i.e., on the first and second copies of the Hilbert space. It is not hard to see that the channel can be equivalently described by a perturbation given by the following effective Hamiltonian, H_depol(λ) = (3/(4δ t)) log(1/(1 - 4λ/3)) ∑_i (1 - (1/3) ∑_α τ_{i,1}^α τ_{i,2}^α). Here, we assume the noise occurs at each site with the same strength. Since 0 ≤ λ < 3/4 for the depolarizing channel, the prefactor is positive. Essentially, the perturbation explicitly breaks the permutation symmetry. The state |id⟩⟩^⊗ N is still an eigenstate of this perturbation with eigenvalue zero, whereas the state |swap⟩⟩^⊗ N acquires a finite positive energy, i.e., ⟨⟨ swap|^⊗ N H_depol(λ) |swap⟩⟩^⊗ N = (3N/(4δ t)) log(1/(1 - 4λ/3)). Therefore, the presence of noise effectively lifts the degeneracy between the two ground states and biases the system towards the state |id⟩⟩^⊗ N. In the regime of low-energy physics, the noise can be treated as an external field that explicitly breaks the Ising symmetry and favors a particular state. For notational simplicity, we denote the local Zeeman energy by ϵ. For the depolarization channel, the effective Zeeman energy is ϵ = (3/(4δ t)) log(1/(1 - 4λ/3)). When λ is small, we can deduce that ϵ ≈ λ/δ t. Note that λ is dimensionless and 1/δ t sets the unit of energy. In the symmetry-breaking phase, the external field acts as a relevant perturbation with a scaling dimension of one. Consequently, a first-order transition occurs at an infinitesimally small noise strength, which is independent of the system size.
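Before turning to how this conclusion changes when the noise is scaled with system size, we note a quick stand-alone consistency check of the channel-to-Hamiltonian correspondence above: for a single site, the replica superoperator of the depolarization channel is reproduced exactly by exp(-δ t H_depol) restricted to that site. The following sketch is our own illustration, with the arbitrary values λ = 0.2 and δ t = 1.

import numpy as np
from scipy.linalg import expm

paulis = [np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]], dtype=complex),
          np.array([[1, 0], [0, -1]], dtype=complex)]

lam, dt = 0.2, 1.0                                     # illustrative noise strength (lambda < 3/4) and time step
A = sum(np.kron(p, p.conj()) for p in paulis)          # sum_alpha sigma^alpha x (sigma^alpha)*
channel = (1 - lam) * np.eye(4) + (lam / 3) * A        # N_i^depol as a superoperator on the two noisy contours
H_site = (3 / (4 * dt)) * np.log(1 / (1 - 4 * lam / 3)) * (np.eye(4) - A / 3)
print("max deviation |N_depol - exp(-dt H_depol)|:", np.abs(channel - expm(-dt * H_site)).max())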
However, if the noise strength is appropriately scaled down by a factor of 1/N, which compensates for its relevant scaling dimension, the transition can take place at a finite, size-dependent noise strength given by ϵ∼ 1/N. This noise-induced transition also manifests as a first-order phase transition. The first-order transition exhibits a finite size scaling, and is distinguished from a second-order transition <cit.>. We will confirm it via a systematic finite size scaling. § ANTICONCENTRATION AND COMPUTATIONAL HARDNESS OF SAMPLING BROWNIAN CIRCUITS §.§ Anticoncentration It is well-known that random circuits generate output states that are anti-concentrated, which roughly means that the probability distribution of the classical bitstrings generated by measuring the output state of a random circuit in the computational basis, is well spread out and not concentrated on a few bit-strings. Naturally, this also implies that classical sampling of these bitstrings will be hard. Two key ingredients underpin random circuit sampling. Firstly, anticoncentration asserts that the distribution deviates only slightly from a uniform distribution. This property is typically required in hardness proofs. However, anticoncentration can be easily attained by applying a Hadamard gate to all qubits. Therefore, we need the second ingredient, randomness, to eradicate any discernible structure in the circuit. Given that randomness is inherent in our model, we are intrigued by whether the distribution exhibits anticoncentration and, if so, at what time (depth) it occurs. This indicates a transition in computational complexity, wherein the system shifts from a region that is easily achievable by classical means to a region that becomes challenging for classical algorithms. When the random circuits are generated by a particular ensemble of local quantum gates, a key diagnostic of the complexity of the ensemble is the time it takes to anti-concentrate the output states. Concretely, we can compute the collision probability, which is defined as the probability that the measurement outcomes of two independent copies of the random circuit agree with each other, i.e. ∑_sp_U(s)^2, where p_U(s) = |⟨s|U|0⟩|^2, for a given bitstring s. We are interested in the ensemble averaged collision probability which can be readily expressed as transition amplitude in the replicated dynamics, Z = 𝔼_J∑_sp_U(s)^2 = ∑_s ⟨⟨ s^⊗ 4 | 𝕌 | 0^⊗ 4⟩⟩, where 𝕌 = 𝔼[ U_t⊗ U_t^∗⊗ U_t⊗ U_t^∗] can be represented by an imaginary time evolution with a replica Hamiltonian defined in Eq. <ref>. We identify the circuit to have reached anti-concentration if Z ≈ c 2^-N and to not have anti-concentration if Z≥ e^N^c2^-N for some O(1) constant c. In Fig. <ref> we study the averaged collision probability in a 1d Brownian circuit by the tensor network simulations. We find that the Brownian circuit anti-concentrates in logN depth, which is consistent with the fact that local Haar random circuits anti-concentrate in Ω(log N) depth in 1d <cit.>. Furthermore, in Fig. <ref>b, we show data collapse which is consistent with the following approximate form for the collision probability, 2^N Z = 2+ c_1e^-c_2(t-τ^*log N), for some O(1) constants c_1 and c_2. This expression can be justified by the effective Hamiltonian picture as follows. Because 𝕌 = e^-H t, with H given by Eq. <ref>, it effectively projects the initial state |0^⊗ 4>⟩ to the ground state 2^N 𝕌|0^⊗4>⟩≈| id>⟩^⊗ N + | swap>⟩^⊗ N + excitations. 
The leading contribution of excitations is given by a single domain wall (since we have used open boundary condition, a single domain wall is allowed), | DW_k > ⟩≈| swap>⟩^⊗ k⊗| id>⟩^⊗ (N-k), k=1,...,N-1. Therefore, the multiplicity of such an excitation is proportional to N. The excitation energy Δ, on the other hand, is a constant independent of N, and it contributes to an exponential function e^-Δ t. Therefore, according to this picture, the prediction for the collision probability reads 2^N Z ≈ 2 + N e^-Δ t = 2 + e^-Δ (t- 1/Δlog N), where we have noticed that ∑_s ⟨< s | id> ⟩ = ∑_s ⟨< s | swap> ⟩ =∑_s ⟨< s | DW_k > ⟩ = 1. This result is consistent with the data collapse. In particular, it is clear that the transition time log N is due to the entropy of the domain wall excitation. §.§ Hardness of classical simulation As a consequence of anticoncentration and randomness, classical simulation of the output probabilities of the Brownian circuits after log N depth is expected to be hard. In this section, we show that, with respect to a particular algorithm for approximate classical simulation, there is a computational hardness transition at t∼log N depth. We study the computational hardness of the Patching algorithm introduced in <cit.>. Heuristically, the algorithm attempts to sample from the marginal probability distribution of spatially separated patches, and then combine the results together. This succeeds in poly(N) time if the output distribution of the state generated by the circuit has decaying long-range correlations. Without going into the details of the algorithm itself, we study the condition on the long-range correlations for which the algorithm is expected to successfully sample from the output distribution. Consider a tripartition of N qubits into A∪ B∪ C, such that dist(A,C)≥ l. For the output probability distribution p_U(s) = |⟨s|U|0⟩|^2, we consider the conditional mutual information between the regions A and C conditioned on B, as in I(A:C|B)_p = S(AB)_p+S(BC)_p-S(B)_p-S(ABC)_p, where the S(A) refers to the entropy of the marginal distribution of p on the region A. The output distribution is defined to have f(l)- Markov property if I(A:C|B)_p≤ f(l). We quote the main Theorem about the condition for successful Patching algorithm from <cit.>: Patching algorithm succeeds in poly(N) time to sample from a probability distribution arbitrarily close in total variation distance to the exact output distribution p_U(s) of a quantum circuit on N qubits, if p_U(s) has e^-Ω(l) Markov property, for a suitable choice of the length-scale parameter l. In the local Brownian circuits introduced earlier, we can directly compute the averaged Rényi-2 version of the conditional mutual information (CMI) of the output distribution p_U(x), i.e. I^(2)(A:C|B)_p = S^(2)(AB)_p+S^(2)(BC)_p-S^(2)(B)_p-S^(2)(ABC)_p, as a function of time t. In Fig. <ref>a and b, we study the Rényi-2 CMI for an equal tripartition of the qubit chain (i.e. |A| = |B| = |C| = N/3), and find that there is a transition at log N depth. In particular, at long times, I^(2)(A:C|B)_p asymptotes to log 2, indicating long-range correlations in the output probability distribution. At short times, the data is consistent with I^(2)(A:C|B)_p∼ O(e^-N). There is, furthermore, a sharp transition at t∼τ^*log N at τ^*≈ 1.2. The data collapses as a function of t-τ^*log N as shown in Fig. <ref>b inset, indicating the same statistical mechanical interpretation as the collision probability. 
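To make these quantities concrete, both the collision probability and the Rényi-2 CMI of an output distribution can be evaluated by brute force for a toy circuit. The sketch below is our own illustration and not the replica calculation used in the figures: it takes a single instance of a small brickwork circuit of Haar-random two-qubit gates on 6 qubits as a stand-in for the Brownian circuit, rather than a circuit average.

import numpy as np

rng = np.random.default_rng(1)

def haar_unitary(dim):
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def apply_gate(state, gate, i, n):
    """Apply a two-qubit gate on adjacent qubits (i, i+1) of an n-qubit state vector."""
    psi = state.reshape(2 ** i, 4, 2 ** (n - i - 2))
    return np.einsum('ab,xby->xay', gate, psi).reshape(-1)

def renyi2(prob):
    """Renyi-2 entropy of a classical distribution."""
    return -np.log(np.sum(prob ** 2))

n, depth = 6, 8
psi = np.zeros(2 ** n, dtype=complex); psi[0] = 1.0
for layer in range(depth):
    for i in range(layer % 2, n - 1, 2):
        psi = apply_gate(psi, haar_unitary(4), i, n)

p = np.abs(psi) ** 2                          # output distribution p_U(s) of this instance
print("2^N * collision probability:", round(2 ** n * np.sum(p ** 2), 3))

p_grid = p.reshape((2,) * n)
def marginal(axes_keep):
    axes_sum = tuple(i for i in range(n) if i not in axes_keep)
    return p_grid.sum(axis=axes_sum).reshape(-1)

A, B, C = (0, 1), (2, 3), (4, 5)              # equal tripartition of the chain
cmi2 = (renyi2(marginal(A + B)) + renyi2(marginal(B + C))
        - renyi2(marginal(B)) - renyi2(marginal(A + B + C)))
print("Renyi-2 CMI I2(A:C|B):", round(cmi2, 3))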
Even though this result is for the Rényi-2 version of the CMI, and not the actual CMI itself, it provides evidence that the anticoncentration transition corresponds to an actual phase transition in computational hardness of classical estimation of the output probabilities of the random circuit. § NOISY BROWNIAN CIRCUITS Random circuit sampling is widely implemented in experiments to show quantum advantage. However, sufficiently large noise can diminish the quantum advantage. It was reported recently a noise induced phase transition in random circuit sampling <cit.>. For weak noise, the cross-entropy benchmarking provides a reliable estimate of fidelity. Whereas, for strong noise, it fails to accurately reflect fidelity. §.§ Cross-entropy benchmarking In the random circuit sampling, we start from a product state ρ_0 = |0>^⊗ N< 0|^⊗ N (The initial state does not really matter, and we choose this just for simplicity), and evolve the state using the Brownian spin Hamiltonian. For brevity, we denote the unitary generated by the Brownian spin model as U. In an ideal case, i.e., there is no noise, the final state is ρ = U ρ_0 U^†. A measurement is performed on the computational basis, and this will generate a probability distribution, p(s) = ⟨ s | ρ | s⟩, where s denotes the bit string. In a real experiment, the implementation of Brownian spin Hamiltonian is not ideal because errors can occur. In this case, the time evolution of the system is, in general, not unitary and should be described by a quantum channel, ρ_err = 𝒩 (ρ_0). Here, 𝒩 denotes the noise channel. The probability distribution for a bit string s is now given by q(s) = ⟨ s | ρ_err | s ⟩. We are interested in the cross entropy benchmarking (XEB), defined as follows, χ_XEB = 2^N ∑_s p(s) q(s) - 1, where p(s) is an ideal distribution (which in practice can be estimated by classical simulations), and q(s) is the probability distribution sampled from real experiments. Since the circuit involves Brownian variables, we consider the average over these random variables, 𝔼(χ_XEB). §.§ XEB in the replica model Using the operator-state mapping, ∑_s q(s) p(s) = ∑_s ⟨⟨ s | 𝒩⊗ U ⊗ U^∗ |0 ⟩⟩, where U is the unitary generated by the Brownian spin model, and 𝒩 denotes the channel generated by both the Brownian spin model and the errors. For simplicity, we will denote 𝔼[𝒩⊗ U ⊗ U^∗]= 𝕌_err. And the initial and final states are the same as in the collision probability. Actually, the collision probability is closely related to the noiseless XEB. Consider imperfect time evolution due to the presence of quantum errors. To this end, after integrating over the Brownian variable, we arrived at the imaginary-time evolution given by ∑_s ⟨⟨ s | 𝕌_err | 0 ⟩⟩ = ∑_s ⟨⟨ s | e^-(H+H'(λ)) t | 0 ⟩⟩, where H'(λ) is the perturbation caused by the noise. The example of dephasing and depolarizing channels are given by Eq. <ref>. The average XEB then reads 𝔼 [χ_XEB] = 2^N ∑_s ⟨⟨ s | e^-(H+H'(λ)) t | 0 ⟩⟩ - 1, On the other hand, the average fidelity is given by 𝔼 [F] = 2^N ⟨⟨swap |^⊗ N𝔼 [𝒩⊗ U ⊗ U^∗] | 0^⊗4⟩⟩ = 2^N ⟨⟨swap |^⊗ N e^-(H+H'(λ)) t | 0^⊗4⟩⟩. Comparing it with the XEB, we can see that the difference comes from the final state. As discussed before, the noise lifts the degeneracy and behaves as an external field. We denote the local Zeeman energy by ϵ. The Zeeman field is a relevant perturbation even in the symmetry breaking phase. 
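Before analyzing this competition in the replica language, a toy brute-force illustration of the two quantities being compared, χ_XEB and the fidelity F, may be helpful. The following sketch is our own example (4 qubits, brickwork Haar-random gates standing in for the Brownian evolution, single-qubit depolarizing noise of strength λ = 0.05 applied after each layer, and a single circuit realization rather than an ensemble average):

import numpy as np

rng = np.random.default_rng(2)
paulis = [np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]], dtype=complex),
          np.array([[1, 0], [0, -1]], dtype=complex)]

def haar_unitary(dim):
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def op_on(op, i, n, width):
    """Embed a `width`-qubit operator acting on qubits i..i+width-1 of n qubits."""
    return np.kron(np.kron(np.eye(2 ** i), op), np.eye(2 ** (n - i - width)))

def depolarize(rho, lam, i, n):
    out = (1 - lam) * rho
    for p in paulis:
        P = op_on(p, i, n, 1)
        out += (lam / 3) * P @ rho @ P
    return out

n, depth, lam = 4, 6, 0.05
psi = np.zeros(2 ** n, dtype=complex); psi[0] = 1.0
rho_err = np.outer(psi, psi.conj())
for layer in range(depth):
    for i in range(layer % 2, n - 1, 2):
        G = op_on(haar_unitary(4), i, n, 2)   # the same gate is applied to the ideal and the noisy copy
        psi = G @ psi
        rho_err = G @ rho_err @ G.conj().T
    for i in range(n):
        rho_err = depolarize(rho_err, lam, i, n)

p = np.abs(psi) ** 2                          # ideal distribution p(s)
q = np.real(np.diag(rho_err))                 # noisy distribution q(s)
xeb = 2 ** n * np.sum(p * q) - 1
fidelity = np.real(psi.conj() @ rho_err @ psi)
print(f"chi_XEB = {xeb:.3f},  fidelity F = {fidelity:.3f}")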
We will show that the competition between one of the lifted ground state and the excited state leads to a first-order transition at a finite noise rate ϵ N ∼ const. We will also perform a finite size scaling analysis to verify such a first-order transition in the following. We consider the evolution of XEB as a function of time. In the long-time limit, we expect the time-evolved state is a superposition of the ground state with a few low-lying excitations. It can be approximately written as 2^N 𝕌_err| 0^⊗ 4>⟩≈|id> ⟩^⊗ N + e^-N ϵ t| swap>⟩^⊗ N + e^-Δ t∑_k e^ -kϵ t|DW_k >⟩ + ∑_k e^-(2Δ + ϵ) t|SF_k > ⟩ where Δ is the local energy cost of a domain wall, and ϵ are the local energy cost and the Zeeman energy of a local spin flip. We have included both domain wall excitations and local spin flips, |SF_k >⟩ = |id> ⟩^⊗ k-1⊗|swap> ⟩⊗|id> ⟩^⊗ N-k. Note that the domain wall excitation can lead to an extensive energy cost, but we need to include them because the external field scales ϵ∼ 1/N. Therefore, the average XEB at late time is 𝔼[χ_XEB] = e^-N ϵ t + e^-Δ t∑_k=1^N-1 e^-k ϵ t + N e^-(2Δ + ϵ) t, On the other hand, the average fidelity is 𝔼[F] = e^-N ϵ t. Actually, the fidelity is lower bounded by 2^-N. This is because | id> ⟩^⊗ N and | swap> ⟩^⊗ N are orthogonal only at the thermodynamic limit N →∞. For a finite N, their overlap is (⟨< id|^⊗ N) ( | swap> ⟩^⊗ N) = 2^-N. It is clear that for the XEB to well estimate the fidelity, we require e^-N ϵ t≫ e^-Δ t. If we consider the ratio between them 𝔼 [F]/𝔼 [χ_XEB]≈1/1+ e^-Δ t + Nϵ t . To the leading order in N, there is a noise-induced phase transition at ϵ_c = Δ/N, separating between a weak noise phase, where the XEB well estimates the fidelity, and a strong noise phase, where they do not match. This is consistent with the scaling dimension analysis. §.§ Noise-induced transition In the short-time region, all kinds of excitations contribution to the XEB, and its evolution is non-universal. A crude estimate of the XEB is given as follows, 𝔼 [χ_XEB] ≈ (1 + e^-(2Δ + ϵ) t )^N - 1 . This estimate comes from the superposition of all possible spin flips at each site [For a more accurate estimate, we need to rescale N by a factor c_3 < 1. This is because spin flips do not interact with each other only when they are dilute enough. ]. Here Δ is the effective local energy cost of a spin flip. The XEB is exponential in the system size ∼exp[N e^-(2Δ + ϵ) t], but this behavior decays exponentially fast. Then the XEB will transition to the late-time behavior. In the long-time limit, since we are at the weak noise phase, we expect the XEB matches fidelity. To verify this, We plot the time evolution of XEB (solid curves) and fidelty (dahsed curves) in Fig. <ref> for a fixed noise rate. At the long-time limit, their evolution follows closely. The fact that the deviation is larger for a bigger N is because we have fixed ϵ. It is also clear that the XEB curves exhibit a crossover from a short-time non-universal region to a long-time universal region. In order to show the noise-induced phase transition in our replica model, we plot the time-evolution of the XEB for different noise rates in Fig. <ref>a. It is clear that when the noise rate is less than λ^∗≈ 0.84/N, the XEB tracks the fidelity very well. Here the fidelity is shown by a dashed curve. Note also that the fidelity has a lower bound given by 2^-N. Next, to connect this to the statistical mechanical model and implement a finite size scaling analysis, we consider scaling t ∼ N to feature an equal space-time scaling. 
The ratio between the fidelity and XEB is plotted in Fig. <ref>b for different system sizes. The crossing indicates a transition at λ^∗ N ≈ 0.84. The inset shows data collapse for different sizes as a function of (λ - λ^∗) N^2, which shows 1/ν = 2. To understand this exponent, we briefly review the finite size scaling at first-order phase transitions. The finite size scaling near a first-order phase transition is studied in Ref. <cit.>. We briefly repeat the argument here. In a classical Ising model in d dimensional cube with size L^d, the probability distribution of the magnetization P_L(s) in the ferromagnetic phase can be well approximated by a double Gaussian distribution P_L(s) ∝ e^-(s-M)^2 L^d/χ + e^-(s+M)^2 L^d/χ, here χ denotes the susceptibility, and M is the average magnetization. To incorporate the external field, notice that the probability distribution can be expressed as P_L(s) ∝ e^-f L^d, where f is the free energy density. From the Ising transition, the free energy is given by f = f_0 + r/2 s^2 + u/4 s^4 - sH = f'_0 + u/4(s^2-M^2)^2 - sH, where H denotes the external field, M = √(-r/u) is the average magnetization when r<0, and f_0, f'_0 are unimportant constants. If we approximate the magnetization around ± M, then the double Gaussian distribution reads P_L(s) ∝ e^- ((s-M)^2 - s χ H)L^d/χ + e^- ((s+M)^2 - s χ H)L^d/χ, where χ = -r. It is clear that the distribution will be shifted, and the one near s = M will be amplified. This probability distribution can serve as a starting point for finite size scaling analysis. The external field is equipped with scaling dimension L^-d, implying ν = 1/d. Now in our analysis, the Hamiltonian Eq. <ref> corresponds to a 1d quantum system or a 2d classical Ising model, which leads to ν = 1/2, consistent with our scaling data collapse in Fig. <ref>b. §.§ Hardness of simulating noisy Brownian circuits As we have described, the linear cross-entropy benchmark can be described in the 2-replica formalism, where the noise acts on only one of the replicas. In this section we briefly comment on the hardness of classical simulation of noisy Brownian circuits, by analysing the Rényi-2 conditional mutual information of the output distribution p(s) of the noisy circuit, as in Sec. <ref>. In this formulation the noise acts on both replicas. In Fig. <ref> we plot the Rényi-2 CMI as a function of time for two instances of weak and strong scaled local depolarization channels, with strength λ = μ/N with μ = 0.1, 2.0 respectively. The plots show that the CMI doesn't asymptote to log 2 as the noise-free case, and ultimately decays as e^-μ t without any signature of crossing. This suggests that in the long-time limit, even in the presence of scaled noise, the output distribution remains efficiently estimable using the Patching algorithm <cit.>. These numerical results provide evidence that the noise-induced phase transition in the linear cross-entropy benchmark does not signal a phase transition in the hardness of classical simulability of the output distribution of the noisy random circuits. In fact, in the presence of noise, 1+1d random circuits remain efficiently simulable by the Patching algorithm. § QUANTUM ERROR CORRECTING CODES FROM BROWNIAN CIRCUITS Random circuits scramble local information into global correlations of a state, in a way which is inaccessible to local probes. As a result of this, the encoded information can be protected from local noise, thereby leading to the notion of random circuits generating quantum error correcting codes <cit.>. 
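A toy illustration of this scrambling (our own example, with a single global Haar-random unitary standing in for the random circuit): two orthogonal codewords of an 8-qubit random encoding are perfectly distinguishable globally, yet essentially indistinguishable to any single-qubit probe.

import numpy as np

rng = np.random.default_rng(4)

def haar_unitary(dim):
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

n = 8                                         # one logical qubit scrambled into n physical qubits
U = haar_unitary(2 ** n)

def codeword(logical):
    ket = np.zeros(2 ** n, dtype=complex)
    ket[logical * 2 ** (n - 1)] = 1.0         # |logical> on the first qubit, |0...0> on the rest
    return U @ ket

def single_qubit_rdm(psi, i):
    t = np.moveaxis(psi.reshape((2,) * n), i, 0).reshape(2, -1)
    return t @ t.conj().T

psi0, psi1 = codeword(0), codeword(1)
print("global overlap |<psi0|psi1>|:", round(abs(np.vdot(psi0, psi1)), 12))
local_diff = max(np.abs(single_qubit_rdm(psi0, i) - single_qubit_rdm(psi1, i)).max() for i in range(n))
print("max difference between any single-qubit RDMs of the two codewords:", round(local_diff, 3))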
§.§ Decoupling by Random circuits The intuition as to why random circuits are able to dynamically generate a quantum error correcting code comes from the decoupling principle. Consider the setup in Fig. <ref>, where initial quantum information is initialized in the entangled state between reference R (code subspace) and part of the system A_1⊂ A, such that the dimensions match, |R| = |A_1|. Now A is subjected to an encoding through the random circuit U_enc. Suppose a part of the system A_4⊂ A is subjected to a noise channel 𝒩. By Stinespring dilation, the noise channel can be identified as a unitary coupling with an environment E, as shown in Fig. <ref>. If U_enc forms an approximate 2-design, the circuit is able to decouple effectively <cit.>, i.e., the environment E has bounded access to the information encoded in R. Concretely, let us consider local qubit degrees of freedom, such that the Hilbert space dimension of any set A is d_A = 2^|A|. Consider the isometric encoding V:ℋ_R→ℋ_A generated by the circuit U_enc, which transforms the basis vectors as follows, |ϕ_i⟩_A≡ V|i⟩_R = U_enc|i⟩_A_1|0⟩_A_2. Any density matrix ρ_R of R is encoded as Vρ_RV^†. Suppose the encoded state is now subjected to noise, resulting in the density matrix ρ_err = 𝒩(Vρ_RV^†). A convenient probe is the noise-affected encoding of a maximally entangled state between the code subspace R and A_1. By introducing an auxiliary environment E the effect of the noise channel can be represented by a unitary on the combined system and environment, A ∪ E, |Ψ^'⟩ = 1/√(d_R)∑_i = 1^d_R|i⟩_R U_err(|ϕ_i⟩_A|e_0⟩_E). Here, d_R refers to the Hilbert space dimension for R; if the local degrees of freedom are q dimensional qudits, then d_R = q^|R|. By the decoupling theorem, for U_enc which are approximate 2 designs and small enough error, we have a factorized reduced density matrix on R∪ E, ρ_RE^Ψ^'≈ρ_R^Ψ^'⊗ρ_E^Ψ^'. The time required by random circuits with locality to approximately form a 2 design is upper bounded by O(N^1/d) in d dimensions <cit.>. A probe of the extent of decoupling is the mutual information <cit.>, I_Ψ^'(R:E) = S(ρ^'_R)+S(ρ^'_E)-S(ρ^'_RE). A central theorem in quantum error correction is the existence of an optimal recovery channel ℛ that undoes the effect of noise ℛ(ρ_err) = ρ_R, if perfect decoupling has occurred, i.e. I_Ψ^'(R:E) = 0 <cit.>. This can be generalized to approximate error correction in the presence of approximate decoupling <cit.>. In particular, the trace distance between the recovered state by a near-optimal recovery channel ℛ, and any encoded state can be bounded by the mutual information computed for Ψ^', ||ℛ(ρ_err)-ρ_R||_1≤(I_Ψ^'(R:E))^1/4. §.§ Approximate error correction in Brownian circuits Recently <cit.> derived a similar bound as Eq. <ref>, with the right-hand side replaced by a different entropic quantity rather than the mutual information. Mutual information is difficult to analytically study because of the associated replica limit in the definition of the von Neumann entropy. They instead introduce the mutual purity of the noise-affected state in Eq. <ref>, which is defined as, ℱ_Ψ^'(R:E) = (ρ_RE^' 2 - ρ_R^' 2⊗ρ_E^' 2). They showed that for the same approximate recovery channel as <cit.>, the trace distance between the recovered state and the encoded state can be bounded by the mutual purity, ||ℛ(ρ_err)-ρ_R||_1≤ d_R^5/2d_E^1/2(ℱ_Ψ^'(R:E))^1/4. We provide a description of the recovery channel and the sketch of the proof of this bound in Appendix <ref>. 
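The decoupling statement and the mutual-purity bound above can be illustrated with a small brute-force computation. The following sketch is illustrative only (it is not the paper's code): one reference qubit is encoded into n qubits by a Haar-random unitary, single-qubit depolarizing noise acts on a fixed subset of qubits, and both I(R:E) and the mutual purity (read here as Tr ρ_RE'² − Tr ρ_R'² Tr ρ_E'²) are evaluated. The system size, noisy region, and noise strengths are placeholder values.

```python
# Minimal decoupling / mutual-purity demo (illustrative; placeholder sizes).
import numpy as np
from functools import reduce
from itertools import product

rng = np.random.default_rng(0)
I2 = np.eye(2, dtype=complex)
PAULIS = [np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]], dtype=complex),
          np.array([[1, 0], [0, -1]], dtype=complex)]

def haar_unitary(dim):
    q, r = np.linalg.qr(rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim)))
    d = np.diag(r)
    return q * (d / np.abs(d))

def entropy(rho):                                  # von Neumann entropy in bits
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())

def purity(rho):
    return float(np.trace(rho @ rho).real)

def depolarizing_kraus(n, sites, lam):
    """All tensor products of single-site depolarizing Kraus operators on `sites`."""
    k1 = [np.sqrt(1 - lam) * I2] + [np.sqrt(lam / 3) * P for P in PAULIS]
    ops = []
    for combo in product(range(4), repeat=len(sites)):
        mats = [k1[combo[sites.index(q)]] if q in sites else I2 for q in range(n)]
        ops.append(reduce(np.kron, mats))
    return ops

n, noisy_sites = 5, [3, 4]                         # placeholder system / noisy region
dA = 2 ** n
U_enc = haar_unitary(dA)
# |phi_i> = U_enc |i>_{A1} |0...0>_{A2}, with A1 taken as the most significant qubit.
phi = [U_enc[:, i * 2 ** (n - 1)] for i in range(2)]

for lam in (0.02, 0.30):
    kraus = depolarizing_kraus(n, noisy_sites, lam)
    # |Psi'> = (1/sqrt 2) sum_{i,m} |i>_R (E_m|phi_i>)_A |e_m>_E ; trace out A below.
    W = np.array([[E @ phi[i] / np.sqrt(2) for E in kraus] for i in range(2)])
    K = len(kraus)
    Xmat = W.reshape(2 * K, dA)                    # joint (R, E) index vs A index
    rho_RE = Xmat @ Xmat.conj().T
    rho4 = rho_RE.reshape(2, K, 2, K)
    rho_R = np.einsum('rmsm->rs', rho4)
    rho_E = np.einsum('rmrn->mn', rho4)
    I_RE = entropy(rho_R) + entropy(rho_E) - entropy(rho_RE)
    F_RE = purity(rho_RE) - purity(rho_R) * purity(rho_E)
    print(f"lam = {lam:.2f}:  I(R:E) = {max(I_RE, 0.0):.4f},  mutual purity = {max(F_RE, 0.0):.2e}")
# Weak noise should give much smaller I(R:E) and mutual purity than strong noise,
# i.e. the Haar-random encoding approximately decouples R from the environment.
```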
This bound can be computed using just a two-replica computation for local Brownian circuits in 1+1d with the imaginary TEBD protocol that we have introduced earlier. §.§ Numerical results in 1d Using the replicated Hilbert space formalism, we can represent the mutual purity for the 1+1d local Brownian circuit by the following expression, ℱ^Ψ^'_RE = ⟨⟨ψ_err|𝕌_enc|ψ_in⟩⟩ = ⟨⟨ψ_err|e^-t H_eff|ψ_in⟩⟩, where 𝕌_enc = 𝔼[U_enc⊗ U^†_enc⊗ U_enc⊗ U^†_enc], with appropriately defined states ψ_in, ψ_err in the replicated Hilbert space, given by |ψ_in⟩⟩ = (|swap⟩⟩ - 1/2|id⟩⟩)^⊗ A_1⊗|0000⟩⟩^⊗ A_2 and |ψ_err⟩⟩ = 2^N∑_m,n=0^d_E-1|E_m E_n^* E_n E_m^*⟩⟩^⊗ A. Notice that both |ψ_in⟩⟩ and |ψ_err⟩⟩ are not normalized. The operators E_m are non-unitary operators implementing the error on the system A, U_err|ψ⟩_A|e_0⟩_E = ∑_m E^m_A|ψ⟩_A|e_m⟩_E. The derivation is provided in Appendix <ref>. The replica order of the initial state ψ_in reveals that the state breaks the replica symmetry to `swap' in the region A_1 (reflecting the encoded qubit), and preserves the replica symmetry in A_2. As for the final state, ψ_err, the replica order is `id' in the region where the error does not act, and `swap' in the region where the error acts. To diagnose the error correcting properties of the Brownian circuit, we need to consider specific noise models. In this section, we focus on local depolarization channels acting on a few qubits, say a fraction p of them. The depolarization channel of strength λ acts on the density matrix as follows, 𝒩_i(ρ) = (1-λ) ρ + λ/3(∑_α = x,y,zσ_i,αρσ_i,α). In Fig. <ref>a we present the plot of the mutual purity of the 1+1d Brownian circuit as a function of time, where a single qubit in R is encoded in the system A of size N. The noise model is chosen to be a depolarization channel of strength λ = 0.05 acting on a fraction p = 0.25 of the qubits. It is clear from the plot that the mutual purity initially decays exponentially, until it saturates to the global Haar value, which is O(2^-N). The time taken for the saturation scales as t∝ N. In Appendix <ref> we derive the explicit result for the mutual purity with a globally Haar-random encoding, ℱ_Haar = O(2^-N). This numerical result demonstrates that the Brownian circuit approximates a 2-design in O(N) time, and we show in Fig. <ref>b that the 2-design transition occurs after time τ^*N, where τ^*∼ 0.77. The scaling collapse of the transition reveals ℱ/ℱ_Haar∼ f(t-τ^*N). Furthermore, we can study the mutual purity and the RHS of the quantum error correction bound Eq. <ref> for different values of p. In Fig. <ref>d we plot the saturation value of the RHS of Eq. <ref> (after the Brownian circuit has run for t = N steps) for different values of p and system sizes N, for a single-qubit encoding and a depolarization channel of strength λ = 0.05. We find that the RHS of the error correction bound undergoes a transition at p^*≈ 0.17, which can be identified as the threshold of this quantum error correction code. Note that the quantum error correction bound in Eq. <ref> guarantees that for p<p^* the Brownian circuit generates a quantum error correction code whose errors are correctable using the recovery channel outlined in Appendix <ref>. We don't expect this threshold to be tight, as the error correction bound with the mutual purity is expected to be looser than the bound from mutual information.
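Two standard facts about the depolarization channel written above are used repeatedly in what follows and can be checked in a few lines. The sketch below (not from the paper) verifies that the Kraus operators {√(1-λ) 1, √(λ/3) σ_α} are trace preserving and that the channel can be rewritten as 𝒩(ρ) = (1 - 4λ/3) ρ + (4λ/3) 1/2, which is why λ = 3/4 corresponds to complete depolarization (the value used later as the maximal-noise boundary condition).

```python
# Sanity checks for the single-qubit depolarization channel (illustrative).
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)

def depolarize(rho, lam):
    return (1 - lam) * rho + (lam / 3) * sum(P @ rho @ P for P in (X, Y, Z))

lam = 0.05
kraus = [np.sqrt(1 - lam) * I2] + [np.sqrt(lam / 3) * P for P in (X, Y, Z)]
completeness = sum(K.conj().T @ K for K in kraus)
print(np.allclose(completeness, I2))                 # True: trace preserving

rng = np.random.default_rng(1)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rho = A @ A.conj().T
rho /= np.trace(rho)                                  # random single-qubit state
lhs = depolarize(rho, lam)
rhs = (1 - 4 * lam / 3) * rho + (4 * lam / 3) * I2 / 2
print(np.allclose(lhs, rhs))                          # True: equivalent rewriting
print(np.allclose(depolarize(rho, 0.75), I2 / 2))     # True: lam = 3/4 fully depolarizes
```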
However, the numerical results strongly indicate that the quantum error correction transition with the circuit depth t (the time at which the circuit approximates a 2-design) and the threshold transition in p both correspond to first-order domain-wall pinning transitions. §.§ Coding transitions As discussed in the previous section, the mutual purity is given by the amplitude ℱ^Ψ^'_RE = ⟨⟨ψ_err|𝕌_enc|ψ_in⟩⟩. It is convenient to view the space-time layout of the Brownian circuit as a two-dimensional statistical model. In our setting, this is nothing but the mapping from a d-dimensional quantum system to a (d+1)-dimensional classical system. It is important to note that in the wave function |ψ_in⟩⟩, the encoded |A_1| qubits are mapped to a projection onto |swap⟩⟩, i.e., ⟨⟨id|(|swap⟩⟩ - 1/2|id⟩⟩) = 0, whereas the wave function of the remaining |A_2| qubits behaves as a free boundary condition, i.e., ⟨⟨id|0000⟩⟩ = ⟨⟨swap|0000⟩⟩ = 1/2. On the other hand, |ψ_err⟩⟩ can effectively change the boundary condition on the top layer. In particular, for the single-qubit depolarization channel at site i, the wave function becomes a superposition of two spins, |ψ_err,i⟩⟩ = (1 - 4λ/3)^2 |id⟩⟩ + (4λ/3)(1 - 2λ/3)|swap⟩⟩. Note that when λ = 3/4, the wave function is given by |swap⟩⟩ only. Therefore, the statistical mechanical picture is that in the symmetry-breaking phase of the Ising model, the boundary condition caused by the noise channel |ψ_err⟩⟩ will induce different domains in the bulk. Namely, these are domains denoted by either `id' or `swap' (equivalently, the two Ising values), as shown schematically in Fig. <ref>(e). The mutual purity is only nonzero when the encoded qubit is located in the `swap' domain. To understand the coding transition better, we perform a finite-size scaling analysis of the mutual purity as a function of depth and discuss different cases in the following. * Noisy region overlaps the encoding qubit. As shown in Fig. <ref>a, the reference qubit is encoded on the left-most edge, and the noise occurs in a contiguous region that is also on the left edge. In this case, there are many domain wall configurations that can contribute to the mutual purity. To simplify the discussion, we focus on two different domain walls: one ends on the bottom layer, and the other ends on the right edge. A schematic plot of these two domain walls is shown in Fig. <ref>e. It is clear that their contributions are ℱ ∼ e^-Δ t + e^-Δ'(1-p) L = e^-Δ'(1-p) L(1 + e^-Δ t + Δ'(1-p) L), where Δ and Δ' denote the tensions of the two kinds of domain walls, respectively, and L is the length of the chain. In the short-time region, the first kind of domain wall dominates, while in the long-time region, the second kind of domain wall dominates, and the contribution becomes time independent. There is an exchange of dominating domain configurations, as demonstrated in Fig. <ref>e. The transition time is roughly (Δ'(1-p)/Δ) L ∝ N. This explains the behavior in Fig. <ref>b and c. Replacing the contribution from the second kind of domain wall by ℱ_Haar, we obtain ℱ/ℱ_Haar = 1 + e^-(Δ t + logℱ_Haar), which is consistent with the data collapse performed in Fig. <ref>c. * Random noisy region. In this case, the noise occurs at random positions, as shown in Fig. <ref>. The picture of an exchange between the two kinds of domain wall configurations is still correct. The inset of Fig. <ref> shows a consistent data collapse. * Noisy region does not overlap the encoding qubit. The encoding qubit and the noisy region are shown in Fig. <ref>.
In the calculation, we set λ = 3/4. The boundary condition creates a domain wall at the boundary between the noisy qubits and the noiseless qubits. Due to causality, the back propagation of the domain wall is constrained to an emergent light cone. Thus, the mutual purity is zero (up to a correction exponentially small in N) when the encoding qubit is still outside the back-propagating light cone of the domain wall. Moreover, unlike in the previous case, where the first kind of domain wall that ends on the bottom can lead to a finite mutual purity, here only the second kind of domain wall that ends on the right boundary can have a significant contribution to the mutual purity. This is only possible when the back-propagating light cone hits the right boundary. Therefore, this indicates a dynamical transition at a timescale that is proportional to the system size. In Fig. <ref>, the crossing of the mutual purity curves for different sizes indicates such a transition. We also performed the data collapse in the inset of Fig. <ref>. In contrast to the previous two cases, the scaling is given by (t - τ^∗ N)/√(N). To understand this, note that the `id' domain back-propagates along the light cone. In the symmetry-breaking phase, the domain wall can fluctuate away from the light cone [Note that in the symmetry-breaking phase, there are still two phases for domain walls, the pinning and depinning phases, separated by a pinning transition. For our case, since the coupling at the top layer is the same as the coupling in the bulk, the depinning transition is the same as the symmetry-breaking transition. This means the domain wall is depinned and can fluctuate.]. Its average position is a distance of order √(N) away from the light cone <cit.>. Furthermore, the fluctuation of the domain wall is captured by a universal function of α = δ L/√(N) <cit.>, where δ L denotes the distance away from the light cone. More concretely, the magnetization profile is a function of α at distances of order √(N) (outside this range, the magnetization is given by one of the two spin polarizations). We expect that this function also captures the mutual purity because the mutual purity in this case probes the `swap' spin at the right boundary. Now, since the light cone reaches the right boundary at a time of order N, the mutual purity is a universal function of (t - N/v)/√(N), where v is the light-cone velocity. This explains the data collapse. In summary, depending on whether the noise acts on the encoding qubit, we discover distinct coding transitions. If the noisy region covers the encoding qubit, there are two kinds of domain wall configurations contributing to the mutual purity. They are schematically shown in Fig. <ref>e. The exchange of dominance between the two kinds of domain walls underlies the physics of the coding transition in this situation. On the other hand, if the noisy region does not cover the encoding qubit, the mutual purity is only nonzero when the noise back-propagates to the encoding qubit. In this case, the transition is induced by the fluctuating domain walls and is captured by a different scaling, N^-1/2, as shown in the data collapse. § CONCLUDING REMARKS In this paper, we have used the effective replica Hamiltonian mapping for local Brownian circuits to probe timescales of complexity growth in random quantum circuits, namely anti-concentration and approximate unitary-design generation.
The effective replica model serves two purposes: we can perform large-scale numerics to simulate several quantum informational quantities for long times, using tensor network tools. This makes local Brownian circuits as efficient numerical tools to study unitary quantum many-body dynamics. Secondly, it transforms the question of time-scales in the real-time dynamics into questions of energy-scales in a corresponding thermodynamic problem, which allows us to make analytical progress. We have shown that local Brownian circuits in 1+1d anticoncentrate in log N time, consistent with earlier results in local Haar random circuits <cit.>. Furthermore, we have analyzed the success condition of an approximate classical algorithm <cit.> to sample from the output distribution of Brownian circuits, and have identified that there is a sharp transition in the computational hardness of simulation at the same timescale. The anticoncentration transition arises from the transition in dominance of different low-energy states of the effective Hamiltonian in the collision probability. In particular, the collision probability (a probe of anticoncentration) gets contribution from eigenstates of the effective Hamiltonian with domain walls and the timescale where this becomes relevant can be related to the logarithm of the number of such domain wall states (which in 1d is ∼ N). In the presence of noise, we showed that there is a noise-induced phase transition in linear cross entropy benchmark (χ_XEB), as has been recently demonstrated for related noisy random circuit models in <cit.>. This can be seen as a consequence of explicit replica symmetry breaking in the effective Hamiltonian model in the presence of noise acting on a single replica of the system. By relating the χ_XEB to specific transition amplitudes in the corresponding replica model, we identify the noise-induced transition in the cross entropy benchmarking as the transition in the dominance of certain domain wall states in the presence of explicit bulk symmetry breaking field. The critical properties of the transition can be related to those of external field-induced first-order transitions in the classical Ising model in 2d <cit.>. Finally, we probed the generation of approximate unitary design by Brownian circuits. By directly probing the quantum error-correcting properties of the Brownian circuit, namely a 2-replica quantity called Mutual Purity <cit.>, we find that the 1+1d Brownian circuits become good quantum error correcting codes in O(N) time. This transition can be identified as first-order transitions between certain space-time domain wall configurations, which are related to first-order boundary-driven pinning transitions in classical Ising models. There can be several avenues of future research based on this work. Here we have demonstrated 1+1d Brownian circuits as a useful numerically accessible tool for studying the dynamics of quantum information. A natural question is whether the same numerical feasibility extends to higher dimensions. Here, we speculate the dynamics and transitions in informational quantities in higher dimensional Brownian circuits d>1 (here d is the spatial dimension of the Brownian circuit, N is total number of qubits, we also use the volume L^d ∼ N with L denotes the length scale): * Collision probability. It is still true that in higher dimensions, the collision probability at long enough time is dominated by two grounds states and elementary excitations. 
Distinct from the situation in 1d, now the lowest excitation is given by local spin flips with an energy that is independent of system sizes. Nevertheless, the entropy of such a local excitation is proportional to the system size N. Therefore, we expect that the Brownian circuit anti-concentrates on a log N timescale, similar to that in 1d <cit.>. * Computational transition in patching algorithm. The patching algorithm is closely related to the symmetry breaking of the underlying 2-replica spin model <cit.>. In higher dimensions, d>1, the discrete symmetry can be broken in a finite depth. This contrasts with the 1d case, where even the discrete symmetry can only be broken in a log depth. This means that the patching algorithm will fail when the depth of the Brownian circuit exceeds a critical depth that is independent of system sizes. * Noisy cross-entropy benchmarking. A noise ϵ behaves as an external field and will lift the degeneracy between | id>⟩ and | swap>⟩, i.e., | swap>⟩ will be suppressed by a factor of e^- N ϵ t. On the other hand, as discussed in the collision probability, the elementary excitation is given by local spin flips. With an external field, there is an additional cost, adding up to a factor ∼ e^- z Δ t - ϵ t, where z is the coordination number. Therefore, we expect that when the noise rate scales as 1/N = 1/L^d, there will be a first-order phase transition with critical exponent given by 1/ν = d+1. * Coding transition. The dynamical transition for the Brownian circuit to achieve an approximate unitary 2-design is given by the transition of dominance between two kinds of domain walls. It should be the same in higher dimensions. Therefore, the transition occurs on a timescale ∼ L = N^1/d <cit.>. Next, consider the different regions of noise and the encoding qubit. We expect the mutual purity transition is similarly given by transitions of domain walls when the noisy region overlaps the encoding qubit. On the other hand, when the noisy region does not overlap the encoding qubit, we expect the fluctuation of domain wall dictates the coding transition. Even in 1+1d, this work paves the way towards exploration of quantum information dynamics in symmetric Brownian circuits, by studying directly the spectrum of the effective Hamiltonian in the presence of other circuit symmetries. Another direction of interest is incorporating the effects of mid-circuit measurements in the entanglement dynamics in Brownian circuit <cit.>. We will present these results in a future work. In this work, we have focused on only 2-replica quantities, such as collision probabilities and mutual purities. In principle, any integer k replica quantities can be represented in the effective Hamiltonian picture, with q^k local Hilbert space dimension (q being the dimension of the original degrees of freedom), which makes numerical methods intractable at large sizes for large k. An outstanding question is to develop controlled numerical methods or analytical techniques to take the k→ 1 replica limit. § ACKNOWLEDGEMENTS We thank Timothy Hsieh, Tsung-Cheng Lu, Utkarsh Agrawal, and Xuan Zou for useful discussions, and Tsung-Cheng Lu for detailed comments. We have used the TeNPy package for the tensor network simulations <cit.>. The numerical simulations were performed using the Symmetry HPC system at Perimeter Institute (PI). 
Research at PI is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development and by the Province of Ontario through the Ministry of Colleges and Universities. S.-K.J is supported by a startup fund at Tulane University. iblabel[1][S#1] § QUANTUM ERROR CORRECTING CODES GENERATED BY BROWNIAN CIRCUITS §.§ Error correction bound with mutual purity In this section we will briefly recap the derivation of the error correction bound Eq. <ref> derived in <cit.>. We will assume the setup described in Fig. <ref>. Consider first the encoding of the maximally entangled state between the reference R and system A, |Ψ⟩_RA = 1/√(d_R)∑_i = 1^d_R|i⟩_R|ϕ_i⟩_A, before any application of error. After the error channel acts on A, we get the following noise-affected state on RAE, |Ψ^'⟩_RAE = 1/√(d_R)∑_i = 1^d_R|i⟩_RU_err(|ϕ_i⟩_A|e_0⟩_E) . The error recovery procedure ℛ <cit.> works by first measuring A using an ideal projective measurement that probes the effect of error in |Ψ^'⟩_RAE, followed by a unitary update of the state to restore it to |Ψ⟩_RA. We first introduce an an orthonormal basis of states in A, |ϕ_ij⟩_A. The projective measurement is given by the projection operators, Π_j = ∑_i=1^d_R|ϕ_ij⟩_A⟨ϕ_ij|_A. Depending on the measurement outcome, a corrective unitary U_j, A is applied on system A. In order to study the effectiveness of the recovery channel, we want to study the trace distance between the recovered state and the encoded state, ||ℛ(|Ψ^'⟩⟨Ψ^'|),|Ψ⟩⟨Ψ|||_1. In order to bound this, we introduce a fictitious state |̃Ψ̃⟩̃_RAE which aids in the analysis. Consider ρ̃_RE = ρ^'_R⊗ρ^'_E, where the reduced density matrices ρ^' are obtained from the state |Ψ^'⟩_RAE. We now take the fictitious state |̃Ψ̃⟩̃_RAE which is a purification of ρ̃_RE such that the trace distance between |̃Ψ̃⟩̃_RAE and |Ψ^'⟩_RAE is minimal. This uniquely defines, |̃Ψ̃⟩̃_RAE = 1/√(d_R)∑_i = 1^d_R∑_j = 1^d_E√(α_j)|i⟩_R|ϕ_ij⟩_A|e_j⟩_E. Imagine we apply the recovery channel ℛ on |̃Ψ̃⟩̃_RAE instead. After the measurement, say the outcome j is obtained. The measured fictitious state is now, |̃Ψ̃⟩̃_RAE^j = 1/√(d_R)∑_i = 1^d_R|i⟩_R|ϕ_ij⟩_A|e_j⟩_E. We can now choose U_j acting on A such that U_j,A|ϕ_ij⟩_A = |ϕ_i⟩_A, and we get, U_j,A|̃Ψ̃⟩̃_RAE^j = |Ψ⟩_RA|e_j⟩_E. From the above relation, we find that, ||ℛ(|Ψ^'⟩⟨Ψ^'|),|Ψ⟩⟨Ψ|||_1 = ||ℛ(|Ψ^'⟩⟨Ψ^'|),ℛ(|̃Ψ̃⟩̃⟨̃Ψ̃|̃)||_1≤|||Ψ^'⟩⟨Ψ^'|,|̃Ψ̃⟩̃⟨̃Ψ̃|̃||_1. where the last inequality follows from the monotonicity property of the trace distance. In <cit.>, the last expression is bounded by the Mutual Purity defined in Eq. <ref>[See Appendix. B in <cit.>]. We quote the result in Eq. <ref>. §.§ Replica computation of mutual purity We first represent the reduced density matrix of the noise-affected state defined in Eq. <ref>, ρ^'_RE = _A|Ψ^'⟩⟨Ψ^'| = 1/d_R∑_i,j = 1^d_R|i⟩⟨j|_R⊗_A{U_err(U_enc(|i⟩⟨j|_A_1⊗|0⟩⟨0|_A_2)U_enc^†⊗|e_0⟩⟨e_0|)U_err^†} The effect of the U_err on the system and the environment can be represented by Kraus operators acting on the system itself, U_err|ψ⟩_A|e_0⟩_E = ∑_m E^m_A|ψ⟩_A|e_m⟩_E E_A^m: ℋ_A→ℋ_A, ∑_mE^m†_AE^m_A = 1_A. The squared density matrix ρ^'⊗ 2_RE can be represented by a state vector in the replicated Hilbert space ℋ⊗ℋ^*⊗ℋ⊗ℋ^*, and the replicated unitaries 𝕌_enc = U_enc⊗ U^*_enc⊗ U_enc⊗ U^*_enc and 𝕌_err = U_err⊗ U^*_err⊗ U_err⊗ U^*_err as follows, |ρ^'⊗ 2_RE⟩⟩ = 1/d_R^2∑_i,j,i^',j^'=1^d_R|iji^'j^'⟩⟩_R∑_s,k = 1^d_A⟨⟨ sskk|_A𝕌_err⊗𝕌_enc|iji^'j^'⟩⟩_A_1|0^⊗ 4|A_2|⟩⟩_A_2|e_0^⊗ 4⟩⟩_E. 
While this representation looks cumbersome, it makes further computations straightforward. The mutual purity is given by ℱ^Ψ^'_RE = Tr[ρ_RE^' 2] - Tr[ρ_R^' 2⊗ρ_E^' 2]. Let us compute each term. It is convenient to express Eq. <ref> pictorially, with rank-4 tensors for each subsystem, representing ℋ⊗ℋ^*⊗ℋ⊗ℋ^*: [tensor-network diagram omitted]. By unitarity of U_err and U_enc we have: [tensor-network diagram omitted]. Using this result we find: [two tensor-network diagrams omitted]. In general, d_R = 2^|A_1|, where the local Hilbert space dimension is 2. We introduce notation for the following states in the replicated Hilbert space on A, |ψ_in⟩⟩ = (|swap⟩⟩ - 1/2|id⟩⟩)^⊗ A_1⊗|0000⟩⟩^⊗ A_2 and |ψ_err⟩⟩ = 2^N∑_m,n=0^d_E-1|E_m E_n^* E_n E_m^*⟩⟩^⊗ A. Note that ψ_err is an unnormalized state, and the expression for ψ_err includes the case of no error acting on a subsystem A_3⊂ A by choosing E_m = 1_A_3⊗ E_m,A_4. Combining all the expressions, the mutual purity is given by ℱ^Ψ^'_RE = ⟨⟨ψ_err|𝕌_enc|ψ_in⟩⟩. The noise model for probing the extent of error correction enters the computation of the mutual purity only through the definition of ψ_err. Consider the local depolarization channel of strength λ on a subset A_4⊂ A, such that the number of qubits undergoing noise is |A_4| = p|A|. The Kraus operators for the depolarization channel are E_0 = √(1-λ) 1, E_x,y,z = √(λ/3) σ_x,y,z. The local depolarization channel acting on each qubit can be purified using an environment degree of freedom with the 4 levels 0,x,y,z; the corresponding environment dimension is d_E = 4^p|A|. §.§ Maximal complexity encoding by Haar random circuits We can compute the mutual purity for any noise model for an encoding unitary U_enc which is a global Haar-random unitary. Any unitary 2-design will exhibit this value of the mutual purity. By examining the time it takes for the Brownian circuit to achieve this value of the mutual purity, we can diagnose the time required for the Brownian circuit to realise a 2-design. For a global Haar-random unitary, we have 𝔼[U_Haar⊗ U^*_Haar⊗ U_Haar⊗ U^*_Haar] = 1/(d^2-1)(|id⟩⟩⟨⟨id| + |swap⟩⟩⟨⟨swap| - 1/d(|id⟩⟩⟨⟨swap| + |swap⟩⟩⟨⟨id|)). Using this identity for U_enc in Eq. <ref>, we get the following terms in the expression for the mutual purity, 𝔼[Tr ρ_RE^' 2] = (1/√(d_R))^4 1/(d_A^2-1)( f_id(λ) d_R + f_swap(λ) d_R^2 - 1/d_A( f_id(λ) d_R^2 + f_swap(λ) d_R )), 𝔼[Tr ρ_E^' 2] = (1/√(d_R))^4 1/(d_A^2-1)( f_id(λ) d_R^2 + f_swap(λ) d_R - 1/d_A( f_id(λ) d_R + f_swap(λ) d_R^2 )), where we have introduced the notation f_id(λ) = ⟨⟨ψ_err|id⟩⟩, f_swap(λ) = ⟨⟨ψ_err|swap⟩⟩. Combining the results, we get ℱ^Haar_RE = 1/(d_A^2-1)(1 - 1/d_R^2)( f_swap - 1/d_A f_id). For the depolarization channel of strength λ acting on a fraction p of the qubits, we get the following expression, ℱ^Haar_RE = d_A/(d_A^2-1)(1 - 1/d_R^2)(1 - g(λ)^p|A|), with g(λ) = (1-λ)^2 + λ^2/3. If one qubit is encoded in N qubits, i.e., d_R = 2 and d_A = 2^N, we have for N≫ 1, ℱ^Haar_RE≈ 3 · 2^-(N+2)(1 - g(λ)^pN).
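As a quick check of the closed form as reconstructed above, the sketch below evaluates ℱ^Haar_RE for a single encoded qubit and compares it with the large-N simplification 3·2^-(N+2)(1 - g(λ)^pN); the noise strength and fractions are placeholder values chosen only for illustration.

```python
# Evaluate the globally-Haar mutual purity closed form and its asymptotic form
# (illustrative; parameter values are placeholders).
import numpy as np

def g(lam):
    return (1 - lam) ** 2 + lam ** 2 / 3

def F_haar(N, p, lam, d_R=2):
    d_A = 2.0 ** N
    return d_A / (d_A ** 2 - 1) * (1 - 1 / d_R ** 2) * (1 - g(lam) ** (p * N))

lam, p = 0.05, 0.25
for N in (8, 12, 16):
    exact = F_haar(N, p, lam)
    approx = 3 * 2.0 ** (-(N + 2)) * (1 - g(lam) ** (p * N))
    print(f"N = {N:2d}:  F_Haar = {exact:.3e},  3*2^-(N+2)*(1-g^pN) = {approx:.3e}")
# Since (1 - g^{pN}) -> 1 for large pN, the Haar floor approaches (3/4) * 2^{-N},
# which is the saturation value approached by the Brownian circuit at t ~ N.
```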
http://arxiv.org/abs/2307.05906v1
20230712042326
Mini-Batch Optimization of Contrastive Loss
[ "Jaewoong Cho", "Kartik Sreenivasan", "Keon Lee", "Kyunghoo Mun", "Soheun Yi", "Jeong-Gwan Lee", "Anna Lee", "Jy-yong Sohn", "Dimitris Papailiopoulos", "Kangwook Lee" ]
cs.LG
[ "cs.LG" ]
Contrastive learning has gained significant attention as a method for self-supervised learning. The contrastive loss function ensures that embeddings of positive sample pairs (e.g., different samples from the same class or different views of the same object) are similar, while embeddings of negative pairs are dissimilar. Practical constraints such as large memory requirements make it challenging to consider all possible positive and negative pairs, leading to the use of mini-batch optimization. In this paper, we investigate the theoretical aspects of mini-batch optimization in contrastive learning. We show that mini-batch optimization is equivalent to full-batch optimization if and only if all \binom{N}{B} mini-batches are selected, while sub-optimality may arise when examining only a subset. We then demonstrate that utilizing high-loss mini-batches can speed up SGD convergence and propose a spectral clustering-based approach for identifying these high-loss mini-batches. Our experimental results validate our theoretical findings and demonstrate that our proposed algorithm outperforms vanilla SGD in practically relevant settings, providing a better understanding of mini-batch optimization in contrastive learning. § INTRODUCTION Contrastive learning has been widely employed in various domains as a prominent method for self-supervised learning <cit.>. The contrastive loss function is designed to ensure that the embeddings of two samples are similar if they are considered a “positive” pair, in cases such as coming from the same class <cit.>, being an augmented version of one another <cit.>, or being two different modalities of the same data <cit.>. Conversely, if two samples do not form a positive pair, they are considered a “negative” pair, and the contrastive loss encourages their embeddings to be dissimilar. In practice, it is not feasible to consider all possible positive and negative pairs when implementing a contrastive learning algorithm due to the quadratic memory requirement 𝒪(N^2) when working with N samples. To mitigate this issue of full-batch training, practitioners typically choose a set of N/B mini-batches, each of size B = 𝒪(1), and consider the loss computed for positive and negative pairs within each of the N/B batches <cit.>. For instance, <cit.> train a model on a dataset where N = 1.28 × 10^7 and B = 4096. This approach results in a memory requirement of 𝒪(B^2) = 𝒪(1) for each mini-batch, and a total computational complexity linear in the number of chosen mini-batches. Despite the widespread practical use of mini-batch optimization in contrastive learning, there remains a lack of theoretical understanding as to whether this approach truly reflects the original goal of minimizing the full-batch contrastive loss. This paper examines the theoretical aspects of optimizing over mini-batches in contrastive learning. Main Contributions. The primary contributions of this paper are twofold. First, we show that under certain parameter settings, mini-batch optimization is equivalent to full-batch optimization if and only if all \binom{N}{B} mini-batches are selected. These results are based on an interesting connection between contrastive learning and the neural collapse phenomenon <cit.>.
From a computational complexity perspective, the identified equivalence condition may be seen as somewhat prohibitive, as it implies that all NB = 𝒪(N^B) mini-batches must be considered. Our second contribution is to show that Ordered SGD (OSGD) <cit.> can be effective in finding mini-batches that contain the most informative pairs and thereby speeding up convergence. OSGD, proposed in a work by <cit.>, is a variant of SGD that modifies the model parameter updates. Instead of using the gradient of the average loss of all samples in a mini-batch, it uses the gradient of the average loss over the top-q samples in terms of individual loss values. We show that the convergence result from <cit.> can be applied directly to contrastive learning. We also show that OSGD can improve the convergence rate of SGD by a constant factor in certain scenarios. Furthermore, in a novel approach to address the challenge of applying OSGD to the NB mini-batch optimization (which involves examining 𝒪(N^B) batches to select high-loss ones), we reinterpret the batch selection as a min-cut problem in graph theory <cit.>. This novel interpretation allows us to select high-loss batches efficiently via a spectral clustering algorithm <cit.>. The following informal theorems summarize our main findings. Under certain parameter settings, the mini-batch optimization of contrastive loss is equivalent to full-batch optimization of contrastive loss if and only if all NB mini-batches are selected. Although NB mini-batch contrastive loss and full-batch loss are neither identical nor differ by a constant factor, the optimal solutions for both mini-batch and full-batch are identical (see Sec. <ref>). In a demonstrative toy example, OSGD operating on the principle of selecting high-loss batches, can potentially converge to the optimal solution of mini-batch contrastive loss optimization faster by a constant factor compared to SGD (see Sec. <ref>). We validate our theoretical findings and the efficacy of the proposed spectral clustering-based batch selection method by conducting experiments on both synthetic and real data. On synthetic data, we show that our proposed batch-selection algorithms do indeed converge to the optimal solution of full-batch optimization significantly faster than the baselines. We also apply our proposed method to ResNet pre-training with CIFAR-100 <cit.> and Tiny ImageNet <cit.>. We evaluate the performance on downstream retrieval tasks, demonstrating that our batch selection method outperforms vanilla SGD in practically relevant settings. § RELATED WORK Contrastive losses. Contrastive learning has been used for several decades to learn a similarity metric to be used later for applications such as object detection and recognition <cit.>. <cit.> proposed one of the early versions of contrastive loss which has been updated and improved over the years <cit.>. More recently, contrastive learning has been shown to rival and even surpass traditional supervised learning methods, particularly on image classification tasks <cit.>. Further, its multi-modal adaptation leverages vast unstructured data, extending its effectiveness beyond image and text modalities <cit.>. Unfortunately, these methods require extremely large batch sizes in order to perform effectively. Follow-up works showed that using momentum or carefully modifying the augmentation schemes can alleviate this issue to some extent <cit.>. Effect of batch size. 
While most successful applications of contrastive learning use large batch sizes (32,768 for CLIP and 8,192 for SimCLR), recent efforts have focused on reducing batch sizes and improving convergence rates <cit.>. <cit.> carefully study the effect of the batch size on the convergence rate when a model is trained to minimize the SimCLR loss, and prove that the gradient at the solution is bounded by 𝒪(1/√(B)). They also propose SogCLR, an algorithm with a modified gradient update where the correction term allows for an improved convergence rate with better dependence on B. It has been shown that the performance for small batch sizes can be improved with a technique called hard negative mining <cit.>. Neural collapse. Neural collapse is a phenomenon observed in <cit.> where the final classification layer of deep neural nets collapses to the simplex Equiangular Tight Frame (ETF) when trained well past the point of zero training error <cit.>. <cit.> prove that this occurs when minimizing cross-entropy loss over the unit ball. We extend their proof techniques and show that the optimal solution for minimizing contrastive loss under certain conditions is also the simplex ETF. Optimal permutations for SGD. The performance of SGD without replacement under different permutations of samples has been well studied in the literature <cit.>. One can view batch selection in contrastive learning as a method to choose a specific permutation among the possible \binom{N}{B} mini-batches of size B. However, it is important to note that these bounds do not indicate an improved convergence rate for general non-convex functions and thus would not apply to the contrastive loss, particularly in the setting where the embeddings come from a shared embedding network. We show that in the case of OSGD <cit.>, we can indeed prove that the contrastive loss satisfies the necessary conditions in order to guarantee convergence. § PROBLEM SETTING Suppose we are given a dataset {(x_i, y_i)}_i=1^N of N positive pairs (data sample pairs that are conceptually similar or related), where x_i and y_i are two different views of the same object. Note that this setup includes both the multi-modal setting (CLIP <cit.>) and the uni-modal setting (SimCLR <cit.>) as follows. For the multi-modal case, one can view (x_i, y_i) as two different modalities of the same data, e.g., x_i is the image of a scene while y_i is the text description of the scene. For the uni-modal case, one can consider x_i and y_i as different augmented images from the same image. We consider the contrastive learning problem where the goal is to find embedding vectors for {x_i}_i=1^N and {y_i}_i=1^N, such that the embedding vectors of positive pairs (x_i, y_i) are similar, while ensuring that the embedding vectors of other (negative) pairs are well separated. Let u_i ∈ℝ^d be the embedding vector of x_i, and v_i ∈ℝ^d be the embedding vector of y_i. In practical settings, one typically considers parameterized encoders so that u_i = f_θ(x_i) and v_i = g_ϕ(y_i). We define embedding matrices U := [u_1, u_2, …, u_N] and V := [v_1, v_2, …, v_N], which are the collections of embedding vectors. Now, we focus on the simpler setting of directly optimizing the embedding vectors instead of the model parameters θ and ϕ in order to gain theoretical insights into the learned embeddings. This approach enables us to develop a deeper understanding of the underlying principles and mechanisms. Consider the problem of directly optimizing the embedding vectors for N pairs, which is given by min_U, V ℒ^con(U, V) s.t.
‖u_i‖ = 1, ‖v_i‖ = 1 ∀ i ∈ [N], where ‖·‖ denotes the ℓ_2 norm, the set [N] denotes the set of integers from 1 to N, and the contrastive loss (the standard InfoNCE loss <cit.>) is defined as ℒ^con(U, V) := -1/N∑_i=1^N log(e^u_i^⊺ v_i/∑_j=1^N e^u_i^⊺ v_j) - 1/N∑_i=1^N log(e^u_i^⊺ v_i/∑_j=1^N e^u_j^⊺ v_i). Note that ℒ^con(U, V) is the full-batch version of the loss, which contrasts all embeddings with each other. However, due to the large computational complexity and memory requirements during optimization, practitioners often consider the following mini-batch version instead. Note that there exist \binom{N}{B} different mini-batches, each of which has B samples. For k ∈ [\binom{N}{B}], let ℬ_k be the k-th mini-batch satisfying ℬ_k ⊂ [N] and |ℬ_k| = B. Let U_ℬ_k := {u_i}_i∈ℬ_k and V_ℬ_k := {v_i}_i∈ℬ_k. Then, the contrastive loss for the k-th mini-batch is ℒ^con(U_ℬ_k, V_ℬ_k). § RELATIONSHIP BETWEEN THE OPTIMIZATION FOR FULL-BATCH AND MINI-BATCH Recall that we focus on finding the optimal embedding matrices (U, V) that minimize the contrastive loss. In this section, we investigate the relationship between the problem of optimizing the full-batch loss ℒ^con(U, V) and the problem of optimizing the mini-batch loss ℒ^con(U_ℬ_k, V_ℬ_k). Towards this goal, we prove three main results, the proofs of which are in Appendix <ref>. * We derive the optimal solution that minimizes the full-batch loss (Lem. <ref>, Thm. <ref>). * We show that the solution that minimizes the average of the \binom{N}{B} mini-batch losses is identical to the one that minimizes the full-batch loss (Prop. <ref>, Thm. <ref>). * We show that minimizing the mini-batch loss summed over only a strict subset of the \binom{N}{B} mini-batches can lead to a sub-optimal solution that does not minimize the full-batch loss (Thm. <ref>). §.§ Full-batch Contrastive Loss Optimization In this section, we characterize the optimal solution for the full-batch loss minimization in Eq. (<ref>). We start by providing the definition of the simplex equiangular tight frame (ETF), which turns out to be the optimal solution in certain cases. The original definition of an ETF <cit.> is for N vectors in a d-dimensional space where N ≥ d+1 [See Def. <ref> in Appendix <ref> for the full definition]. <cit.> defines the ETF for the case where N ≤ d+1 to characterize the phenomenon of neural collapse. In our work, we use the latter definition of simplex ETFs, which is stated below.
Note that the symmetric property holds for N≤ d+1 case, and the antipodality is frequently assumed in geometric problems such as the sphere covering problem in <cit.>. Thm. <ref> shows that when N=2d, the optimal solution for the full-batch loss minimization, under a symmetric and antipodal configuration, form a cross-polytope which is defined as the following. We call a set of N vectors {}_i=1^N form a simplex cross-polytope if, for all i, the following three conditions hold: _i = 1; there exists a unique j such that _i^⊺_j = -1; and _i^⊺_k = 0 for all k ∉{i, j}. [Optimal solution when N=2d]theoremthmPoly Let (^⋆, ^⋆) :=min_(,)∈(,) s.t. _i=1, _i=1 ∀ i∈[N], where :={(,): , are symmetric and antipodal }. Then, the columns of ^⋆ form a simplex cross-polytope for N=2d. Proof Outline. By the antipodality assumption, we can apply Jensen's inequality to N-2 indices without itself _i and antipodal point -_i for a given i ∈ [N]. Then we show that the simplex cross-polytope also minimizes this lower bound while satisfying the conditions that make the applications of Jensen's inequality tight. For the general case of N>d+1, excluding N=2d, we still leave it as an open problem. §.§ Mini-batch Contrastive Loss Optimization Here we consider the mini-batch contrastive loss optimization problem, where we first choose multiple mini-batches of size B and then find , that minimize the sum of contrastive losses computed for the chosen mini-batches. Note that this is the loss that is typically considered in the contrastive learning since computing the full-batch loss is intractable in practice. Let us consider a subset of all possible NB mini-batches and denote their indices by _B⊆[NB]. For a fixed _B, the mini-batch loss optimization problem is formulated as: min_, _mini(,;_B) s.t. ‖_i ‖ = 1, ‖_i ‖ = 1 ∀ i ∈ [N], where the loss of given mini-batches is _mini(,;_B) :=1/|_B|∑_i∈_B (__i, __i). To analyze the relationship between the full-batch loss minimization in Eq. (<ref>) and the mini-batch loss minimization in Eq. (<ref>), we first compare the objective functions of two problems as below. propositionpropOne The mini-batch loss and full-batch loss are not identical, nor is one a simple scaling of the other by a constant factor. In other words, when _B=[NB], for all B ≥ 2, there exists no constant c such that _mini(,; _B)= c·(,) for all ,. We illustrate this proposition by visualizing the two loss functions in Fig. <ref> when N=10, B=2, and d=2. We visualize it along a single embedding vector _1 by freezing all other embeddings (_1 and {_i, _i}_i=2^10) at the optimal solution and varying _1 = [u_1,1, u_1,2] as [cos(θ), sin(θ)] for θ∈ [-π, π]. One can confirm that two losses are not identical (even up to scaling). Interestingly, the following result shows that the optimal solutions of both problems are identical. [Optimization with all possible NB mini-batches]theoremthmTwo Suppose B≥ 2. The set of minimizers of the NB mini-batch problem in Eq. (<ref>) is the same as that of the full-batch problem in Eq. (<ref>) for two cases: (i) N≤ d+1, and (ii) N = 2d and the pairs (, ) are restricted to those satisfying the conditions stated in Def. <ref>. In such cases, the solutions (,) for the N B mini-batch optimization problem satisfies the following: Case (i) {_i}_i=1^N forms a simplex ETF and _i=_i for all i∈ [N]; Case (ii): {_i}_i=1^N forms a simplex cross-polytope. Proof Outline. Similar to the proof of Lem. <ref>, we bound the objective function from below using Jensen's inequality. 
Then, we show that this lower bound is equivalent to a scaling of the bound from the proof of Lem. <ref>, by using careful counting arguments. Then, we can simply repeat the rest of the proof to show that the simplex ETF also minimizes this lower bound while satisfying the conditions that make the applications of Jensen's inequality tight. Now, we present mathematical results specifying the cases when the solutions of mini-batch optimization and full-batch optimization differ. First, we show that when B=2, minimizing the mini-batch loss over any strict subset of NB batches, is not equivalent to minimizing the full-batch loss. [Optimization with fewer than NB mini-batches]theoremthmThree Suppose B=2 and N≤ d+1. Then, the minimizer of Eq. (<ref>) for _B ⊊[NB] is not the minimizer of the full-batch optimization in Eq. (<ref>). Proof Outline. We show that there exist embedding vectors that are not the simplex ETF, and have a strictly lower objective value. This implies that the optimal solution of any set of mini-batches that does not contain all N2 mini-batches is not the same as that of the full-batch problem. The result of Thm. <ref> is extended to the general case of B ≥ 2, under some mild assumption; please check Prop. <ref> and <ref> in Appendix <ref>. Fig. <ref> summarizes the main findings in this section. § ORDERED STOCHASTIC GRADIENT DESCENT FOR MINI-BATCH CONTRASTIVE LEARNING Recall that the optimal embeddings for the full-batch optimization problem in Eq. (<ref>) can be obtained by minimizing the sum of NB mini-batch losses, according to Thm. <ref>. An easy way of approximating the optimal embeddings is using gradient descent (GD) on the sum of losses for NB mini-batches, or to use a stochastic approach which applies GD on the loss for a randomly chosen mini-batch. Recent works found that applying GD on selective batches outperforms SGD in some cases <cit.>. A natural question arises: does this hold for mini-batch contrastive learning? Specifically, (i) Is SGD enough to guarantee good convergence on mini-batch contrastive learning?, and (ii) Can we come up with a batch selection method that outperforms vanilla SGD? To answer this question: * We show that Ordered SGD (OSGD) <cit.> can potentially accelerate convergence compared to vanilla SGD in a demonstrative toy example (Sec. <ref>). We also show that the convergence results from <cit.> can be extended to mini-batch contrastive loss optimization (Sec. <ref>). * We reformulate the batch selection problem into a min-cut problem in graph theory <cit.>, by considering a graph with N nodes where each node is each positive pair and each edge represents a proxy to the contrastive loss between two nodes. This allows us to devise an efficient batch selection algorithm by leveraging spectral clustering <cit.> (Sec. <ref>). §.§ Convergence Comparison in a Toy Example: OSGD vs. SGD This section investigates the convergence of two gradient-descent-based methods, OSGD and SGD. The below lemma shows that the contrastive loss is geodesic non-quasi-convex, which implies the hardness of proving the convergence of gradient-based methods for contrastive learning in Eq. (<ref>). lemmalclipnonquasiconvex Contrastive loss (,) is a geodesic non-quasi-convex function of , on = { (, ): ‖_i‖ = ‖_i‖=1, ∀ i∈[N]}. We provide the proof in Appendix <ref>. In order to compare the convergence of OSGD and SGD, we focus on a toy example where convergence to the optimal solution is achievable with appropriate initialization. 
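Before analyzing these questions formally in the next subsections, the toy sketch below illustrates the OSGD-style selection rule referenced in the first bullet above: among k randomly drawn mini-batches, only the q batches with the largest contrastive loss contribute to the update. This is an assumed, simplified rendering (directly optimizing embeddings, with arbitrary toy values of N, B, k, q, and learning rate), not the authors' implementation; their precise procedure is given in Algo. <ref>.

```python
# Toy OSGD-style batch selection for mini-batch contrastive loss (illustrative).
import torch
import torch.nn.functional as F

torch.manual_seed(0)
N, B, d, k, q, lr = 64, 8, 16, 8, 2, 0.5           # arbitrary toy values

U = torch.nn.Parameter(F.normalize(torch.randn(N, d), dim=1))
V = torch.nn.Parameter(F.normalize(torch.randn(N, d), dim=1))
opt = torch.optim.SGD([U, V], lr=lr)

def batch_loss(idx):
    # Normalize inside the loss so the effective embeddings stay on the sphere.
    u, v = F.normalize(U[idx], dim=1), F.normalize(V[idx], dim=1)
    logits = u @ v.T
    labels = torch.arange(len(idx))
    return F.cross_entropy(logits, labels) + F.cross_entropy(logits.T, labels)

for step in range(200):
    candidates = [torch.randperm(N)[:B] for _ in range(k)]   # k random mini-batches
    losses = torch.stack([batch_loss(idx) for idx in candidates])
    top = torch.topk(losses, q).indices                      # q highest-loss batches
    opt.zero_grad()
    losses[top].mean().backward()                            # update only with those
    opt.step()
    if step % 50 == 0:
        print(step, round(losses.max().item(), 4))
```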
Consider a scenario where we have N=4 embedding vectors {_i}_i=1^N with _i ∈^2. Each embedding vector is defined as _1 =(cosθ_1, sinθ_1); _2 =(cosθ_2, -sinθ_2); _3 =(-cosθ_3, -sinθ_3); _4 =(-cosθ_4, sinθ_4) for parameters {θ_i }_i=1^n. Over time step t, we consider updating the parameters ^(t):=[θ_1^(t),θ_2^(t),θ_3^(t),θ_4^(t)] using gradient descent based methods. For all i, the initial parameters are set as θ_i^(0)=ϵ > 0, and the other embedding vectors are initialized as _i^(0) = _i^(0). This setting is illustrated in Fig. <ref>. At each time step t, each learning algorithm begins by selecting a mini-batch ^(t)⊂{1,2,3,4} with batch size |^(t)|=2. SGD randomly selects a mini-batch, while OSGD selects a mini-batch as follows: ^(t) = max_∈ (_(^(t)), _ (^(t))). Then, the algorithms update ^(t) using gradient descent on (_,_) with a learning rate η: ^(t+1) =^(t) - η∇_ (_^(t),_^(t)). For a sufficiently small margin ρ > 0, let T_OSGD, T_SGD be the minimal time required for the algorithms to reach the condition 𝔼[^(T)] ∈ (π/4-ρ, π/4)^N. Under this setting, the following theorem compares OSGD and SGD, in terms of the lower bound on the time required for the convergence to the optimal solution. theoremthmToy Consider the described setting where the parameters ^(t) of embedding vectors are updated, as shown in Fig. <ref>. Suppose there exist ϵ̃, T such that for all t satisfying ^(t)={1,3} or {2,4}, ∇_^(t) (_^(t), _^(t))≤ϵ̃, and T_OSGD, T_SGD< T. Then, we have the following inequalities: T_OSGD≥π/4 - ρ - ϵ +O(η^2 ϵ + ηϵ^3) ηϵ, T_SGD≥3(e^2+1) e^2-1π/4-ρ-ϵ+O(η^2 ϵ+η^2 ϵ̃) ηϵ+O(ηϵ^3+ηϵ̃). Suppose lower bounds of T_OSGD, T_SGD in Thm. <ref> are tight, and the learning rate η is small enough. Then, T_OSGD/T_SGD=(e^2-1)/3(e^2+1)≈ 1/4. In Fig. <ref>, we present training loss curves of the full-batch contrastive loss in Eq. (<ref>) for various algorithms implemented on the toy example. One can observe that the losses of all algorithms eventually converge to 1.253, the optimal loss achievable when the solution satisfies _i=_i and {_i}_i=1^N form simplex cross-polytope. As shown in the figure, OSGD converges faster than SGD to the optimal loss. This empirical evidence corroborates our theoretical findings in Corollary <ref>. §.§ Convergence of OSGD in Mini-batch Contrastive Learning Setting Recall that it is challenging to prove the convergence of gradient-descent-based methods for contrastive learning problem in Eq. (<ref>) due to the non-quasi-convexity of the contrastive loss ℒ^con. Instead of focusing on the contrastive loss, we consider a proxy, the weighted contrastive loss defined as (, ) 1/q∑_j=1^N Bγ_j (__(j), __(j)) with γ_j = ∑_l=0^q-1j - 1 lN B-j k - l - 1/N B k for two arbitrary natural numbers k, q ≤NB where _(j) is a mini-batch with j-th largest loss among batches of size B. Indeed, this is a natural objective obtained by applying OSGD to our problem, and we show the convergence of such an algorithm by extending the results in <cit.>. OSGD updates the embedding vectors using the gradient averaged over q batches that have the largest losses among randomly chosen k batches (see Algo. <ref> in Appendix <ref>). Let ^(t), ^(t) be the updated embedding matrices when applying OSGD for t steps starting from ^(0), ^(0), using the learning rate η_t. Then the following theorem, proven in Appendix <ref>, holds. [Convergence results]theoremosgdconvergence Consider sampling t^⋆ from [T-1] with probability proportional to {η_t}_t=0^T-1, that is, (t^⋆ = t) = η_t/(∑_i=0^T-1η_i). 
Then ∀ρ > ρ_0 = 2√(2/B) + 4e^2 / B, we have [∇(^(t^⋆), ^(t^⋆))^2 ] ≤(ρ + ρ_0)^2/ρ(ρ-ρ_0)( (^(0), ^(0)) - ) + 8ρ∑_t=0^T-1η_t^2/∑_t=0^T-1η_t, where denotes the minimized value of . Given sufficiently small learning rate η_t ∼ O(t^-1/2), 𝔼∇^2 decays at the rate of O(T^-1/2). Therefore, this theorem guarantees the convergence of OSGD for mini-batch contrastive learning. §.§ Suggestion: Spectral Clustering-based Approach R0.47 0.47 Applying OSGD to mini-batch contrastive learning has a potential benefit as shown in Sec. <ref>, but it also has some challenges. Choosing the best q batches with high loss in OSGD is only doable after we evaluate losses of all NB combinations, which is computationally infeasible for large N. A naive solution to tackle this challenge is to first randomly choose k batches and then select q high-loss batches among k batches. However, this naive random batch selection method does not guarantee that the chosen q batches are having the highest loss among all NB candidates. Motivated by these issues of OSGD, we suggest an alternative batch selection method inspired by graph theory. Note that the contrastive loss ℒ^con(U_, V_) for a given batch is lower bounded as follows: 1/B(B-1){∑_i∈∑_j ∈∖{i}log(1+(B-1)e^_i^⊺(_j-_i))+log(1+(B-1)e^_i^⊺(_j-_i))}. This lower bound is derived using Jensen's inequality. Detailed derivation is provided in Appendix <ref>. A nice property of this lower bound is that it can be expressed as a summation of terms over a pair (i,j) of samples within batch . Consider a graph with N nodes, where the weight between node k and l is defined as w(k,l):= ∑_(i,j)∈{(k,l), (l,k)}log(1+(B-1)e^_i^⊺(_j-_i))+log(1+(B-1)e^_i^⊺(_j-_i)). Recall that our goal is to choose q batches having the highest contrastive loss among N B batches. We relax this problem by reducing our search space such that the q=N/B chosen batches _1, ⋯, _q form a partition of N samples, _i ∩_j = ∅ and ∪_i ∈ [q]_i = [N]. In such scenario, our target problem is equivalent to the problem of clustering N nodes in graph into q clusters with equal size, where the objective is to minimize the sum of weights of inter-cluster edges. This problem is nothing but the min-cut problem <cit.>, and we can employ even-sized spectral clustering algorithm which solves it efficiently. The pseudo-code of our batch selection method[Our algorithm finds N/B good clusters at once, instead of only finding a single best cluster. Compared with such alternative approach, our method is (i) more efficient when we update models for multiple iterations, and (ii) guaranteed to load all samples with N/B batches, thus expected to have better convergence <cit.>.] is provided in Algo. <ref>, and further details of the algorithm are provided in Appendix <ref>. Fig. <ref> shows the histogram of contrastive loss for N/B batches chosen by the random batch selection method and the proposed spectral clustering (SC) method. One can observe that the SC method favors batches with larger loss values. § EXPERIMENTS We validate our theoretical findings and the effectiveness of our proposed batch selection method by providing experimental results on synthetic and real datasets. We first show that our experimental results on synthetic dataset coincide with two main theoretical results: (i) the relationship between the full-batch contrastive loss and the mini-batch contrastive loss given in Sec. <ref>, (ii) the analysis on the convergence of OSGD and the proposed SC method given in Sec. <ref>. 
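For concreteness, the following sketch shows one way the spectral-clustering batch selection described above (Algo. <ref>) could be implemented. The weight w(k,l) is our reading of the pairwise terms in the lower bound, and scikit-learn's standard spectral clustering is used as a stand-in for the even-sized variant, so cluster sizes are only approximately B; this is illustrative rather than the authors' code.

```python
# Spectral-clustering batch selection sketch (illustrative; sizes not enforced).
import numpy as np
from sklearn.cluster import SpectralClustering

def pair_weight(U, V, B):
    """Affinity w[k, l] from the per-pair lower-bound terms, summed over (k,l) and (l,k)."""
    UV = U @ V.T                        # UV[i, j] = <u_i, v_j>;  rows of U, V are embeddings
    VU = V @ U.T
    term_u = np.log1p((B - 1) * np.exp(UV - np.diag(UV)[:, None]))   # u_i^T (v_j - v_i)
    term_v = np.log1p((B - 1) * np.exp(VU - np.diag(VU)[:, None]))   # v_i^T (u_j - u_i)
    W = term_u + term_v
    W = W + W.T
    np.fill_diagonal(W, 0.0)
    return W

def spectral_batches(U, V, B, seed=0):
    N = U.shape[0]
    W = pair_weight(U, V, B)
    labels = SpectralClustering(n_clusters=N // B, affinity="precomputed",
                                random_state=seed).fit_predict(W)
    return [np.where(labels == c)[0] for c in range(N // B)]

rng = np.random.default_rng(0)
N, B, d = 64, 8, 16
U = rng.normal(size=(N, d)); U /= np.linalg.norm(U, axis=1, keepdims=True)
V = rng.normal(size=(N, d)); V /= np.linalg.norm(V, axis=1, keepdims=True)
for batch in spectral_batches(U, V, B)[:3]:
    print(len(batch), batch[:8])        # clusters with high intra-batch weight = high-loss batches
```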
To demonstrate the practicality of our batch selection method, we provide experimental results on CIFAR-100 <cit.> and Tiny ImageNet <cit.>. Details of the experimental setting can be found in Appendix <ref>, and our code is available at https://github.com/krafton-ai/mini-batch-cl.githttps://github.com/krafton-ai/mini-batch-cl. §.§ Synthetic Dataset Consider the problem of optimizing the embedding matrices , using GD, where each column of , is initialized as a multivariate normal vector and then normalized as ‖_i ‖ = ‖_i ‖ = 1, ∀ i. We use learning rate η=0.5, and apply the normalization step at every iteration. First, we compare the minimizers of three optimization problems: (i) full-batch optimization in Eq.(<ref>); (ii) mini-batch optimization in Eq. (<ref>) with _B=[NB]; (iii) mini-batch optimization with _B⊊[NB]. We apply GD algorithm to each problem for N=8 and B=2, obtain the updated embedding matrices, and then show the heatmap plot of N× N gram matrix containing all the pairwise inner products _i^⊺_j in Fig. <ref>(b)-(d). Here, we plot for two regimes: d=2N for the top row, and d=N/2 for the bottom row. In Fig. <ref>(a), we plot the gram matrix for the optimal solution obtained in Sec. <ref>. One can observe that when either full-batch or all NB mini-batches are used for training, the trained embedding vectors reach a simplex ETF and simplex cross-polytope solutions for d=2N and d=N/2, respectively, as proved in Thm <ref>. In contrast, when a strict subset of NB mini-batches are used for training, these solutions are not achieved. Second, we compare the convergence speed of three algorithms in mini-batch optimization: (i) OSGD; (ii) the proposed SC method; and (iii) SGD (see details of the algorithms in Appendix <ref>). Fig. <ref> shows the ^⋆⊺^⋆-^(t)⊺^(t)_F which is the Frobenius norm of the difference between heatmaps of the ground-truth solution (^⋆, ^⋆) and the embeddings at each step t. We restrict the number of updates for all algorithms, specifically 500 steps. We observe that both OSGD and the proposed method nearly converge to the ground-truth solutions proved in Thm. <ref> within 500 steps, while SGD does not. We obtain similar results for other values of N and d, given in Appendix <ref>. §.§ Real Datasets Here we show that the proposed SC method is effective in more practical settings where the embedding is learned by a parameterized encoder, and can be easily applied to existing uni-modal frameworks, such as SimCLR <cit.> and SogCLR <cit.>. We conduct mini-batch contrastive learning on CIFAR-100 and Tiny ImageNet datasets and report the performances in the image retrieval downstream task on corrupted datasets, the results of which are in Table <ref>. Due to the page limit, we provide detailed experimental information in the Appendix <ref>. § CONCLUSION We provided a thorough theoretical analysis of mini-batch contrastive learning. First, we showed that the solution of mini-batch optimization and that of full-batch optimization are identical if and only if all N B mini-batches are considered. Second, we analyzed the convergence of OSGD and devised spectral clustering (SC) method, a new batch selection method which handles the complexity issue of OSGD in mini-batch contrastive learning. Experimental results support our theoretical findings and the efficacy of SC. 
§ LIMITATIONS We note that our theoretical results have two major limitations: * While we would like to extend our results to the general case of N > d+1, we were only able to characterize the optimal solution for the specific case of N=2d. Furthermore, our result for the case of N=2d in Thm. <ref> requires the use of the conjecture that the optimal solution is symmetric and antipodal. However, as mentioned by <cit.>, the general case of N > d+1 seems quite challenging in the non-asymptotic regime. * In practice, the embeddings are usually the output of a shared neural network encoder. However, our results are for the case when the embeddings only have a norm constraint. Thus, our results do not readily indicate any generalization to unseen data. We expect however, that it is possible to extend our results to the shared encoder setting by assuming sufficient overparameterization. icml2022 § ORGANIZATION OF THE APPENDIX * In Appendix <ref>, we introduce an additional definition for posterity. * In Appendix <ref>, we provide detailed proofs of the theoretical results as well as any intermediate results/lemmas that we found useful. * Appendix <ref> provides proofs of the results from Section <ref> which focuses on the relationship between the optimal solutions for minimizing the mini-batch and full-batch constrastive loss. * Appendix <ref> contains the proofs of results from Section <ref> which concern the application of Ordered SGD to mini-batch contrastive learning. * Appendix <ref> is intended to supplement Appendix <ref>. It contains auxiliary notation and proofs required in the proof of Theorem <ref>. * Appendix <ref> specifies the pseudo-code and details for the three algorithms: (i) Spectral Clustering; (ii) Stochastic Gradient Descent (SGD) and (iii) Ordered SGD (OSGD). * Appendix <ref> describes the details of the experimental settings from Section <ref> while also providing some additional results. § ADDITIONAL DEFINITION A set of N vectors {_i}_i=1^N in the ℝ^d form an equiangular tight frame (ETF) if (i) they are all unit norm: ‖_i ‖ = 1 for every i ∈ [N], (ii) they are equiangular: ‖_i^⊺_j ‖ = α≥ 0 for all i j and some α≥ 0, and (iii) they form a tight frame: ^⊺ = (N/d) 𝕀_d where is a d× N matrix whose columns are _1, _2, …, _N, and 𝕀_d is the d× d identity matrix. § PROOFS §.§ Proofs of Results From Section <ref> * First, we define the contrastive loss as the sum of two symmetric one-sided contrastive loss terms to simplify the notation. We denote the following term as the one-sided contrastive loss (, ) = 1/N∑_i=1^N -log(e^_i^⊺_i/∑_j=1^N e^_i^⊺_j). Then, the overall contrastive loss is given by the sum of the two one-sided contrastive losses: ^con(, ) = (, ) + (, ). Since ^con is symmetric in its arguments, results pertaining to the optimum of (, ) readily extend to ^con. Now, let us consider the simpler problem of minimizing the one-sided contrastive loss from Eq. (<ref>) which reduces the problem to exactly the same setting as <cit.>: (, ) = 1/N∑_i=1^N -log(e^_i^⊺_i/∑_j=1^N e^_i^⊺_j) = 1/N∑_i=1^N log(1 + ∑_j=1,j≠ i^N e^(_j - _i)^⊺_i). Note that, we have for any fixed 1≤ i ≤ N, ∑_j=1,j≠ i^N e^(_j - _i)^⊺_i = e^-(_i^⊺_i)∑_j=1,j≠ i^N e^_j^⊺_i = (N-1) e^-(_i^⊺_i)(1/N-1) ∑_j=1,j≠ i^N e^_j^⊺_i (a)≥ (N-1)e^-(_i^⊺_i)exp(1/N-1∑_j=1,j≠ i^N _j^⊺_i ) (b)= (N-1)e^-(_i^⊺_i)exp(^⊺_i - _i^⊺_i/N-1) = (N-1) exp(^⊺_i - N(_i^⊺_i)/N-1) , where (a) follows by applying Jensen inequality for e^t and (b) follows from :=∑_i=1^N_i. 
Since log(·) is monotonic, we have that x > y ⇒log(x) > log(y) and therefore, (, ) ≥1 N∑_i=1^N log[1 + (N-1)exp(^⊺_i/N-1 - N(_i^⊺_i)/N-1)] (c)≥log[ 1 + (N-1)exp(1/N∑_i=1^N ( ^⊺_i/N-1 - N(_i^⊺_i)/N-1) )] (d)=log[1 + (N-1)exp(1/N(^⊺/N-1 - N/N-1∑_i=1^N(_i^⊺_i))) ], where (c) follows by applying Jensen inequality to the convex function ϕ(t) = log(1 + ae^bt) for a, b > 0, and (d) follow from := ∑_i=1^N _i. Note that for equalities to hold in Eq. (<ref>) and (<ref>), we need constants c_i, c such that _j^⊺_i = c_i ∀ j ≠ i , ^⊺_i/N-1 - N(_i^⊺_i)/N-1 = c ∀ i ∈ [N] . Since log(·) and exp(·) are both monotonic, minimizing the lower bound in Eq. (<ref>) is equivalent to min ^⊺/N-1 - N/N-1∑_i=1^N _i^⊺_i ⇔max N∑_i=1^N _i^⊺_i - (∑_i=1^N _i)^⊺(∑_i=1^N _i). All that remains is to show that the solution that maximizes Eq <ref> also satisfies the conditions in Eq. (<ref>) and (<ref>). To see this, first note that the maximization problem can be written as max _stack^⊺ ((N𝕀_N - 1_N 1_N^⊺) ⊗𝕀_d) _stack where _stack = (_1, _2, …, _n) is a vector in ℝ^Nd formed by stacking the vectors _i together. _stack is similarly defined. 𝕀_N denotes the N× N identity matrix, 1_N denotes the all-one vector in ℝ^n, and ⊗ denotes the Kronecker product. It is easy to see that ‖_stack‖=‖_stack‖ = √(N) since each ‖_i ‖=‖_i ‖ = 1. Since the eigenvalues of A ⊗ B are the product of the eigenvalues of A and B, in order to analyze the spectrum of the middle term in the above maximization problem, it suffices to just consider the eigenvalues of (N𝕀_N - 1_N 1_N^⊺). As shown by the elegant analysis in <cit.>, (N𝕀_N - 1_N 1_N^⊺) = N for any ∈ℝ^N such that ∑_i=1^N _i = 0 and (N𝕀_N - 1_N 1_N^⊺) = 0 for any ∈ℝ^N such that = k 1_N for some k ∈ℝ. Therefore it follows that its eigenvalues are N with multiplicity (N-1) and 0. Since its largest eigenvalue is N and since ‖_stack‖ = ‖_stack‖ = √(N), applying cauchy schwarz inequality, we have that max _stack^⊺ (N𝕀_N - 1_N 1_N^⊺) ⊗𝕀_d) _stack^⊺ = ‖_stack‖·‖ (N𝕀_n - 1_n 1_n^⊺) ⊗𝕀_d) ‖·‖_stack‖ = √(N) (N) √(N) = N^2. Moreover, we see that setting _i = _i and setting {_i}_i=1^N to be the simplex ETF attains the maximum above while also satisfying the conditions in Eq. (<ref>) and (<ref>) with c_i = -1/(N-1) and c=-N/(N-1). Therefore, the inequalities in Eq. (<ref>) and (<ref>) are actually equalities for _i = _i when they are chosen to be the simplex ETF in ℝ^d which is attainable since d ≥ N-1. Therefore, we have shown that if ^⋆ = {_i^⋆}_i=1^N is the simplex ETF and _i^⋆ = _i^⋆ ∀ i ∈ [N], then ^⋆, ^⋆ = argmin_, (, ) over the unit sphere in ℝ^n. All that remains is to show that this is also the minimizer for . First note that ^⋆, ^⋆ is also the minimizer for (, ) through symmetry. One can repeat the proof exactly by simply exchanging _i and _i to see that this is indeed true. Now recalling Eq. (<ref>), we have min = min((, ) + (, )) ≥min((, )) + min((, )) = (^⋆, ^⋆) + (^⋆, ^⋆). However, since the minimizer of both terms in Eq. (<ref>) is the same, the inequality becomes an equality. Therefore, we have shown that (^⋆, ^⋆) is the minimizer of completing the proof. In the proof of the above Lemma, we only show that the simplex ETF attains the minimum loss in Eq. (<ref>), but not that it is the only minimizer. The proof of <cit.> can be extended to show that this is indeed true as well. We omit it here for ease of exposition. 
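The optimality of the simplex ETF can also be sanity-checked numerically for small instances. The following sketch is our own (it takes d = N ≥ N-1 and sets _i = _i): it compares the full-batch loss at the simplex ETF against randomly drawn unit-norm embeddings, and the ETF value should always be smaller.

import numpy as np
from scipy.special import logsumexp

def full_batch_loss(U, V):
    S = U @ V.T                                  # S[i, j] = u_i^T v_j
    row = np.diag(S) - logsumexp(S, axis=1)      # log-softmax over columns (first one-sided loss)
    col = np.diag(S) - logsumexp(S, axis=0)      # log-softmax over rows (second one-sided loss)
    return -(row.mean() + col.mean())

N = 6
etf = np.sqrt(N / (N - 1)) * (np.eye(N) - np.ones((N, N)) / N)   # rows: simplex ETF in R^N
rng = np.random.default_rng(0)
R = rng.standard_normal((N, N))
R /= np.linalg.norm(R, axis=1, keepdims=True)                    # random unit-norm baseline
print(full_batch_loss(etf, etf), full_batch_loss(R, R))          # ETF loss is lower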
* By applying the logarithmic property that allows division to be represented as subtraction, (, ) = - 1 N∑_i=1^Nlog(e^_i^⊺_i/∑_j=1^N e^_i^⊺_j) =-1 N∑_i=1^N[ _i^⊺_i - log( ∑_j=1^N e^_i^⊺_j)]. Since = (symmetric property), the contrastive loss satisfies (, ) = 2(, ) =-2 N∑_i=1^N[ _i^⊺_i - log( ∑_j=1^N e^_i^⊺_j)] = -2+ 2 N∑_i=1^Nlog(∑_j=1^N e^_i^⊺_j) . Since _i=1 for any i ∈ [N], we can derive the following relations: _i-_j^2 = 2-2_i^⊺_j, _i^⊺_j = 1-_i-_j^2 2. We incorporate these relations into Eq. (<ref>) as follows: (, ) =-2+2 N∑_i=1^Nlog(∑_j=1^N e^1-_i-_j^2/2) =2 N∑_i=1^Nlog(∑_j=1^N e^-_i-_j^2/2). The antipodal property of indicates that for each i ∈[N], there exists a j(i) such that u_j(i)=-u_i. By applying this property, we can manipulate the summation of e^-_i-_j^2/2 over j as the following: ∑_j=1^N e^-_i-_j^2/2 = e^-_i-_i^2/2+e^-_i-_j(i)^2/2+∑_j ≠ i, j(i) e^-_i-_j^2/2 =1+e^-2+∑_j ≠ i, j(i) e^-_i-_j^2/2. Therefore, (, ) = 2 N∑_i=1^Nlog( 1+e^-2+∑_j ≠ i, j(i) e^-_i-_j^2/2) (a)≥2 N(N-2)∑_i=1^N∑_j ≠ i, j(i)log(1+e^-2+(N-2)e^-_i-_j^2/2) = 2 N(N-2)∑_i=1^N∑_j ≠ ilog(1+e^-2+(N-2)e^-_i-_j^2/2) - 2 N-2log(1+(N-1)e^-2) (b)≥2 N(N-2)∑_i=1^N∑_j ≠ ilog(1+e^-2+(N-2)e^-_i^⋆-_j^⋆^2/2) - 2 N-2log(1+(N-1)e^-2), where (a) follows by applying Jensen's inequality to the concave function f(t)=log(1+e^-2+t); and (b) follows by Lem. <ref>, and the fact that function g(t)=log[1+e^-2+(N-2)e^-t/2] is convex and monotonically decreasing. {^⋆_1, ⋯, ^⋆_N} denotes a set of vectors which forms a cross-polytope. Both inequalities in (a) and (b) are equalities only when the columns of form a cross-polytope. Therefore, the columns of ^⋆ form a cross-polytope. Given a function g(t) is convex and monotonically decreasing, let ^* := min_∈∑_i=1^N∑_j ≠ i g(_i - _j^2) s.t. _i=1, _i=1 ∀ i∈[N], where :={: is antipodal}. Then, the columns of ^* form a simplex cross-polytope for N=2d. Suppose N=2d and ∈. Given a function g(t) is convex and monotonically decreasing. j(i) denotes the corresponding index for i such that _j(i)=-_i, and _i-_j(i)^2=4. Under these conditions, we derive the following: ∑_i=1^N∑_j ≠ i g(_i - _j^2) = N g(4)+ ∑_i=1^N∑_j ≠ i, j(i) g(_i - _j^2) (a)≥ N g(4)+N(N-2)g(1/N(N-2)∑_i=1^N∑_j ≠ i, j(i)_i - _j^2) = N g(4)+N(N-2)g(1/N(N-2)(-4N+∑_i=1^N∑_j =1^N_i - _j^2)) = N g(4)+N(N-2)g(1/N(N-2)(-4N+∑_i=1^N∑_j =1^N(2-2_i^⊺_j ))) = N g(4)+N(N-2)g(1/N(N-2)(-4N+2N^2-∑_i=1^N_i^2)) (b)≥ N g(4)+N(N-2)g(1/N(N-2)(-4N+2N^2)) =Ng(4)+N(N-2)g(2), where (a) follows by Jensen's inequality; and (b) follows from the fact that ∑_i=1^N_i^2 ≥ 0 and the function g(t) is monotonically decreasing. The equality conditions for (a) and (b) only hold when the columns of form a cross-polytope. We can conclude that the columns of ^⋆ form a cross polytope. * Consider , defined such that _i = _i = _i ∀ i ∈ [N], where _i is i-th unit vector in ℝ^N. First note that _i^⊺_i = 1 ∀ i ∈ [N] and _i^⊺_j = 0 ∀i ≠ j. Then, (, ) = log(e + N - 1) -1 , 1/N B∑_i=1^N B(__i, __i) = log(e + B - 1) - 1 . We now consider the second part of the statement. For contradiction, assume that there exists some c ∈ℝ such that _mini(,; _B)= c·(,) for all ,. Let , be defined such that _i = _i = _1 ∀ i ∈ [N], where _1 = (1, 0, ⋯, 0). Note that _i^⊺_j = 1 ∀ i, j ∈ [N]. Then, (, ) = log(N) , 1/N B∑_i=1^N B(__i, __i) = log(B). From Eq. (<ref>) and (<ref>), we have that c = log(e+B-1)-1/log(e+N-1)-1. Whereas from Eq. (<ref>) and (<ref>), we have that c = log(B)/log(N) which is a contradiction. Therefore, there exists no c ∈ℝ satisfying the given condition. * Case (i): Suppose N ≤ d+1. 
For simplicity, first consider just one of the two terms in the two-sided loss. Therefore, the optimization problem becomes min_, 1/NB∑_i=1^NB(__i, __i) s.t. ‖_i ‖ = 1, ‖_i ‖ = 1 ∀ i ∈ [N]. Similar to the proof of Lem. <ref>, we have that ∑_i=1^N B(__i, __i) = 1 B∑_i=1^N B∑_j ∈_ilog(1 + ∑_k∈_i k≠ j e^_j^⊺ (_k - _j)) (a)≥1 B∑_i=1^N B∑_j ∈_ilog(1 + (B-1)exp(∑_k∈_i, k≠ j_j^⊺ (_k - _j)/B-1)) = 1 B∑_i=1^N B∑_j ∈_ilog(1 + (B-1) exp(∑_k ∈_i(_j^⊺_k - B _j^⊺_j)/B-1)) (b)≥N Blog(1 + (B-1) exp(∑_i=1^N B∑_j ∈_i∑_k ∈_i_j^⊺_k - ∑_i=1^N B∑_j ∈_iB _j^⊺_j/N B· B · (B-1))), where (a) and (b) follows by applying Jensen's inequality to e^t and log(1+ae^bt) for a,b>0, respectively. Note that for equalities to hold in Jensen's inequalities, we need constants c_j, c such that _j^⊺_k = c_j ∀ k ≠ j , ^⊺_i/N-1 - N(_i^⊺_i)/N-1 = c ∀ i ∈ [N] . Now, we carefully consider the two terms in the numerator: A_1:= ∑_i=1^N B∑_j ∈_i∑_k ∈_i_j^⊺_k, A_2 := ∑_i=1^N B∑_j ∈_i B _j^⊺_j. To simplify A_1, first note that for any fixed l, m ∈ [N] such that l≠ m, there are N-2B-2 batches that contain l and m. And for l=m, there are N-1B-1 batches that contain that pair. Since these terms all occur in A_1, we have that A_1 = N-2B-2∑_l=1^N ∑_m=1^N _l^⊺_m + [N-1B-1 - N-2B-2]∑_l=1^N _l^⊺_l = N-2B-2∑_l=1^N ∑_m=1^N _l^⊺_m + N-2B-2(N-B/B-1) ∑_l=1^N _l^⊺_l. Similarly, we have that A_2 = N-1B-1 B ∑_l=1^N _l^⊺_l. Plugging these back into the above inequality, we have that ∑_i=1^N B(__i, __i) ≥N Blog(1 + (B-1)exp(∑_l=1^N∑_m=1^N _l^⊺_m - N∑_l=1^N _l^⊺_l/N(N-1))) = N Blog(1 + (B-1)exp(^⊺ - N∑_i=1^N _i^⊺_i/N(N-1))). Observe that the term inside the exponential is identical to Eq. (<ref>) and therefore, we can reuse the same spectral analysis argument to show that the simplex ETF also minimizes ∑_i=1^N B(__i, __i). Once again, since the proof is symmetric the simplex ETF also minimizes ∑_i=1^N B(__i, __i). Case (ii): Suppose N = 2d, and , are symmetric and antipodal. Next, we consider the following optimization problem min_(,)∈ 1/NB∑_i=1^NB(__i, __i) s.t. ‖_i ‖ = 1, ‖_i ‖ = 1 ∀ i ∈ [N], where :={(,): , are symmetric and antipodal }. Since = (symmetric property) the contrastive loss satisfies (__i, __i) = 2(__i, __i) =-2 B∑_j∈_i[ _j^⊺_j - log( ∑_k∈_i e^_j^⊺_k)] = -2+ 2 B∑_j∈_ilog(∑_k∈_i e^_j^⊺_k) . Therefore, the solution of the optimization problem in Eq. (<ref>) is identical to the minimizer of the following optimization problem: ^⋆:=min_ ∑_i=1^N B∑_j ∈_ilog(∑_k ∈_i e^_j^⊺_k). The objective of the optimization problem can be rewritten by reorganizing summations as ∑_j=1^N∑_i∈_jlog(∑_k ∈_i e^_j^⊺_k), where _j:={i: j ∈_i } represents the set of batch indices containing j. We then divide the summation term in Eq. (<ref>) into two terms: ∑_j=1^N∑_i∈_jlog(∑_k ∈_i e^_j^⊺_k) = ∑_j=1^N∑_i∈_jlog(∑_k ∈_i e^_j^⊺_k) +∑_j=1^N∑_i∈_j^clog(∑_k ∈_i e^_j^⊺_k), by partitioning the set _j for each j ∈ [N] into as the following with k(j) being the index for which u_k(j)=-u_j: _j := {i: j ∈_i, and k(j) ∈_i }; _j^c := {i: j∈_i, and k(j) ∉_i}. We will prove that the columns of ^* form a cross-polytope by showing that the minimizer of each term of the RHS in Eq. (<ref>) also forms a cross-polytope. Let us start with the first term of the RHS in Eq. (<ref>). 
Starting with applying Jensen's inequality to the concave function f(x) :=log(e+e^-1+ x), we get: ∑_j=1^N∑_i ∈_jlog( ∑_k ∈_i e^_j^⊺_k) =∑_j=1^N∑_i ∈_jlog( e+e^-1+∑_k ∈_i ∖{j, k(j)} e^_j^⊺_k) ≥1 B-2∑_j=1^N∑_i ∈_j∑_k ∈_i ∖{j, k(j)}log(e+e^-1+(B-2)e^_j^⊺_k) =1 B-2∑_j=1^N∑_k ∉{j, k(j)}N-3 B-3log(e+e^-1+(B-2)e^_j^⊺_k) =N-3 B-3 B-2[∑_j=1^N∑_k ≠ jlog(e+e^-1+(B-2)e^_j^⊺_k) - N log(e+(B-1)e^-1) ] =N-3 B-3 B-2[∑_j=1^N∑_k ≠ jlog(e+e^-1+(B-2)e· e^-_j- _k^2/2) - N log(e+(B-1)e^-1) ] (a)≥N-3 B-3 B-2[∑_j=1^N∑_k ≠ jlog(e+e^-1+(B-2)e· e^-_j^⋆- _k^⋆^2/2) - N log(e+(B-1)e^-1) ], where (a) follows by Lem. <ref> and the fact that g(t)=log(a+be^-t/2) for a,b>0 is convex and monotonically decreasing. {^⋆_1, ⋯, ^⋆_N} denotes a set of vectors which forms a cross-polytope. All equalities hold only when the columns of form a cross-polytope. Next consider the second term of the RHS in Eq. (<ref>). By following a similar procedure above, we get: ∑_j=1^N∑_i ∈^c_jlog( ∑_k ∈_i e^_j^⊺_k) ≥1 B-1∑_j=1^N∑_i ∈_j∑_k ∈_i ∖{j}log(e+ (B-1)e^_j^⊺_k) = 1 B-1∑_j=1^N∑_k ∉{j, k(j)}N-3B-2log(e+ (B-1)e^_j^⊺_k) =N-3 B-2 B-1[ ∑_j=1^N∑_k ≠ jlog(e+(B-1)e^_j^⊺_k) - Nlog(e+(B-1)e^-1)] ≥N-3 B-2 B-1[∑_j=1^N∑_k ≠ jlog(e+(B-1)e· e^-_j^⋆- _k^⋆^2/2) - Nlog(e+(B-1)e^-1)], where {^⋆_1, ⋯, ^⋆_N} denotes a set of vectors which forms a cross-polytope. Both terms of RHS in Eq. (<ref>) have the minimum value when forms a cross-polytope. Therefore, we can conclude that the columns of ^⋆ form a cross-polytope. * Consider a set of batches _B⊂[N 2] with the batch size B=2. Without loss of generality, assume that (1, 2) ∉⋃_i ∈_B{_i}. For contradiction, assume that the simplex ETF - (^⋆, ^⋆) is indeed the optimal solution of the loss over these _B batches. Then, by definition, we have that for any (, ) ≠ (^⋆, ^⋆), 1/|_B |∑_i ∈_B(^⋆__i, ^⋆__i) ≤1/|_B |∑_i ∈_B(__i, __i) ⇒∑_i ∈_B(^⋆__i, ^⋆__i) ≤∑_i ∈_B(__i, __i), where (^⋆, ^⋆) is defined such that _i^⋆ = _i^⋆ for all i ∈ [N] and _i^⋆^⊺_j^⋆ = -1/(N-1) for all i ≠ j. Also recall that ‖_i ‖ = ‖_i ‖ = 1 for all i ∈ [N]. Therefore, we also have ∑_i ∈_B (^⋆__i, ^⋆__i) = ∑_i ∈_B∑_j ∈_ilog(1 + ∑_k ∈_i, k≠ jexp(_j^⋆^⊺ (_k^⋆ - _j^⋆))) = ∑_i ∈_B∑_j ∈_ilog(1 + ∑_k ∈_i, k≠ jexp(-1/N-1 - 1)) = ∑_i ∈_B∑_j ∈_ilog(1 + exp(-1/N-1 - 1)), where the last equality is due to the fact that |_i| = 2. Now, let us consider (, ) defined such that _i = _i for all i ∈ [N], and _i^⊺_j = -1/(N-2) for all i≠ j, (i, j) ∉{(1,2), (2,1)}. Intuitively, this is equivalent to placing _2, …, _N on a simplex ETF of N-1 points and setting _1 = _2. This is clearly possible because d > N-1 ⇒ d > N-2, which is the condition required to place N-1 points on a simplex ETF in ℝ^d. Therefore, ∑_i ∈_B (__i, __i) = ∑_i ∈_B∑_j ∈_ilog(1 + ∑_k ∈_i, k≠ jexp(_j^⊺ (_k - _j))) = ∑_i ∈_B∑_j ∈_ilog(1 + ∑_k ∈_i, k≠ jexp(-1/N-2 - 1)) = ∑_i ∈_B∑_j ∈_ilog(1 + exp(-1/N-2 - 1)), where the last equality follows since (1, 2) ∉⋃_i ∈_B{_i}. It is easy to see from Eq. (<ref>) and (<ref>) that ∑_i ∈_B (__i, __i) < ∑_i ∈_B (^⋆__i, ^⋆__i) which contradicts Eq. (<ref>). Therefore, the optimal solution of minimizing the contrastive loss over any _B ⊂[N 2] batches is not the simplex ETF completing the proof. propositionpropTwo Suppose B≥ 2, and let _B⊆[NB] be a set of mini-batch indices. If there exist two data points that never belong together in any mini-batch, ∃ i,j∈[N] s.t. {i,j}⊄_k for all k∈_B, then the optimal solution of Eq. (<ref>) is not the minimizer of the full-batch problem in Eq. (<ref>). The proof follows in a fairly similar manner to that of Thm. <ref>. 
Consider a set of batches of size B ≥ 2, _B ⊂ [N B]. Without loss of generality, assume that {1, 2}⊄_k for any k ∈_B. For contradiction, assume that the simplex ETF - (^⋆, ^⋆) is the optimal solution of the loss over these _B batches. Then, by definition, we have that for any (, ) ≠ (^⋆, ^⋆) Once again, for contradiction assume that the simplex ETF - (^⋆, ^⋆) is indeed the optimal solution of the loss over these _B batches. Then, by definition for any (, ) ≠ (^⋆, ^⋆), 1/|_B |∑_i ∈_B(^⋆__i, ^⋆__i) ≤1/|_B |∑_i ∈_B(__i, __i) ⇒∑_i ∈_B(^⋆__i, ^⋆__i) ≤∑_i ∈_B(__i, __i), where (^⋆, ^⋆) is defined such that _i^⋆ = _i^⋆ for all i ∈ [N] and _i^⋆^⊺_j^⋆ = -1/(N-1) for all i ≠ j. Also recall that ‖_i ‖ = ‖_i ‖ = 1 for all i ∈ [N]. Therefore, we also have ∑_i ∈_B (^⋆__i, ^⋆__i) = 1 B∑_i ∈_B∑_j ∈_ilog(1 + ∑_k ∈_i, k≠ jexp(_j^⋆^⊺ (_k^⋆ - _j^⋆))) = 1 B∑_i ∈_B∑_j ∈_ilog(1 + ∑_k ∈_i, k≠ jexp(-1/N-1 - 1)) = 1 B∑_i ∈_B∑_j ∈_ilog(1 + (B-1)exp(-1/N-1 - 1)). Now, let us consider (, ) defined such that _i = _i for all i ∈ [N], _2 = _2 and _i^⊺_j = -1/(N-2) for all i≠ j, (i, j) ∉{(1,2), (2,1)}. Once again, note that this is equivalent to placing _2, …, _N on a simplex ETF of N-1 points and setting _1 = _2. Hence, ∑_i ∈_B (__i, __i) = 1 B∑_i ∈_B∑_j ∈_ilog(1 + ∑_k ∈_i, k≠ jexp(_j^⊺ (_k - _j))) = 1 B∑_i ∈_B∑_j ∈_ilog(1 + ∑_k ∈_i, k≠ jexp(-1/N-2 - 1)) = 1 B∑_i ∈_B∑_j ∈_ilog(1 + (B-1)exp(-1/N-2 - 1)), where for the final equality note that following. The only pair for which _j^⊺_k ≠ -1/(N-2) is (j, k) = (1,2). Since there is no i ∈_B such that {1, 2}∈_i, this term never appears in our loss. From Eq. (<ref>) and Eq. (<ref>), we have that ∑_i ∈_B (__i, __i) < ∑_i ∈_B (^⋆__i, ^⋆__i) which contradicts Eq. (<ref>). Therefore, we conclude that the optimal solution of the contrastive loss over any _B ⊂[N 2] batches is not the simplex ETF. propositionpropThree Suppose B ≥ 2, and let _B ⊆[NB] be a set of mini-batch inidices satisfying _i⋂_j = ∅, ∀ i,j∈_B and ⋃_i∈_B_i = [N], {_i}_i ∈_B forms non-overlapping mini-batches that cover all data samples. Then, the minimizer of the mini-batch loss optimization problem in Eq. (<ref>) is different from the minimizer of the full-batch loss optimization problem in Eq. (<ref>). Applying Lem. <ref> specifically to a single batch _i gives us that the optimal solution for just the loss over this batch is the simplex ETF over B points. In the case of non-overlapping batches, the objective function can be separated across batches and therefore the optimal solution for the sum of the losses is equal to the solution of minimizing each term independently. More precisely, we have min_, ∑_i=1^N/B^con(__i, __i) =∑_i=1^N/Bmin___i, __i^con(__i, __i), where __i = {_j: j∈_i} and __i = {_j: j∈_i}, respectively, and the equality follows from the fact that _i's are disjoint. §.§ Proofs of Results From Section <ref> * The contrastive loss function is geodesic quasi-convex if for any two points (, ) and (', ') in the domain and for all t in [0,1]: (t(, )+(1-t)(', '))≤max{(, ), (', ')}. We provide a counter-example for geodesic quasi-convexity, which is a triplet of points (^1, ^1), (^2, ^2), (^3, ^3) where (^3, ^3) is on the geodesic between other two points and satisfies (^3, ^3) > max{(^1, ^1), (^2, ^2)}. Let N = 2 and ^1 = [ √(1/2) √(2/5); √(1/2) √(1/5) ], ^2 = [ √(1/2) √(1/2); √(1/2) √(1/2) ], ^1 = [ √(1/2) √(1/2); √(1/2) √(1/2) ], ^2 = [ √(2/5) √(1/2); √(1/5) √(1/2) ]. 
Now, define ^3 = normalize((^1 + ^2)/2) and ^3 = normalize((^1 + ^2)/2), which is the “midpoint” of the geodesic between (^1, ^1) and (^2, ^2). By direct calculation, we obtain (^3, ^3) ≈ 2.798 > 2.773 ≈max((^1, ^1), (^2, ^2)), which indicates is geodesic non-quasi-convex. Consider N=4 samples and their embedding vectors {_i}_i=1^N, {_i}_i=1^N with dimension d=2. Suppose _i's are parametrized by ^(t) = [θ_1^(t), θ_2^(t), θ_3^(t), θ_4^(t)] as in the setting described in Sec. <ref> (see Fig. <ref>). Consider initializing _i^(0) = _i^(0) and θ_i^(0) = ϵ > 0 for all i, then updating ^(t) via OSGD and SGD with the batch size B=2 as described in Sec. <ref>. Let T_OSGD, T_SGD be the minimal time required for OSGD, SGD algorithm to have 𝔼[^(T)] ∈ (π/4 - ρ, π/4)^N. Suppose there exist ϵ̃, T such that for all t satisfying ^(t)={1,3} or {2,4}, ∇_^(t) (_^(t), _^(t))≤ϵ̃, and T_OSGD, T_SGD< T. Then, T_OSGD≥π/4 - ρ - ϵ +O(η^2 ϵ + ηϵ^3) ηϵ, T_SGD≥3(e^2+1) e^2-1π/4-ρ-ϵ+O(η^2 ϵ+η^2 ϵ̃) ηϵ+O(ηϵ^3+ηϵ̃). We begin with the proof of T_OSGD≥π/4 - ρ - ϵ +O(η^2 ϵ + ηϵ^3) ηϵ. Assume that the parameters are initialized at (θ^(0)_1, θ^(0)_2, θ^(0)_3, θ^(0)_4) = (ϵ, ϵ, ϵ, ϵ). Then, there are six batches with the batch size B=2, and we can categorize the batches according to the mini-batch contrastive loss: * ={1, 2} or {3, 4}: (_, _) = -2+2 log(e+e^cos 2ϵ); * ={1, 3} or {2, 4}: (_, _) = -2+2 log(e+e^ -1); * ={1, 4} or {2, 3}: (_, _) = -2+2 log(e+e^- cos 2ϵ). Without loss of generality, we assume that OSGD algorithm described in Algo. <ref> chooses the mini-batch = {1, 2} corresponding to the highest loss at time t=0, and updates the parameter as θ_1^(1) = ϵ - η∇_θ_1(_, _), θ_2^(1) = ϵ - η∇_θ_2(_, _) . Then, for the next update, OSGD choose _3, _4 which is now closer than updated _1, _2. And _3, _4 would be updated as same as what previously _1, _2 have changed. Thus, θ_1 updates only at the even time, and stays at the odd time, i.e. θ_1^(t+1)=θ_1^(t)- η∇_θ_1(_, _) if t is even, θ_1^(t) if t is odd. Iterating this procedure, we can view OSGD algorithm as one-parameterized algorithm of parameter ϕ^(t)=θ_1^(2t) as: ϕ^(0) = ϵ, ϕ^(t) = ϕ^(t-1) + η g(ϕ^(t-1)), ϕ^(T_half)∈(π 4-ρ, π 4), where g(ϕ) = 2 sin (2ϕ) / (1+ e^1- cos(2ϕ)), and T_half:=T_OSGD/2. In the procedure of updates, we may assume that ϕ^(t)∈ (0, π 4) for all t. To analyze the drift of ϕ^(t), we firstly study smoothness of g; g' (ϕ) =4e^cos 2ϕ(cos 2ϕ (e+e^cos 2ϕ)-e sin^2 2ϕ) (e+e^cos 2ϕ)^2. We can observe that max_ϕ∈ [0, π 4] | g' (ϕ)| = 2, hence g(ϕ) has Lipschitz constant 2, i.e. |g(ϕ^(t-1)) - g(ϕ^(0))| ≤ 2 |ϕ^(t-1)-ϕ^(0)|. Therefore, ϕ^(t) - ϕ^(t-1) = η| g (ϕ^(t-1)) | ≤η | g (ϵ) | + 2η (ϕ^(t-1)-ϵ) =2 ηϕ^(t-1)+O(ηϵ^3), where the first inequality is from Lipschitz-continuity of g(ϕ), and the second equality is from Taylor expansion of g at ϵ =0 as; g(ϵ)=2ϵ - 10/3ϵ^3+34/15ϵ^5 + ⋯. Hence, ϕ^(t)≤ (1+2η) ϕ^(t-1) + O(ηϵ^3) indicates that ϕ^(T_half) ≤ (1+2η)^T_halfϵ + T O(ηϵ^3) ≤ (1+2η T_half)ϵ + O(η^2 ϵ+ηϵ^3), for some constant T>T_OSGD. Moreover π 4-ρ < ϕ^(T_half) implies that T_half≥1 2π/4 - ρ - ϵ +O(ηϵ^3 + η^2 ϵ) ηϵ. So, we obtain the lower bound of T_OSGD by doubling T_half. We estimate of T_OSGD. Now, we study convergence rate of SGD algorithm. We claim that T_SGD≥3(e^2+1) e^2-1π/4-ρ-ϵ+O(η^2 (ϵ+ ϵ̃)) ηϵ+O(η (ϵ^3+ ϵ̃)). Without loss of generality, we firstly focus on the drift of θ_1. Since batch selection is random, given ^(t) = (θ_1^(t), θ_2^(t), θ_3^(t), θ_4^(t)): * ={1, 2} with probability 1 / 6. 
Then, (_, _) = -2+2 log(e+e^cos (θ_1^(t) + θ_2^(t))) implies θ_1^(t+1) = θ_1^(t) + η2 sin (θ_1^(t)+θ_2^(t)) 1+ e^1- cos(θ_1^(t) + θ_2^(t)). * ={1, 3} with probability 1 / 6. At t=0, the initial batch selection can be primarily categorized into three distinct sets; closely positioned vectors {_1, _2} or {_3, _4}, vectors that form obtuse angles {_1, _4} or {_2, _3}, and vectors diametrically opposed at 180^∘, {_1, _3} or {_2, _4}. Given that ϵ is substantially small, the possibility of consistently selecting batches from the same category for subsequent updates is relatively low. As such, it is reasonable to infer that each batch is likely to maintain its position within the initially assigned categories. From this, one can deduce that vector sets such as {_1, _3} or {_2, _4} continue to sustain an angle close to 180^∘. Given these conditions, it is feasible to postulate that if the selected batch encompasses either {1, 3} or {2, 4}, the magnitude of the gradient of the loss function (U_, V_), denoted by ∇(U_, V_), would be less than a particular threshold ϵ̃, i.e. ∇(U_, V_)< ϵ̃. Then, θ_1^(t+1) = θ_1^(t)+ η O(ϵ̃). * ={1, 4} with probability 1 / 6. Then, (_, _) = -2+2 log(e+e^- cos (θ_1+θ_4)) implies θ_1^(t+1) = θ_1^(t) - η2 sin (θ_1^(t)+θ_4^(t)) 1+ e^1+ cos(θ_1^(t) + θ_4^(t)). Since there is no update on θ_1 for the other cases, taking expectation yields 𝔼[θ_1^(t+1) - θ_1^(t)|^(t)]=η 6 F_1(^(t)) + O(ηϵ̃), where F_1() is defined as: F_1()=2 sin (θ_1+θ_2) 1+ e^1- cos(θ_1 + θ_2) - 2 sin (θ_1+θ_4) 1+ e^1+ cos(θ_1 + θ_4). We study smoothness of F_1 by setting F_1() = f_-(θ_1+θ_2)-f_+(θ_1+θ_4), where f_-(t):=2 sin t 1+e^1-cost, f_+(t):=2sin t 1+e^1+cost. Note that max_t ∈ [0, π / 2] |f_-'(t)| = 1, max_t ∈ [0, π / 2] |f_+'(t)|= C, for some constant C∈ (0, 1). Then for = (θ_1, θ_2, θ_3, θ_4), ' = (θ'_1, θ'_2, θ'_3, θ'_4), |F_1(')-F_1()| ≤ |f_-(θ'_1+θ'_2)-f_-(θ_1+θ_2)|+|f_+(θ'_1+θ'_4)-f_+(θ_1+θ_4)| ≤ 1 · |θ'_1+θ'_2-θ_1-θ_2|+C · |θ'_1+θ'_4-θ_1-θ_4| ≤ 2(1+C) ' - . In the same way, we can define the functions F_2, F_3, F_4 all having Lipschitz constant 2(1+C). As we define F()=(F_1(), F_2(), F_3(), F_4()), it has Lipschitz constant 4(1+C) satisfying that 𝔼['-|]=η 6 F()+O(ηϵ̃), where Big O(·) is applied elementwise to the vector, denoting that each element follows O(·) independently. From Lipschitzness of F, for any t ≥ 1, 𝔼 [^(t) - ^(t-1)|^(t-1)] ≤η 6F(^(t-1)) + O(ηϵ̃ ) ≤η 6F(^(0))+ η 6F(^(t-1))-F(^(0))+O(ηϵ̃) ≤η 6F(^(0))+ 2η(1+C) 3^(t-1)-^(0)+O(ηϵ̃). By taking expecations for both sides, 𝔼[ ^(t) - ^(t-1)] ≤η 6F(^(0))+ 2η(1+C) 3𝔼[^(t-1)-^(0)] +O(ηϵ̃). Applying the triangle inequality, ^(t)-^(0)≤^(t)-^(t-1)+^(t-1)-^(0), we further deduce that 𝔼[^(t) - ^(0)] ≤(1+2η (1+C) 3) 𝔼[ ^(t-1)-^(0)] +( ηF(^(0))/6 + O(ηϵ̃) ). Setting Γ = 3/2η(1+C)( ηF(^(0))/6 + O(ηϵ̃) ), we can write 𝔼 [ ^(t) - ^(0) + Γ ] ≤(1+2η (1+C) 3) 𝔼[^(t-1)-^(0) + Γ], Thus, with constant T>T_SGD, 𝔼 [ ^(T_SGD) - ^(0) + Γ ] ≤(1+2η (1+C) 3)^T_SGDΓ ≤(1+2η (1+C) 3 T_SGD) Γ + T O(η^2 Γ). By Taylor expansion of F_1 near ϵ≈ 0: F_1(ϵ, ϵ, ϵ, ϵ)=2(e^2-1) e^2+1ϵ+O(ϵ^3), F(^0) = 4(e^2-1) 1+e^2ϵ + O(ϵ^3), we get Γ = e^2-1/(1+C)(e^2+1)ϵ +O(ϵ^3+ϵ̃) = O(ϵ+ϵ̃). Since 𝔼[^(T_SGD) - ^(0)] ≥ 2(π 4 - ρ - ϵ), 2η(1+C) Γ/3 T_SGD ≥𝔼[^(T_SGD) - ^(0)]+O(η^2 (ϵ+ϵ̃)) ≥ 2(π 4-ρ-ϵ) + O(η^2 (ϵ+ϵ̃)). Therefore, T_SGD≥3(e^2+1) e^2-1π/4-ρ-ϵ+O(η^2 (ϵ+ ϵ̃)) ηϵ+O(η (ϵ^3+ ϵ̃)). To simply compare the convergence rates of two algorithms, we assumed that there is some constant T such that T_SGD, T_OSGD <T in Theorem  <ref>. 
However, without this assumption, we could still obtain lower bounds of both algorithms as; T_OSGD≥2/log(1+2η)log[ π 4-ρ +O(ϵ^3)/ϵ+O(ϵ^3)], T_SGD≥1/log(1+2(1+C) 3η)log[ 1 C̃π 4-ρ -(1-C̃)ϵ+O(ϵ^3+ϵ̃)/ϵ+O(ϵ^3+ϵ̃)], where C̃= (e^2-1)/2(C+1)(e^2+1), C := max_x ∈ [0, π 2][2 sin x/(1+e^1+ cos x)]', and their approximations are C̃≈ 0.265, C ≈ 0.436. For small enough η, ϵ, ϵ̃, we can observe OSGD algorithm converges faster than SGD algorithm, if the inequalities are tight. Direct Application of OSGD and its Convergence We now focus exclusively on the convergence of OSGD. We prove Theorem <ref>, which establishes the convergence of an application of OSGD to the mini-batch contrastive learning problem, with respect to the loss function . algorithmosgdalg The direct application of OSGD to our problem For ease of reference, we repeat the following definition: (, ) 1/q∑_j=1^N Bγ_j (__(j), __(j)), γ_j = ∑_l=0^q-1j - 1 lN B - j k - l -1/N B k , where _(j) represents the batch with the j-th largest loss among all possible NB batches, and q, k are parameters for the OSGD. * Define (^(t^⋆), ^(t^⋆)) = ', 'argmin{(', ') + ρ/2(', ') - (^(t^⋆), ^(t^⋆))^2 }. We begin by reffering to Lemma 2.2. in <cit.>, which provides the following equations: (^(t^⋆), ^(t^⋆)) - (^(t^⋆), ^(t^⋆)) = 1/ρ∇_ρ(^(t^⋆), ^(t^⋆)), ∇(^(t^⋆), ^(t^⋆)) ≤∇_ρ(^(t^⋆), ^(t^⋆)). Furthermore, we have that ∇ is ρ_0-Lipschitz in ((B_d(0, 1))^N)^2 by Thm. <ref>. This gives ∇(^(t^⋆), ^(t^⋆)) - ∇(^(t^⋆), ^(t^⋆))≤ρ_0 (^(t^⋆), ^(t^⋆)) - (^(t^⋆), ^(t^⋆)) Therefore, ∇(^(t^⋆), ^(t^⋆)) ≤∇(^(t^⋆), ^(t^⋆)) + ∇(^(t^⋆), ^(t^⋆)) - ∇(^(t^⋆), ^(t^⋆)) ≤∇(^(t^⋆), ^(t^⋆)) + ρ_0 (^(t^⋆), ^(t^⋆)) - (^(t^⋆), ^(t^⋆)) ≤ρ + ρ_0/ρ∇_ρ(^(t^⋆), ^(t^⋆)). As a consequence of Thm <ref>, [∇(^(t^⋆), ^(t^⋆))^2 ] ≤(ρ + ρ_0)^2/ρ^2[∇_ρ(^(t^⋆), ^(t^⋆))^2 ] ≤(ρ + ρ_0)^2/ρ(ρ-ρ_0)( _ρ(^(0), ^(0)) - _ρ) + 8ρ∑_t=0^Tη_t^2/∑_t=0^Tη_t ≤(ρ + ρ_0)^2/ρ(ρ-ρ_0)( (^(0), ^(0)) - ) + 8ρ∑_t=0^Tη_t^2/∑_t=0^Tη_t. Note that _ρ is the minimized value of _ρ, and the last inequality is due to _ρ(^(0), ^(0)) - _ρ≤(^(0), ^(0)) -, because _ρ(^(0), ^(0)) = min_', '{(', ') + ρ/2(', ') - (^(0), ^(0))^2 } ≤(^(0), ^(0)) by putting (', ') = (^(0), ^(0)), and = min_', '{(', ') } ≤min_', '{(', ') + ρ/2(', ') - (, )^2 } = _ρ(, ) for any , , implying that ≤_ρ. We provide details, including proof of theorems and lemmas in the sequel. Consider sampling t^⋆ from [T] with probability (t^⋆ = t) = η_t/(∑_i=0^Tη_i). Then ∀ρ > ρ_0 = 2√(2/B) + 4e^2 / B, we have [∇_ρ(^(t^⋆), ^(t^⋆))^2 ] ≤ρ/ρ-ρ_0( _ρ(^(0), ^(0)) - _ρ) + 8ρ∑_t=0^Tη_t^2/∑_t=0^Tη_t, where _ρ(, ) min_', '{(', ') + ρ/2(', ') - (, )^2 }, and _ρ denotes the minimized value of _ρ. ∇ is ρ_0-Lipschitz in ((B_d(0, 1))^N)^2 by Thm. <ref>. Hence, it is ρ_0-weakly convex by Lem. <ref>. Furthermore, the gradient norm of a mini-batch loss, or ∇_, (__i, __i) is bounded by L = 4. Finally, <cit.> states that the expected value of gradients of the OSGD algorithm is ∇_, (^(t), ^(t)) at each iteration t. Therefore, we can apply <cit.> to the OSGD algorithm to obtain the desired result. Roughly speaking, Theorem <ref> shows that (^(t^⋆), ^(t^⋆)) are close to a stationary point of _ρ. We refer readers to <cit.> which illustrates the role of the norm of the gradient of the Moreau envelope, ∇_ρ(^(t^⋆), ^(t^⋆)), being small in the context of stochastic optimization. We leave the results of some auxiliary theorems and lemmas to Subsection <ref>. §.§ Auxiliaries for the Proof of Theorem <ref> For a square matrix A, we denote its trace by tr(A). 
If matrices A and C are of the same shape, we define the canonical inner product A, C by A, C = ∑_i, j A_ij C_ij = tr(A^⊺ C). Following a pythonic notation, we write A_i, : and A_:, j for the i-th row and j-th column of a matrix A, respectively. The Cauchy–Schwarz inequality for matrices is given by A, C≤AC, where a norm · is a Frobenius norm in matrix i.e. A=(∑_i, j A_ij^2)^1/2. Let A ∈^m × n, C ∈^n × k. Then, AC≤AC. By a basic calculation, we have AC^2 = tr(C^⊺ A^⊺ A C) = tr(CC^⊺ A^⊺ A) = CC^⊺, A^⊺ A≤CC^⊺A^⊺ A. Meanwhile, for any positive semidefinite matrix D, let D = UΛ U^⊺ be a spectral decomposition of D. Then, we have tr(D^2) = tr(U Λ^2 U^⊺) = tr(Λ^2 U^⊺ U) = tr(Λ^2) ≤ (tr(Λ))^2 = (tr(D))^2, where λ_i(D) denotes the i-th eigenvalue of a matrix D. Invoking this fact, we have CC^⊺^2 = tr((CC^⊺)^2) ≤ (tr(CC^⊺))^2 = C^4, or equivalently, CC^⊺≤C^2. Similarly, we have A^⊺ A = A^2. Therefore, we obtain AC^2 ≤CC^⊺A^⊺ A≤A^2 C^2, which means AC≤AC. If ^m × n→ is a function of a matrix X ∈^m × n, we write a gradient of with respect to X as a matrix-valued function defined by (∇_X )_ij = (∂/∂ X)_ij = ∂/∂ X_ij. Then, the chain rule gives d/dt(X) = ⟨dX/dt, ∇_X ⟩ for a scalar variable t. If (, ) is a function of two matrices , ∈^m × n, we define ∇_, as a horizontal stack of two gradient matrices, i.e., ∇_, = (∇_, ∇_). Now, we briefly review some necessary facts about Lipschitz functions. Let f ^d → be a ρ-smooth function, i.e., ∇ f is a ρ-Lipschitz function. Then, f is ρ-weakly convex. For the sake of simplicity, assume f is twice differentiable. We claim that ∇^2 f ≽ -ρ𝕀_d, where 𝕀_d is the d × d identity matrix and A ≽ B means A - B is a positive semidefinite matrix. It is clear that this claim renders f + ρ/2·^2 to be convex. Let us assume, contrary to our claim, that there exists _0 ∈^d with ∇^2 f(_0) ⋡-ρ𝕀_d. Therefore, ∇^2 f(_0) has an eigenvalue λ < -ρ. Denote corresponding eigenvector by , so we have ∇^2 f(_0) = λ, and consider g(ϵ) = ∇ f(_0 + ϵ); the (elementwise) Taylor expansion of g at ϵ = 0 gives ∇ f(_0 + ϵ) = ∇ f(_0) + ϵ∇^2 f(_0) + o(ϵ), which gives ∇ f(_0 + ϵ) - ∇ f(_0)/ϵ = ∇^2 f(_0) + o(ϵ)/ϵ. Taking ϵ→ 0, we obtain ∇ f(_0 + ϵ) - ∇ f(_0)/ϵ≥λ > ρ, which is contradictory to ρ-Lipschitzness of ∇ f. For X ∈^B × B, let us define ^M(X) = 1/B( -2tr(X) + ∑_i=1^B log∑_j=1^B exp(X_ij) + ∑_i=1^B log∑_j=1^B exp(X_ji) ). Using this function, we can write the loss corresponding to a mini-batch of size B by ℒ^M(_^⊺_) = (_, _). We now claim the following: Consider X ∈^B × B, where X_ij≤ 1 for all 1 ≤ i, j ≤ B. Then, ∇_X ^M(X) is bounded by 2√(2/B) and 2e^2 /B^2-Lipschitz. With basic calculus rules, we obtain B ∇_X ^M(X) = -2𝕀_B + P_X + Q_X , where 𝕀_B is the B × B identity matrix and (P_X)_ij = exp (X_ij) / ∑_k=1^B exp (X_ik), (Q_X)_ij = exp (X_ij) / ∑_k=1^B exp (X_kj). From ∑_j P_ij = 1 for all i, it is easy to see that (𝕀_B - P)_i, :^2 ≤ 2. This gives 𝕀_B - P_X^2 ≤ 2B, and similarly 𝕀_B - Q_X^2 ≤ 2B. Therefore, we have B ∇_X ^M(X)≤𝕀_B - P_X + 𝕀_B - Q_X≤ 2√(2B), or equivalently ∇_X ^M(X)≤ 2√(2/B). We now show that ∇_X ^M is 2e^2/B^2-Lipschitz. Define p ^B →^B by (p(x))_i = exp(x_i)/∑_k=1^B exp(x_k). Then, we have ∂/∂ x p(x) = diag(p(x)) - p(x)p(x)^⊺. For x ∈ [-1, 1]^B, we have p(x)_i ≤e^2/B - 1 + e^2 < e^2/B for any i. Thus, 0 ≼∂/∂ x p(x) ≼diag(p(x)) ≼e^2/B𝕀_B, which means p(x) is e^2/B-Lipschitz, i.e., p(x) - p(y)≤e^2/Bx-y for any x, y ∈ [-1, 1]^B. 
Using this fact, we can bound P_X - P_Y for X, Y ∈ [-1, 1]^B × B as follows: P_X - P_Y^2 = ∑_i = 1^B p(X_i, :) - p(Y_i, :)^2 ≤(e^2/B)^2 ∑_i = 1^B X_i, : - Y_i, :^2 = (e^2/B)^2 X - Y^2. Similarly, we have Q_X - Q_Y≤e^2/BX - Y. Summing up, B∇_X ^M(X) - B∇_X ^M(Y)≤P_X - P_Y + Q_X - Q_Y≤2e^2/BX - Y. which renders ∇_X ^M(X) - ∇_X ^M(Y)≤2e^2/B^2X - Y. Recall that (_, _) = ^M(_^⊺_) for _, _∈^d × B (They correspond to embeddings corresponding to a mini-batch ). Using this relation, we can calculate the gradient of with respect to _. Denote E_ij∈^d × B a one-hot matrix, which is a matrix of zero entries except for (i, j) indices being 1, and write G = ∇_X ^M(_^⊺_). Then, ∂/∂(_)_ij(_, _) = ⟨∂ (_^⊺_)/∂__ij, ∇_X ^M(_^⊺_) ⟩ = ⟨ E_ij^⊺_, G ⟩ = tr( _^⊺ E_ij G ) = tr( E_ij (G _^⊺)) = (G _^⊺)_ji = (_ G^⊺)_ij. This elementwise relation means ∂/∂_(_, _) = _ G^⊺ = _ (∇_X ^M(_^⊺_))^⊺, and similarly,∂/∂_(_, _) = _∇_X ^M(_^⊺_). We introduce a simple lemma for bounding the difference between two multiplication of matrices. For A_1, A_2 ∈^m × n and B_1, B_2 ∈^n × k, we have A_1 B_1 - A_2 B_2≤A_1 - A_2B_1 + A_2B_1 - B_2. This follows from a direct calculation and Lemma <ref> A_1 B_1 - A_2 B_2 = A_1 B_1 - A_1 B_2 + A_1 B_2 - A_2 B_2 ≤A_1 (B_1 - B_2) + (A_1 - A_2) B_2 ≤A_1 - A_2B_1 + A_2B_1 - B_2. For any , ∈ (B_d(0, 1))^N and any batch of size B, we have ∇_, (_, _)≤ 4. Suppose _, _∈ (B_d(0, 1))^B, we have ∇__, _(_, _) = (_(∇_X ^M (_^⊺_))^⊺, _∇_X ^M (_^⊺_)) from Eq. (<ref>) and (<ref>). By following the fact that _, _≤√(B) and ∇_X ^M (X)≤ 2√(2/B) (see Lem. <ref>), we get _(∇_X ^M (_^⊺_))^⊺ ≤_∇_X ^M (_^⊺_)≤ 2√(2), and_∇_X ^M (_^⊺_) ≤_∇_X ^M (_^⊺_)≤ 2√(2). Then, ∇__, _(_, _) =√(_(∇_X ^M (_^⊺_))^⊺^2+_∇_X ^M (_^⊺_)^2)≤ 4. Since (_, _) is independent of _[N]∖ and _[N]∖, we have ∇_, (_, _) = ∇__, _(_, _)≤ 4. ∇ (, ) is ρ_0-Lipschitz for , ∈ (B_d(0, 1))^N, or to clarify, ∇ (^1, ^1) - ∇ (^2, ^2) ≤ρ_0 (^1, ^1) - (^2, ^2) for any ^1, ^1, ^2, ^2 ∈ (B_d(0, 1))^N, where ρ_0 = 2√(2/B) + 4e^2/B. Denoting _^i, _^i as parts of ^i, ^i that correspond to a mini-batch , we first show ∇__, _(_^1, _^1) - ∇__, _(_^2, _^2)≤ρ_0 (_^1, _^1) - (_^2, _^2) holds. For any _, _∈ (B_d(0, 1))^B, we have ∇__, _(_, _) = (_(∇_X ^M (_^⊺_))^⊺, _∇_X ^M (_^⊺_)). from Eq. <ref> and Eq. <ref>. Recall Lemma <ref>; for any _^i, _^i ∈ (B_d(0, 1))^B (i = 1, 2), we have ∇_X ^M ((_^i)^⊺_^i) ≤ 2√(2/B)and∇_X ^M ((_^1)^⊺_^1) - ∇_X ^M ((_^2)^⊺_^2) ≤2e^2/B^2(_^1)^⊺_^1 - (_^2)^⊺_^2. We invoke Lemma <ref> and obtain _^1 ∇_X ^M ((_^1)^⊺_^1) - _^2 ∇_X ^M ((_^2)^⊺_^2) ≤_^1 - _^2∇_X ^M ((_^1)^⊺_^1) + _^2∇_X ^M ((_^1)^⊺_^1) - ∇_X ^M ((_^2)^⊺_^2) ≤ 2√(2/B)_^1 - _^2 + 2e^2/B^3/2(_^1)^⊺_^1 - (_^2)^⊺_^2 ≤ 2√(2/B)_^1 - _^2 + 2e^2/B^3/2 (_^1 - _^2_^1 + _^2_^1 - _^2) ≤ (2√(2/B) + 2e^2/B) _^1 - _^2 + (2e^2/B) _^1 - _^2, and similarly _^1 ∇_X (^M ((_^1)^⊺_^1))^⊺ - _^2 ∇_X (^M ((_^2)^⊺_^2))^⊺ ≤ (2e^2/B) _^1 - _^2 + (2√(2/B) + 2e^2/B) _^1 - _^2. Using the fact that (ax + by)^2 + (bx + ay)^2 = (a^2 + b^2)(x^2 + y^2) + 4ab xy ≤ (a + b)^2 (x^2 + y^2) holds for any a, b ≥ 0 and x, y ∈, we obtain ∇(_^1, _^1) - ∇(_^2, _^2)^2 = _^1 ∇_X (^M ((_^1)^⊺_^1))^⊺ - _^2 ∇_X (^M ((_^2)^⊺_^2))^⊺^2 + _^1 ∇_X ^M ((_^1)^⊺_^1) - _^2 ∇_X ^M ((_^2)^⊺_^2)^2 ≤ (2√(2/B) + 4e^2/B)^2 (_^1 - _^2^2 + _^1 - _^2^2) = (2√(2/B) + 4e^2/B)^2 (_^1, _^1) - (_^2, _^2)^2. Restating this with ρ_0 = 2√(2/B) + 4e^2/B, we have ∇(_^1, _^1) - ∇(_^2, _^2)≤ρ_0 (_^1, _^1) - (_^2, _^2). Recall the definition of : (, ) = 1/q∑_jγ_j (__(j), __(j)), where γ_j = ∑_l=0^q-1j - 1 lN B - j k - l -1/N B k and ∑_j γ_j = q. 
For any , ∈ (^d)^N, we can find a neighborhood of (, ) so that value rank of (__i, __i) over i ∈{1, …, N B} does not change, since is ρ_0-Lipschitz. More precisely speaking, we can find a rank that can be accepted by all points in the neighborhood. Therefore, we have ∇_, (, ) = 1/q∑_jγ_j ∇_, (__(j), __(j)), and since __(j) - __(j)≤ -, ∇_, (, ) is locally ρ_0-Lipschitz. Since is smooth, such property is equivalent to -ρ_0 𝕀_N ≼∇^2_, (, ) ≼ρ_0 𝕀_N, where 𝕀_N is the N × N identity matrix. Therefore, is ρ_0-Lipschitz on ((B_d(0, 1))^N)^2. § ALGORITHM DETAILS §.§ Spectral Clustering Method Here, we provide a detailed description of the proposed spectral clustering method (see Sec. <ref>) from Algo. <ref>. Recall that the contrastive loss ℒ^con(U_, V_) for a given mini-batch is lower bounded as the following by Jensen's inequality: (_,_) = -1/B∑_i∈log(e^_i^⊺_i/∑_j=1^N e^_i^⊺_j) -1/B∑_i=∈log(e^_i^⊺_i/∑_j=1^N e^_i^⊺_j) = 1/B{∑_i∈log(1+∑_j∈∖{i}e^_i^⊺(_j-_i))) +∑_i∈log(1+∑_j∈∖{i}e^_i^⊺(_j-_i)))} ≥1/B(B-1){∑_i∈∑_j ∈∖{i}log(1+(B-1)e^_i^⊺(_j-_i))+log(1+(B-1)e^_i^⊺(_j-_i))}, and we consider the graph with N nodes, where the weight between node k and l is defined as w(k,l):= ∑_(i,j)∈{(k,l), (l,k)}log(1+(B-1)e^_i^⊺(_j-_i))+log(1+(B-1)e^_i^⊺(_j-_i)). The proposed method employs the spectral clustering algorithm from <cit.>, which bundles N nodes into N/B clusters. We aim to assign an equal number of nodes to each cluster, but we encounter a problem where varying numbers of nodes are assigned to different clusters. To address this issue, we incorporate an additional step to ensure that each cluster (batch) has the equal number B of positive pairs. This step is to solve an assignment problem <cit.>. We consider a minimum weight matching problem in a bipartite graph <cit.>, where the first partite set is the collection of data points and the second set represents B copies of each cluster center obtained after the spectral clustering. The edges in this graph are weighted by the distances between data points and centers. The goal of the minimum weight matching problem is to assign exactly B data points to each center, minimizing the total cost of the assignment, where cost is the sum of the distances from each data point to its assigned center. This guarantees an equal number of data points for each cluster while minimizing the total assignment cost. A annotated procedure of the method is provided in Algo. <ref>. §.§ Stochastic Gradient Descent (SGD) We consider two SGD algorithms: * SGD with replacement (Algo. <ref>) with k=1 for the theoretical analysis in Sec. <ref>. * SGD without replacement (Algo. <ref>) for experimental results in Sec. <ref>, which is widely employed in practical settings. In the more practical setting where _i = f_θ(_i) and _i = g_ϕ(_i), SGD updates the model parameters θ,ϕ using the gradients 1/k∑_i ∈ S_∇_θ, ϕ(__i, __i) instead of explicitly updating and . §.§ Ordered SGD (OSGD) We consider two OSGD algorithms: * OSGD (Algo. <ref>) with k=NB for the theoretical analysis in Sec. <ref>. * OSGD without replacement (Algo. <ref>) for experimental results in Sec. <ref>, which is implemented for practical settings. In the more practical setting where _i = f_θ(_i) and _i = g_ϕ(_i), OSGD updates the model parameters θ,ϕ using the gradients 1/k∑_i ∈ S_∇_θ, ϕ(__i, __i) instead of explicitly updating and . § EXPERIMENT DETAILS In this section, we describe the details of the experiments in Sec. <ref> and provide additional experimental results. 
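As a complement to Algo. <ref>, the equal-size assignment step described above can be written as a standard linear assignment problem. The sketch below is illustrative only: it assumes the data-point embeddings X and the q = N/B cluster centers returned by spectral clustering, replicates every center B times, and calls SciPy's assignment solver; the variable names are ours.

import numpy as np
from scipy.optimize import linear_sum_assignment

def balanced_labels(X, centers, B):
    # X: (N, d) data points, centers: (q, d) cluster centers, with N = q * B
    rep = np.repeat(centers, B, axis=0)                               # each center duplicated B times
    cost = np.linalg.norm(X[:, None, :] - rep[None, :, :], axis=-1)   # (N, N) point-to-copy distances
    rows, cols = linear_sum_assignment(cost)                          # min total cost, one point per copy
    return cols // B                                                  # labels[i] = cluster of point i

Since every cluster center appears exactly B times among the columns, the matching assigns exactly B points to each cluster while minimizing the total point-to-center distance, as required.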
First, we present histograms of mini-batch counts for different loss values from models trained with different batch selection methods. Next, we provide the results for N∈{4, 16} on the synthetic dataset. Lastly, we explain the details of the experimental settings on real dataset, and provide the results of the retrieval downstream tasks. §.§ Batch Counts: SC method vs. Random Batch Selection We provide additional results comparing the mini-batch counts of two batch selection algorithms: the proposed SC method and random batch selection. The mini-batch counts are based on the mini-batch contrastive loss (_, _). We measure mini-batch losses from ResNet-18 models trained on CIFAR-100 using the gradient descent algorithm with different batch selection methods: (i) SGD (Algo. <ref>), (ii) OSGD (Algo. <ref>), and (iii) the SC method (Algo. <ref>). Fig. <ref> illustrates histograms of mini-batch counts for N/B mini-batches, where N=50000 and B=20. The results show that mini-batches generated through the proposed spectral clustering method tend to contain a higher proportion of large loss values when compared to the random batch selection, regardless of the pre-trained models used. §.§ Synthetic Dataset With the settings from Sec. <ref>, where each column of embedding matrices , is initialized as a multivariate normal vector and then normalized as ‖_i ‖ = ‖_i ‖ = 1, for all i, we provide the results for N∈{4, 16} and d=2N or d=N/2. Fig. <ref> and  <ref> show the results for N=4 and N=16, respectively. We additionally present the results for theoretically unproven cases, specifically for N=8 and d∈{3, 5} (see Fig. <ref>). The results provide empirical evidence that all combinations of mini-batches leads to the optimal solution of full-batch minimization for the theoretically unproven cases. §.§ Real Datasets To demonstrate the practical effectiveness of the proposed SC method, we consider a setting where embeddings are learned by a parameterized encoder. We employ two widely recognized uni-modal mini-batch contrastive learning algorithms: SimCLR <cit.> and SogCLR <cit.>, and integrate different batch selection methods from: (i) SGD (algo. <ref>), (ii) OSGD (algo. <ref>), (iii) SC (algo. <ref>) into these frameworks. We compare the pre-trained models' performances in the retrieval downstream tasks on the corrupted and the original datasets. We conduct the mini-batch contrastive learning with the mini-batch size B=32 using ResNet18-based encoders on CIFAR-100 and Tiny ImageNet datasets. All learning is executed on a single NVIDIA A100 GPU. The training code and hyperparameters are based on the official codebase of SogCLR[https://github.com/Optimization-AI/SogCLR] <cit.>. We use LARS optimizer<cit.> with the momentum of 0.9 and the weight decay of 10^-6. We utilize the learning rate scheduler which starts with a warm-up phase in the initial 10 epochs, during which the learning rate increases linearly to the maximum value η_max=0.075 √(B). After this warm-up stage, we employ a cosine annealing (half-cycle) schedule for the remaining epochs. For OSGD, we employ k=1500, q=150. To expedite batch selection in the proposed SC, we begin by randomly partitioning N positive pairs into kB-sized clusters, using k=40. We then apply the SC method to each kB cluster to generate k mini-batches, resulting in a total of k× (N/kB) = N/B mini-batches. We train models for a total of 100 epochs. Table <ref> presents the top-1 retrieval accuracy on CIFAR-100 and Tiny ImageNet. 
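For reproducibility, the warm-up plus cosine learning-rate schedule described above can be summarized by the short helper below. This is a sketch of our setting only (per-epoch granularity, η_max = 0.075√(B)); the exact implementation follows the SogCLR codebase.

import math

def learning_rate(epoch, total_epochs=100, warmup_epochs=10, batch_size=32):
    eta_max = 0.075 * math.sqrt(batch_size)
    if epoch < warmup_epochs:                                 # linear warm-up to eta_max
        return eta_max * (epoch + 1) / warmup_epochs
    progress = (epoch - warmup_epochs) / (total_epochs - warmup_epochs)
    return 0.5 * eta_max * (1.0 + math.cos(math.pi * progress))   # half-cycle cosine decay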
We measure validation retrieval performance on the original as well as the corrupted datasets. The retrieval task is to find the positive pair of a given image among all candidate images in the validation dataset. We also consider the retrieval task under a harder setting, where various corruptions are applied to each image so that the set of corrupted images can serve as hard negative samples. Table <ref> presents the top-1 retrieval accuracy results on CIFAR-100-C and Tiny ImageNet-C, the corrupted datasets <cit.> designed for robustness evaluation. CIFAR-100-C (Tiny ImageNet-C) contains the same images as CIFAR-100 (Tiny ImageNet), but these images have been altered by 19 (15) different types of corruption (e.g., image noise, blur, etc.). Each type of corruption has five severity levels. We utilize images corrupted at severity level 1. These images tend to be more similar to each other than those corrupted at higher severity levels, which makes it more challenging to retrieve positive pairs among the other images. To perform the retrieval task, we follow three steps: (i) we apply two distinct augmentations to each image to generate positive pairs; (ii) we extract embedding features from the augmented images using the pre-trained models; (iii) we identify the pair of a given augmented image among the augmentations of the 19 (15) corrupted images using the cosine similarity of the embedding vectors. This process is repeated across the 10K CIFAR-100 images (10K Tiny ImageNet images). The top-1 accuracy measures the percentage of retrieved images that match their positive pair, where each pair consists of two differently augmented views stemming from a single image.
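For clarity, the top-1 retrieval metric can be computed from the two sets of view embeddings as in the following sketch (ours; it assumes that row i of Z1 and Z2 are the two views of image i and that retrieval is over all candidates in Z2).

import numpy as np

def top1_retrieval_accuracy(Z1, Z2):
    Z1 = Z1 / np.linalg.norm(Z1, axis=1, keepdims=True)   # cosine similarity via normalized dot products
    Z2 = Z2 / np.linalg.norm(Z2, axis=1, keepdims=True)
    pred = (Z1 @ Z2.T).argmax(axis=1)                     # nearest candidate for every query view
    return float((pred == np.arange(len(Z1))).mean())     # fraction retrieving the true pair

In the corrupted setting, the candidate pool is restricted to the augmentations of the 19 (15) corrupted versions of the query image rather than the whole validation set.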
http://arxiv.org/abs/2307.04407v1
20230710081327
Deep and Decentralized Multi-Agent Coverage of a Target with Unknown Distribution
[ "Hossein Rastgoftar" ]
eess.SY
[ "eess.SY", "cs.SY" ]
Deep and Decentralized Multi-Agent Coverage of a Target with Unknown Distribution Hossein Rastgoftar H. Rastgoftar is with the Department of Aerospace and Mechanical Engineering, University of Arizona, Tucson, AZ, 85721 USA e-mail: [email protected]. August 12, 2023 ===================================================================================================================================================================================== This paper proposes a new architecture for multi-agent systems to cover an unknowingly distributed target fast, safely, and in a decentralized fashion. The inter-agent communication is organized by a directed graph with fixed topology, and we model agent coordination as a decentralized leader-follower problem with time-varying communication weights. Given this problem setting, we first present a method for converting the communication graph into a neural network, where an agent can be represented by a unique node of the communication graph but by multiple neurons of the corresponding neural network. We then apply a mass-centric strategy to train the time-varying communication weights of the neural network in a decentralized fashion, which in turn implies that the observation zone of every follower agent is independently assigned by the follower based on the positions of its in-neighbors. By training the neural network, we can ensure safe and decentralized multi-agent coordination for coverage control. Although the target is unknown to the agent team, we provide a proof of convergence for the proposed multi-agent coverage method. The functionality of the proposed method will be validated by a large-scale multi-copter team covering distributed targets on the ground. Large-Scale Coordination, Multi-Agent Coverage, and Decentralized Control. § INTRODUCTION Multi-agent coverage has received a lot of attention from the control community in recent years. Multi-agent coverage has many applications such as wildfire management <cit.>, border security <cit.>, agriculture <cit.>, and wildlife monitoring <cit.>. A variety of coverage approaches have been proposed by researchers; they are reviewed in Section <ref>. §.§ Related Work Sweep <cit.> and Spiral <cit.> are two available methods for single-vehicle coverage path planning, while the Vehicle Routing Problem <cit.> is widely used for multi-agent coverage path planning. Diffusion-based multi-agent coverage convergence and stability are studied in Ref. <cit.>. Decentralized multi-agent coverage using local density feedback is achieved by applying a discrete-time mean-field model in Ref. <cit.>. Multi-agent coverage conducted by unicycle robots guided by a single leader is investigated in Ref. <cit.>, where the authors propose to decouple the coordination and coverage modes. Adaptive decentralized multi-agent coverage is studied in <cit.>. Ref. <cit.> offers a multiscale analysis of multi-agent coverage control that provides convergence properties in continuous time. Human-centered active sensing of wildfire by unmanned aerial vehicles is studied in Ref. <cit.>. Ref. <cit.> suggests applying the k-means algorithm for planning zone coverage by multiple agents. Reinforcement Learning- (RL-) based multi-agent coverage control is investigated in Refs. <cit.>. The authors in <cit.> use a Voronoi-based approach for covering a distributed target. Voronoi-based coverage in the presence of obstacles and failures is presented as a leader-follower problem in Ref. <cit.>. Ref.
<cit.> experimentally evaluates the functionality of Voronoi-based and other multi-agent coverage approaches in an urban environment. §.§ Contributions This paper develops a method for decentralized multi-agent coverage of a distributed target with an unknown distribution. We propose to define the inter-agent communications by a deep neural network, which we call the coverage neural network, with time-varying weights that are obtained such that coverage convergence is ensured. To this end, the paper establishes specific rules for structuring the coverage neural network and proposes a mass-centric approach to train the network weights, at any time t, that specify the inter-agent communication among the agent team. Although the target is unknown to the agent team, we prove that the weights ultimately converge to the unique values that quantify the target distribution in the motion space. The functionality of the proposed coverage method will be validated by simulating aerial coverage conducted by a team of quadcopter agents. Compared to the existing work, this paper offers the following novel contributions: * The proposed multi-agent coverage approach learns the inter-agent communication weights in a forward manner, as opposed to existing neural network learning, where weights are trained by combining forward and backward iterations. More specifically, the weights input to a hidden layer are assigned based on (i) the outputs of the previous layer and (ii) target data information independently measured by observing the neighboring environment. We provide a proof of convergence for the proposed learning approach. * The paper proposes a method for converting the inter-agent communication graph into a neural network that is used for organizing the agents, structuring the inter-agent communications, and partitioning the coverage domain. * The paper develops a method for decentralized partitioning and coverage of an unknowingly distributed target. This method is more computationally efficient than the available Voronoi-based partitioning methods, which require all agents' positions to determine the search subdomain allocated to each individual agent. §.§ Outline The remainder of the paper is organized as follows: The problem statement and formulation are given in Section <ref>. The paper's methodology is presented in Section <ref>. Assuming every agent is a quadcopter, the multi-agent network dynamics is obtained in Section <ref>, followed by simulation results in Section <ref> and the conclusion in Section <ref>. § PROBLEM STATEMENT AND FORMULATION We consider a team of N agents identified by set 𝒱={1,⋯,N} and classify them into the following three groups: * “boundary” agents, identified by 𝒱_B={1,⋯,N_B}, are distributed along the boundary of the agent team configuration; * a single “core” agent, identified by the singleton 𝒱_C={N_B+1}, is an interior agent whose global position represents the global position of the agent team configuration; and * follower agents, defined by 𝒱_I={N_B+2,⋯,N}, are all located inside the agent team configuration. Note that 𝒱_B, 𝒱_C, and 𝒱_I are disjoint subsets of 𝒱, i.e. 𝒱=𝒱_B⋃𝒱_C⋃𝒱_I. Inter-agent communication among the agents is defined by graph 𝒢(𝒱,ℰ), where ℰ⊂𝒱×𝒱 defines the edges of graph 𝒢 and each edge represents a unique communication link (if (j,i)∈ℰ, then i accesses the position of j∈𝒱). We define 𝒩_i={j∈𝒱:(j,i)∈ℰ}, ∀ i∈𝒱, as the set of in-neighbors of every agent i∈𝒱.
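For concreteness, the in-neighbor sets induced by a directed edge list can be collected as in the brief sketch below (illustrative only; the agent indexing and container choices are ours, not part of the formulation).

def in_neighbor_sets(edges, N):
    # edges: iterable of directed pairs (j, i), meaning agent i accesses the position of agent j
    neighbors = {i: set() for i in range(1, N + 1)}
    for j, i in edges:
        neighbors[i].add(j)
    return neighbors    # neighbors[i] corresponds to the in-neighbor set N_i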
§.§ Neural Network Representation of Inter-Agent Communication Graph 𝒢 is defined such that it can be represented by a deep neural network with M+1 layers, where we use set ℳ={0,⋯,M} to define the layer identification numbers. Set 𝒱 can be expressed as 𝒱=⋃_l∈ℳ𝒱_l where 𝒱_0 through 𝒱_M are disjoint subsets of 𝒱. We use 𝒲_0, 𝒲_1, ⋯, 𝒲_M to identify the neuron of layers 0 through M of the coverage neural network, and 𝒲_l and 𝒱_l are related by 𝒲_l=𝒱_l l∈{0,M} 𝒲_l-1⋃𝒱_l l∈ℳ∖{0,M} , where 𝒲_0=𝒱_0=𝒱_B⋃𝒱_C defines neurons that uniquely represent boundary and core agents. For every neuron i∈𝒲_l at layer l∈ℳ∖{0}, ℐ_i,l∈𝒲_l-1 defines those neurons of 𝒲_l-1 that are connected to i∈𝒲_l. Assuming the agent team forms an n-dimensional configuration in a three-dimensional motion space (n=2,3), we use the following key rules to define ℐ_i,l for every i∈𝒲_l and l∈ℳ∖{0}: |ℐ_i,l|= 1 If i∈𝒲_l-1⋂𝒲_l and l∈ℳ∖{0} n+1 If i∈𝒲_l-𝒲_l-1 and l∈ℳ∖{0} n+1 If i∈𝒲_M 0 If i∈𝒲_0 . We note that 𝒩_i and ℐ_i,l can be related by ⋀_l∈ℳ∖{0}⋀_i∈𝒲_l- 𝒲_l-1(ℐ_i,l=𝒩_i). For better clarification, we consider an agent team with N=26 agents identified by set 𝒱={1,⋯,26} forming a two-dimensional configuration (n=2) shown in Fig. <ref> (a). The inter-agent communications shown in Fig. <ref> (a) can be represented by the neural network of Fig. <ref> (b) with three layers ℳ={0,1,2}, where 𝒲_0={1,⋯,6}, defining the boundary and core leaders, has no in-neighbors 𝒲_2={8,9,10, 12,13,14,16,17,18,20,21,22, 24,25,26} defining followers, each has three in-neighbors. Also, {7,11,15,19,23}∈𝒲_1 each has three in-neighbors but the remaining neurons of {1,⋯,6}, that are repeated in layer 0, each has one in-neighbor. §.§ Differential Activation Function Unlike the available neural network, the activation of the coverage network's neurons are operated differential activation functions given by nonlinear dynamics 𝐱̇_i=𝐟_i(𝐱_i,𝐮_i) 𝐫_i=𝐡_i(𝐱_i) , i∈𝒲_l, l∈ℳ, that is used to model the agent i∈𝒱_l (See Fig. <ref>), where 𝐱_i∈ℝ^n_x,i and 𝐮_i∈ℝ^n_u,i denote the state vector and the control of neuron i, respectively, and 𝐡_i:ℝ^n_x,i→ℝ^3, 𝐟_i:ℝ^n_x,i→ℝ^n_x,i, and 𝐠_i:ℝ^n_x,i→ℝ^n_x,i× n_u,i are smooth functions. The output of neuron i denoted by 𝐫_i∈ℝ^3× 1 is the position of agent i. The input of neuron i is defined by 𝐫_i,d(t) = 𝐩_i (given) i ∈𝒲_0 ∑_j ∈ℐ_i,l w_ij(t)𝐫_j(t) i ∈𝒲_l-𝒲_l-1, l∈ℳ∖{0} where 𝐩_i is a desired constant position for leader agent i ∈𝒲_0. Also, w_i,j(t) > 0 is the time-varying communication weight between i∈𝒲_l and j ∈ℐ_i,l, and satisfies the following constraint: ⋀_l∈ℳ∖{0}⋀_i∈𝒲_l- 𝒲_l-1(∑_j∈ℐ_i,lw_i,j(t)=1), ∀ t. §.§ Objectives Given above problem setting, this paper offers a neural-network-based method for optimal coverage of target set 𝒟 with unknown distribution in a 3-dimensional motion space. To achieve this objective, we assume that positions of boundary leader agents, defined by 𝒲_0∖𝒱_C, are known, and solve the following two main problems: * Problem 1–Abstract Representation of Target: We develop a mass-centric approach in Section <ref> to abstractly represent target by N-N_B+1 position vectors 𝐩_N_B+2 through 𝐩_N that are considered as followers' desired positions. 
* Problem 2–Decentralized Target Acquisition: We propose a forward method to train the communication weights w_i,j(t), and assign control input 𝐮_i, for every agent i∈𝒱 and in-neighbor agent j∈ℐ_i,l, such that actual position 𝐫_i converges to the desired position 𝐩_i in a decentralized fashion, for every i∈𝒱∖𝒲_0, where i∈𝒱∖𝒲_0 does not know global position 𝐩_j(t) of any in-neighbor agent j∈𝒱. Without loss of generality, n is either 2, or 3 because motion space is three-dimensional. More specifically, for ground coverage n=2 and 𝒟 specifies finite number of targets on the ground. § METHODOLOGY The agent team is aimed to cover a zone that is specified by 𝒟={1,⋯,n_d}, where 𝐝_i∈ℝ^3×1 is the position of target i∈𝒟. We also define intensity function 𝒯:𝒟→(0,1] to quantify the intensity of data point i∈𝒟 positioned at i∈𝒟. For development of the neural-networ-based coverage model, we apply the following Definitions and Assumptions: Boundary leader agents form an n-D polytope in ℝ^n, thus, the boundary agents' desired positions must satisfy the following rank condition: rank([ 𝐩_2-𝐩_1 ⋯ 𝐩_N_B-𝐩_1 ]) =n The polytope defined by the boundary agents is called leading polytope. The leading polytope, defined by the boundary agents, can be decomposed into N_L disjoint n-dimensional simplexes all sharing the core node N_B+1∈𝒲_0. We let ℒ={1,⋯,N_L} define all simplex cells of the leading polytope, where 𝒮_i={h_i,1,⋯,h_i,n,N_B+1} defines vertices of simplex cell i∈ℒ, i.e. h_i,1,⋯,h_n,i∈𝒮_i∖{N_B+1}⊂𝒲_0 are the boundary nodes of simplex i∈ℒ. Per Assumption <ref>, we can write 𝒲_0=⋃_i∈ℒ𝒮_i, ⋀_i∈ℒ(rank([ 𝐩_h_i,1-𝐩_N_B+1 ⋯ 𝐩_h_i,n-𝐩_N_B+1 ]) =n). Every agent i∈𝒱∖𝒲_0 has n+1 in-neighbors, therefore, ⋀_l∈ℳ∖{0}⋀_i∈𝒲_l-𝒲_l-1(|ℐ_i,l|=n+1). The in-neighbors of every agent i∈𝒱∖𝒲_0 defined by 𝒩_i={j_1,⋯,j_n+1} forms an n-D simplex. This condition can be formally specified as follows: ⋀_l∈ℳ∖{0}⋀_i∈𝒲_l-𝒲_l-1(rank([ 𝐩_j_2-𝐩_j_1 ⋯ 𝐩_j_n+1-𝐩_j_1 ]) =n). For every agent i∈𝒱∖𝒲_0, 0.99! 𝒞̅_i={∑_j∈ℐ_i,lσ_j𝐩_j:σ_j≥0 and ∑_j∈ℐ_i,lσ_j=1}, i∈𝒲_l-𝒲_l-1, l∈ℳ, 0.99! 𝒞_i(t)={∑_j∈ℐ_i,lσ_j𝐫_j(t):σ_j≥0 and ∑_j∈ℐ_i,lσ_j=1}, i∈𝒲_l-𝒲_l-1, l∈ℳ, define the convex hulls specified by “desired” and “actual” positions of agent i's in-neighbors, respectively. We define 𝒞=⋃_l∈ℳ∖{0}⋃_i∈𝒲_l-𝒲_l-1𝒞̅_i 𝒞=⋃_l∈ℳ∖{0}⋃_i∈𝒲_l-𝒲_l-1𝒞_i(t) specify the coverage zone that enclose all data points defined by set 𝒟. By considering Definition <ref>, we can express set 𝒟 as 𝒟=⋃_i∈ℐ_i,l𝒟̅_i or 𝒟=⋃_i∈ℐ_i,l𝒟_i(t), where 𝒟̅_i={j∈𝒟:𝐝_j∈𝒞̅_i}, is the target set that is “desired” to be searched by follower agent i∈𝒱∖𝒱_0 whereas 𝒟_i(t)={j∈𝒟:𝐝_j∈𝒞_i(t)}, is the subset of 𝒟 that is “actually” searched by follower agent i∈𝒱∖𝒱_0 at time t. Note that 𝒟̅_i and 𝒟_i(t) are enclosed by the convex hulls 𝒞̅_i and 𝒞_i(t), respectively, that are determined by the “desired” and “actual” positions of the agent i∈𝒱∖𝒲_0, respectively. We assume that 𝒟̅_i≠∅ and 𝒟_i(t)≠∅, at any time t, for every i∈𝒱∖𝒲_0. In order to assure that Assumption <ref> is satisfied, we may need to regenerate target set 𝒟, when target data set 𝒟 is scarcely distributed. When this regeneration is needed, we first convert discrete set 𝒟 to discrete set 𝒟'={𝐝=∑_i=1^n_d𝒩(𝐫; 𝐝_i,Σ_i):𝐝_i∈𝒟, 𝐫∈𝒞} where 𝒩(𝐫; 𝐝_i,Σ_i) is a multi-variate normal distribution specified by mean vector 𝐝_i and covariance matrix Σ_i. Then, we regenerate 𝒟 by uniform dicretization of 𝒟. 
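As an aside, the target subsets 𝒟̅_i (and, at run time, 𝒟_i(t)) can be computed by testing whether each target point lies inside the simplex spanned by an agent's n+1 in-neighbor positions. The sketch below is a hypothetical helper for the planar case (n = 2), not the paper's implementation; it solves for the barycentric coordinates of each target point, which is well-posed because the in-neighbor simplices are non-degenerate by assumption.

import numpy as np

def in_simplex(point, vertices, tol=1e-9):
    """True if `point` lies inside the n-D simplex whose n+1 vertices are the rows of `vertices`."""
    V = np.asarray(vertices, dtype=float)      # shape (n+1, n)
    n = V.shape[1]
    # Solve sum_j sigma_j * p_j = point together with sum_j sigma_j = 1.
    A = np.vstack([V.T, np.ones(n + 1)])       # (n+1) x (n+1)
    b = np.append(np.asarray(point, dtype=float), 1.0)
    sigma = np.linalg.solve(A, b)
    return bool(np.all(sigma >= -tol))

def target_subset(targets, vertices):
    """Indices of the target points enclosed by the in-neighbor simplex (the set D-bar_i)."""
    return [k for k, d in enumerate(targets) if in_simplex(d, vertices)]

# Toy example (n = 2): one simplex cell with three vertices and three target points.
tri = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
pts = [(1.0, 1.0), (3.0, 3.0), (0.5, 2.0)]
print(target_subset(pts, tri))  # [0, 2]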
§.§ Abstract Representation of Target Locations

We use the approach presented in Algorithm <ref> to abstractly represent target set 𝒟 by position vectors 𝐩_N_B+2, ⋯, 𝐩_N, given (i) the desired positions of the leader agents, denoted 𝐩_1 through 𝐩_N_B+1, (ii) the edge set ℰ, and (iii) the target set 𝒟, as the input. Note that 𝐩_i is considered the global desired position of follower i∈𝒱_I={N_B+2,⋯,N}, but no follower i∈𝒱∖𝒱_0 knows 𝐩_i. The desired position of every follower agent i∈𝒱_I=𝒱∖𝒲_0 is obtained by

𝐩_i = ∑_h∈𝒟̅_i𝒯(h)𝐝_h / |𝒟̅_i|, ∀ i∈𝒱∖𝒲_0,

where 𝒟̅_i, defined by Eq. (<ref>), is the target data subset enclosed by 𝒞̅_i, defined by Eq. (<ref>). We notice that the desired position of every follower agent i∈𝒱∖𝒲_0 is assigned in a “forward” manner, which in turn implies that 𝒲_l's desired positions are assigned after determining 𝒲_l-1's desired positions, for every l∈ℳ∖{0}. Given the desired positions of every follower agent i∈𝒱∖𝒲_0 and every in-neighbor agent j∈𝒩_i, ϖ_i,j>0 defines the desired communication weight between i∈𝒱∖𝒲_0 and j∈𝒩_i, and is obtained by solving the n+1 linear algebraic equations provided by

𝐩_i=∑_j∈ℐ_i,lϖ_i,j𝐩_j, ∑_j∈ℐ_i,lϖ_i,j=1.

Algorithm <ref> also presents our proposed hierarchical approach for the assignment of the followers' desired communication weights. We define the desired weight matrix 𝐋̅=[L̅_ij]∈ℝ^N× N with (i,j) entry

L̅_ij = ϖ_i,j if i∈𝒱∖𝒲_0 and j∈𝒩_i; -1 if i=j; 0 otherwise.

§.§ Decentralized Target Acquisition

For decentralized coverage, it is necessary that every follower agent i∈𝒱_l=𝒲_l-𝒲_l-1, represented by a neuron in layer l∈ℳ∖{0}, chooses control 𝐮_i∈ℝ^n_u× 1, based on the actual positions of the in-neighbor agents ℐ_i,l, such that 𝐫_i(t) stably tracks 𝐫_i,d(t) defined by Eq. (<ref>). Note that 𝐫_i,d(t) is a linear combination of the in-neighbors' actual positions, for i∈𝒱∖𝒲_0, with (communication) weights that are time-varying and constrained to satisfy equality constraint (<ref>). We use forward training to learn the coverage neural network. This means that the communication weights of layer l∈ℳ∖{0} neurons are assigned before the communication weights of layer l+1∈ℳ∖{0,M} neurons, where the communication weights of neuron i∈𝒱_l=𝒲_l-𝒲_l-1 are learned by solving a quadratic program. Let

𝐫̅_i(t) = ∑_h∈𝒟_i(t)𝒯(h)𝐝_h(t) / |𝒟_i(t)|, i∈𝒱_l, l∈ℳ∖{0},

denote the centroid of the subset 𝒟_i(t)⊂𝒟, where 𝒟_i(t) is obtained from Eq. (<ref>). Then, the followers' communication weights are determined by minimizing

min_{w_i,h(t)} ‖∑_h∈ℐ_i,l w_i,h(t)𝐫_h(t) - 𝐫̅_i(t)‖^2

subject to equality constraint (<ref>). We define the weight matrix 𝐋=[L_ij]∈ℝ^N× N with (i,j) entry

L_ij = w_i,j if i∈𝒱∖𝒲_0 and j∈𝒩_i; -1 if i=j; 0 otherwise.

Assume every agent i∈𝒱 chooses control input 𝐮_i such that 𝐫_i(t) asymptotically tracks 𝐫_i,d(t). Then, 𝐫_i(t) asymptotically converges to the desired position 𝐩_i for every i∈𝒱.

If every agent j∈𝒲_0 asymptotically tracks 𝐫_j,d(t), then the actual position 𝐫_j converges to 𝐩_j because 𝐫_j,d(t)=𝐩_j is constant per Eq. (<ref>). Then, for every i∈𝒲_1, the vertices of the simplex 𝒞_i(t), belonging to 𝒲_0, asymptotically converge to the vertices of 𝒞̅_i, where 𝒞̅_i and 𝒞_i enclose the target data subsets 𝒟̅_i and 𝒟_i, respectively. This implies that 𝐫_i,d(t), defined as the centroid of 𝒟_i(t), asymptotically converges to 𝐩_i for every i∈𝒲_1. By extending this logic, we can say that this convergence is propagated through the feedforward network 𝒢(𝒱,ℰ).
As the result, for every agent i∈𝒲_l and layer l∈ℳ∖{0}, vertices of simplex 𝒞_i(t) asymptotically converge the vertices of 𝒞̅_i which in turn implies that 𝐫_i,d(t) asymptotically converges to 𝐩_i. This also implies that 𝐫_i asymptotically converges to 𝐩_i per the theorem's assumption. § NETWORK DYNAMICS In this section, we suppose that every agent is a quacopter and use the input-state feedback linearization presented in <cit.> and summerized in the Appendix to model quadcopter motion by the fourth-order dynamics (<ref>) in the Appendix. Here, we propose to choose 𝐯_i as follows: 𝐯_i=-k_1,i⃛𝐫_i-k_2,i𝐫̈_i-k_3,i𝐫̇_i+k_4,i(𝐫_i,d(t)-𝐫_i), i∈𝒱, where 𝐫_i,d(t) is defined by Eq. (<ref>). Then, the external dynamics of the quadcopter team is given by <cit.> ddt( [ 𝐘; 𝐘̇; 𝐘̈; ⃛𝐘; ]) =𝐀_MQS[ 𝐘; 𝐘̇; 𝐘̈; ⃛𝐘; ] + 𝐁_MQS[ 𝐑_L; 𝐑̇_L; 𝐑̈_L; ⃛𝐑_L; ] , where 𝐘=vec([ 𝐫_1 ⋯ 𝐫_N ]^T), 𝐑_L=vec([ 𝐩_1 ⋯ 𝐩_N_B+1 ]^T), 𝐋_0=[ 𝐈_N_B+1 0_(N_B+1)×(N-N_B-1) ]^T∈ℝ^N×(N_B+1), 𝐀_MQS= [ 0 𝐈_3N 0 0; 0 0 𝐈_3N 0; 0 0 0 𝐈_3N; 𝐈_3⊗( 𝐊_4 𝐋) -𝐊_3𝐈_3N -𝐊_2𝐈_3N -𝐊_1𝐈_3N ] , 0.99! 𝐁_MQS= [ 0 0 0 0; 0 0 0 0; 0 0 0 0; 𝐈_3⊗( 𝐊_4𝐋_0) 𝐈_3⊗(𝐊_3𝐋_0) 𝐈_3⊗( 𝐊_2𝐋_0) 𝐈_3⊗( 𝐊_1 𝐋_0) ] , j=1,2,3,4, 𝐊_j=diag(k_j,1,⋯,k_j,N), 𝐈_3N∈ℝ^3N× 3N is the identity matrix, and “vec” is the matrix vectorization operator. Note that control gains k_j,i (i∈𝒱 and j=1,2,3,4) are selected such that roots of the characteristic equation |s^4𝐈+s^3𝐊_1𝐋+s^2𝐊_2𝐋+s𝐊_3+𝐊_4|=0 are all located the collective dynamics (<ref>) is stable. § SIMULATION RESULTS We consider an agent team consisting of 57 quadcopters with the reference configuration shown in Fig. <ref>, where we use the model and trajectory control presented in Refs. <cit.> for multi-agent coverage simulation. Here quadcopters 1 through 4 defined by set 𝒱_B={1,2,3,4} are the boundary leader agents; agent 5 defined by singleton 𝒱_C={5} is core leader; and the remaining agents defined by 𝒱_I={6,⋯,57} are followers. The inter-agent communications are directional and shown by blue vectors in Fig. <ref>. The communication graph is defined by 𝒢(𝒱,ℰ) and converted into the neural network shown in Fig. <ref> with four layers, thus, ℳ={0,1,2,3} (M=3), and 𝒱 can be expressed as 𝒱=𝒲_0⋃𝒲_1⋃𝒲_2⋃𝒲_3. In Fig. <ref>, the agents represented by 𝒲_0, 𝒲_1, 𝒲_2, and 𝒲_3 are colored by cyan, red, green, and black, respectively. We apply the proposed coverage algorithm to cover elliptic, multi-circle, and triangular zones, each specified by the corresponding data set 𝒟, where 𝒟 defines 500 data points shown by green spots in Figs. <ref> (a,b,c). As shown, each target set is represented by 52 points positioned at 𝐩_6 through 𝐩_57, where they are obtained by using the approach presented in Section <ref>. These points are shown by red in Figs. <ref> (a,b,c). Figures <ref> shows the components of actual and desired positions of quadcopters 13, 45, and 51 are plotted versus time overt time interval [0,20]s, by solid black and dashed red, respectively. As seen the actual position of these three agents almost reach the designated desired positions at time t=12s. Figure <ref> shows the time-varying communication weights of agent 41 with its in-neighbors defined by 𝒩_41={34,5,32}. As shown, w_41,j(t) converges to its desired value of ϖ_41,j in about 12 seconds for every j∈𝒩_41. § CONCLUSION We proposed a novel neural-network-based approach for multi-agent coverage of a target with unknown distribution. 
We developed a forward approach to train the weights of the coverage neural network such that: (i) the target is represented by a finite number of points, (ii) the multi-agent system quickly and decentralizedly converge to the designated points representing the target distribution. For validation, we performed a simulation of multi-agent coverage using a team of 57 quadcopters, each of which is represented by at least one neuron of a the coverage neural network. The simulation results verified fast and decentralized convergence of the proposed multi-agent coverage where each quadcopter reached its designated desired position in about 12 seconds. IEEEtran Let x_i, y_i, and z_i denote position components of quadcopter i∈𝒱, and p_i, m_i ψ_i, θ_i, and ψ_i denote the thrust force magnitude, mass, roll, pitch, yaw angles of quadcopter i∈𝒱, and g=9.81m/s^2 be the gravity acceleration. Then, we can use the model developed in <cit.> and present the quadcopter dynamics by 𝐱̇_i=𝐟(𝐱_i,𝐮_i) , where 𝐟(𝐱_i,𝐮_i)=𝐅(𝐱_i)+𝐆(𝐱_i)𝐮_i 0.99! 𝐱_i=[ x_i y_i z_i ẋ_i ẏ_i ż_i ϕ_i θ_i ψ_i ϕ̇_i θ̇_i ψ̇_i p_i ṗ_i ] ^T, 𝐮_i=[ u_1,i u_2,i u_3,i u_4,i ] ^T, 𝐅(𝐱_i)=[ ẋ_i; ẏ_i; ż_i; p_i m(sinϕ_isinψ_i + cosϕ_icosψ_isinθ_i); p_i m(cosϕ_isinψ_isinθ_i- sinϕ_icosψ_i); p_i mcosϕ_icosθ_i-9.81; ϕ̇_i; θ̇_i; ψ̇_i; 0; 0; 0; ṗ_i; 0; ] , 𝐆(𝐱_i)=[ 𝐠_1 𝐠_2 𝐠_3 𝐠_4 ] = [ 0_9× 1 0_9× 3; 0_3× 1 𝐈_3; 0 0_1× 3; 1 0_1× 3; ] , By defining transformation 𝐱_i→(𝐫_i,𝐫̇_i,𝐫̈_i,⃛𝐫_i,ψ_i,ψ̇_i), we can use the input-state feedback linearization approach presented in <cit.> and convert the the quadcopter dynamics to the following external dynamics: ⃜𝐫_i=𝐯_i, ψ̈_i=u_ψ,i, where 𝐯_i is related to the control input of quadcopter i∈𝒱, denoted by 𝐮_i, by <cit.> 𝐯_i=𝐌_1,i𝐮_i+𝐌_2,i, with 𝐌_1,i= [ L_𝐠__1L_𝐟^3x_i L_𝐠__2L_𝐟^3x_i L_𝐠__3L_𝐟^3x_i L_𝐠__4L_𝐟^3x_i; L_𝐠__1L_𝐟^3y_i L_𝐠__2L_𝐟^3y_i L_𝐠__3L_𝐟^3y_i L_𝐠__4L_𝐟^3y_i; L_𝐠__1L_𝐟^3z_i L_𝐠__2L_𝐟^3z_i L_𝐠__3L_𝐟^3z_i L_𝐠__4L_𝐟^3z_i; L_𝐠__1L_𝐟ψ_i L_𝐠__2L_𝐟ψ_i L_𝐠__3L_𝐟ψ_i L_𝐠__4L_𝐟ψ_i; ]∈ℝ^14× 14 , 𝐌_2,i= [ L_𝐟^4x_i L_𝐟^4y_i L_𝐟^4z_i L_𝐟^2ψ_i ] ^T∈ℝ^14× 1 . In this paper, we assume that the desired yaw angle and its time derivative are both zero at any time t, and choose u_ψ,i=-k_5ψ̇_i-k_6ψ_i Therefore, we can assume that ψ_i(t)=0 at any time t, as a result, the quadcopter i∈𝒱 can be modeled by Eq. (<ref>). [ < g r a p h i c s > ] Hossein Rastgoftar an Assistant Professor at the University of Arizona. Prior to this, he was an adjunct Assistant Professor at the University of Michigan from 2020 to 2021. He was also an Assistant Research Scientist (2017 to 2020) and a Postdoctoral Researcher (2015 to 2017) in the Aerospace Engineering Department at the University of Michigan Ann Arbor. He received the B.Sc. degree in mechanical engineering-thermo-fluids from Shiraz University, Shiraz, Iran, the M.S. degrees in mechanical systems and solid mechanics from Shiraz University and the University of Central Florida, Orlando, FL, USA, and the Ph.D. degree in mechanical engineering from Drexel University, Philadelphia, in 2015. His current research interests include dynamics and control, multiagent systems, cyber-physical systems, and optimization and Markov decision processes.
http://arxiv.org/abs/2307.06059v1
20230712102203
Discovery of spectacular quasar-driven superbubbles in red quasars
[ "Lu Shen", "Guilin Liu", "Zhicheng He", "Nadia L. Zakamska", "Eilat Glikman", "Jenny E. Greene", "Weida Hu", "Guobin Mou", "Dominika Wylezalek", "David S. N. Rupke" ]
astro-ph.GA
[ "astro-ph.GA" ]
Discovery of spectacular quasar-driven superbubbles in red quasars

Lu Shen^1,2,3, Guilin Liu^1,2∗, Zhicheng He^1,2∗, Nadia L. Zakamska^4∗, Eilat Glikman^5, Jenny E. Greene^6, Weida Hu^1,2,7, Guobin Mou^8, Dominika Wylezalek^9, David S. N. Rupke^10

^1CAS Key Laboratory for Research in Galaxies and Cosmology, Department of Astronomy, University of Science and Technology of China, Hefei, Anhui 230026, China
^2School of Astronomy and Space Science, University of Science and Technology of China, Hefei 230026, China
^3Department of Physics and Astronomy, Texas A&M University, College Station, TX 77843-4242, USA
^4Department of Physics & Astronomy, The Johns Hopkins University, 3400 North Charles Street, Baltimore, MD 21218, USA
^5Department of Physics, Middlebury College, Middlebury, VT 05753, USA
^6Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544, USA
^7Department of Physics, University of California, Santa Barbara, Santa Barbara, CA 93106, USA
^8Department of Astronomy, School of Physics and Technology, Wuhan University, Wuhan 430072, China
^9Zentrum für Astronomie der Universität Heidelberg, Astronomisches Rechen-Institut, Mönchhofstr 12-14, D-69120 Heidelberg, Germany
^10Department of Physics, Rhodes College, Memphis, TN 38112, USA
^∗To whom correspondence should be addressed; E-mail: [email protected], [email protected], [email protected].
Quasar-driven outflows on galactic scales are a routinely invoked ingredient for galaxy formation models. We report the discovery of ionized gas nebulae surrounding three luminous red quasars at z∼0.4 from Gemini Integral Field Unit (IFU) observations. All these nebulae feature unprecedented pairs of “superbubbles” extending ∼20 kpc in diameter, and the line-of-sight velocity difference between the red- and blue-shifted bubbles reaches up to ∼1200 km s^-1. Their spectacular dual-bubble morphology (in analogy to the Galactic “Fermi bubbles”) and their kinematics provide unambiguous evidence for galaxy-wide quasar-driven outflows, in parallel with the quasi-spherical outflows of similar size from luminous Type-1 and Type-2 quasars at concordant redshift. These bubble pairs manifest themselves as a signpost of the short-lived superbubble “break-out” phase, when the quasar wind drives the bubbles to escape the confinement of the dense environment and plunge into the galactic halo with a high-velocity expansion.

§ INTRODUCTION

In the evolutionary paradigm of feedback from active galactic nuclei (AGN), the merger-driven accretion power in quasars succeeds in driving a powerful wind. This wind is capable of rapidly sweeping away cold gas and dust, clearing the galaxy's future fuel for star formation and turning the system into an unobscured quasar <cit.>. As the most luminous quasars at every epoch, red quasars are a natural place to hunt for energetic outflows, and their morphology, which strongly points to a connection with merging, hints at important clues to quasar/galaxy evolution <cit.>.

In this paper, we present three red quasars at z ∼ 0.4 with unambiguous signatures of superbubble pairs. We conduct IFU mapping of these targets utilizing the Gemini-North Multi-Object Spectrograph (GMOS-N) <cit.> equipped on the Gemini 8 m telescope. The identical setup and analysis strategy have been successfully used to investigate the ionized gas nebulae surrounding 11 Type-2 and 12 Type-1 highly luminous radio-quiet quasars at z ∼ 0.5 <cit.>.
These red quasars are selected from the FIRST-2MASS (F2M) sample by cross-matching the FIRST survey <cit.>, Two Micron All Sky Survey (2MASS<cit.>) and the Guide Star Catalog II (GSC-II) with selection criteria of J - K > 1.7 and R - K > 4.0 <cit.>. For comparison purposes, we further require them to have a WISE luminosity matched to our previously observed Type 1 and 2 quasars (λ L_λ[12μ m] ∼ 10^45-45.9 erg s^-1, see Methods and Table S2), to be non-radio-loud (see calculation on radio loudness in Methods and Table S2), and to situate at similar redshift (z∼0.4). § RESULTS We observe all of our targets with GMOS-N IFU in i band (7050–8500 Å) so as to cover the rest-frame wavelength range 4086–5862 Å that encloses the λ5007 Å emission line. We perform data reduction using the standard Gemini package for IRAF and produce the final science data cubes with a spaxel scale of 0.^''05. We further perform flux calibration, Point Spread Function (PSF) subtraction, and a multi-Gaussian fit to the profile of the λ5007Å emission line in each spaxel (See Methods for details). We characterize the morphology and kinematics of these ionized gas nebulae by mapping the spaxel-by-spaxel distribution of three parameters, following L13b: the integrated emission (denoted Int) to characterize the surface brightness, the median velocity (V_ med, the 50%-th quantile) to characterize the line-of-sight velocity, and the velocity interval that encloses 80% of the total flux (W_80, i.e. the difference between the 10%-th and 90%-th quantiles) to characterize the velocity dispersion. All three red quasars show extended ionized gas nebulae, with each of them having a pair of red-shifted and blue-shifted bubbles. In Fig. 1, Fig. 2 and Fig. 3, we map the distribution of the above-mentioned parameters across the entire quasars and their individual blue- and red-shifted bubbles. We find clear signatures of superbubbles in their well-resolved dual-bubble morphology, which is further aided by the information rendered by kinematics: the narrowness of the line in the outskirts of the bubbles is in line with a geometrically thin shell, and the well-defined double-peak velocity profiles near the galactic centers indicate where the pair of bubbles overlap in projection. These velocity profiles facilitate separation of the emission from the blue- and red-shifted bubbles through meticulous line-fitting, so as to track down each individual bubble in a remarkably clean manner (also see Methods and Fig. S4, S5, S6). The existence of superbubble pairs provides unambiguous evidence for the existence of quasar-driven outflows, in analogy to the Fermi bubbles seen in the Milky Way <cit.>. The fact that the peaks and the velocity gradient lie along the same axes also yields a natural outflow interpretation, as these bubbles are clearly photoionized by the quasar <cit.>. The high-velocity differences and the superposition of red-shifted and blue-shifted bubbles further support an outflow origin and exclude alternative possibilities such as an inflow <cit.> or a rotating galaxy <cit.>. We compare the morphology and kinematic properties of these ionized gas nebulae with those surrounding type 1 and type 2 quasars. To do this, we measure the outflow sizes using the spaxels with >5σ detection. As reported in Table 1, this directly observed radius (R_5σ) ranges from 8.7 to 12.3 kpc. 
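(For reference, the sketch below shows how the three quantile-based quantities defined above — the integrated emission, V_med and W_80 — can be evaluated for a single spaxel from a fitted, noiseless line profile; the double-peaked toy profile is illustrative only and is not our data.)

import numpy as np

def line_diagnostics(v, flux):
    """Integrated emission, v_med and W_80 from a line profile sampled on a uniform velocity grid `v` (km/s)."""
    cdf = np.cumsum(flux)
    cdf = cdf / cdf[-1]                           # normalized cumulative flux
    v10, v50, v90 = np.interp([0.10, 0.50, 0.90], cdf, v)
    integrated = np.sum(flux) * (v[1] - v[0])     # line-integrated emission (arbitrary units)
    return integrated, v50, v90 - v10             # Int, v_med, W_80

# Toy profile: blue- and red-shifted components separated by 800 km/s, the blue one brighter.
v = np.linspace(-2000.0, 2000.0, 4001)
gauss = lambda v0, sigma: np.exp(-0.5 * ((v - v0) / sigma) ** 2)
flux = gauss(-400.0, 150.0) + 0.7 * gauss(+400.0, 150.0)
print(line_diagnostics(v, flux))  # v_med < 0 (blue component brighter); W_80 ~ 1050 km/s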
We also report the “intrinsic” isophotal radius (R_ int) and ellipticity (ϵ_ int) of the best-fit ellipse measured at a surface brightness of 10^-15/(1 + z)^4 erg s^-1 cm^-2 arcsec^-2 (so that the cosmological dimming effect is corrected, following Liu13a). We find R_ int = 8.6 – 11.7 kpc, similar to that of type 1 (⟨ R^T1_int⟩ = 10.7±1.7 kpc) and type 2 quasars (⟨ R^T2_int⟩ = 12.9 ±3.4 kpc). The median ellipticity of these dual-bubble red quasar nebulae is ϵ_ int∼ 0.75, in contrast to the quasi-spherical type 1 and type 2 quasar nebulae with a median of ϵ^T1_int =0.12 and ϵ^T2_int =0.18, respectively. Spatially resolved and highly organized velocity structure is present in every nebula (see Fig. 1, Fig. 2 and Fig. 3), and blue- and red-shifted emission predominantly reside on the opposite sides of the quasar. The maximum projected velocity difference between the red-shifted and the blue-shifted regions is δ v_max = 394–1216 km s^-1, substantially higher than that of type 1 and 2 quasars found in L13b: 83-576 km s^-1 and L14: 89-522 km s^-1, implying for energetic outflows from these red quasars. The median and the maxima of W_80 are comparable to that of type 1 and type 2 quasars (up to ∼1000 km s^-1). An evident belt-like feature is commonly seen in the W_80 maps across the center and perpendicular to the direction of the bubble expansion, where velocity dispersion is higher than the rest of the nebula. This feature is mainly due to the superposition of red- and blue-shifted bubbles in projection (as already mentioned above), while the actual line width of in each individual bubble remains consistently narrow throughout its entire extent (see Methods and Fig. S4, S5, S6; this phenomenon is also seen in our simulation results (Fig. 4). § SIMULATION Previous theoretical<cit.> and observational <cit.> works suggest that high velocity (≳ 10^3 km s^-1) quasar winds may play a critical role in the evolution of their host galaxies. Numerical simulations are an indispensable tool in understanding the effects induced by outflows, which can model and predict nonlinear and complex processes prohibitable to analytical analysis. However, previous simulations have been primarily focused on AGN-driven jets <cit.>, ultra-fast outflows with velocity in the range of 10^3 - 10^4 km s^-1 <cit.>, starburst-driven outflows <cit.>, and more general galactic outflows driven by supernovae and supermassive black holes in the cosmological simulation TNG50 <cit.>. Simulations providing a direct link to observational data remain scarce, though significant progress has been made to combine a solid physical basis and predictability of observations <cit.>. In order to determine the outflow parameters in a manner more accurate than conventional analytical calculations based on oversimplified assumptions, and to help constrain the conditions under which the formation of the observed superbubbles becomes realistic, we conduct a data-oriented two-dimensional (2D) hydrodynamics (HD) simulation. We emphasize that our simulation is limited to being an auxiliary tool for the above purposes, in contrast to those deriving from fundamental physical principles. In our simulation, we assume a typical quasar with a back hole (BH) mass of =10^9 and an Eddington ratio of 0.3, corresponding to a bolometric luminosity of =3.9× 10^46, a value matched to those of our red quasars. We initiate the outflowing motion by artificially placing a nozzle in the galactic center. 
The initial kinetic energy (at the nozzle) is chosen based on the observed power of BAL outflows, which is found to lie in the range of 1–10% of their bolometric luminosities <cit.>. The half-opening angle of the wind and the ionization cones are assumed to be the same (Φ=45^∘). The radiation transfer models in the simulation are adopted following ref.<cit.>. We then employ the software Cloudy<cit.> to obtain the [O III] emission coefficient based on the gas density and ionization parameters from the simulation. Finally, a 3D system is obtained by rotating the 2D simulation around the y-axis and re-projecting the 3D structure to 2D with an assigned inclination angle (see Methods for the details of the simulation setup, initial conditions, and calculations).

To reproduce the observed superbubbles, we find it feasible to maintain the initial power of the quasar wind at a constant P_wind = 3.35×10^45 erg s^-1, i.e. 8.6% of the bolometric luminosity and 2.6% of the Eddington luminosity, lasting over the entire simulation. In Fig. 4, we present the resultant simulated [O III] maps and the corresponding emission line profiles after having the simulation evolve for 12 Myr, where an inclination angle of i=30^∘ is adopted. As seen in the figure, the simulated bubble pair is consistent with our observations, where the peanut-like morphology, the well-organized velocity structures and the central belt-like high-W_80 regions are all reproduced. As an experiment, we render the simulated system to evolve for an additional period of 8 Myr, finding that the simulated nebula gradually grows into a quasi-sphere (see Methods and Fig. S7). This result implies that the lifetime of the observed superbubbles may be as brief as ∼10 Myr, roughly in line with the small fraction of red quasars (∼15–30%) among radio-selected quasars <cit.>.

Hence, our simulation reveals that the formation of such superbubbles requires a continuous energetic wind lasting over ∼12 Myr with a kinetic luminosity of 2.6% of the quasar's Eddington luminosity, and that the bubble pair quickly grows into a quasi-spherical morphology. If the quasar is reasonably assumed to operate with a 10% duty cycle, then the kinetic-to-Eddington luminosity ratio has to scale up by a factor of 10, placing these outflows among the most powerful ones reported hitherto <cit.>. We note that the simulated bubbles do not depend on the lifetime of an individual episode of quasar activity, which is found to be in the range of ∼10^3–10^5 years <cit.>. This is because, as seen in our simulation, the bubbles, once produced, disappear slowly over a much longer timescale. As a result, after the first inflated bubble clears the space inside it, the bubble structure remains in place until the quasar turns on the next time; the engulfed material then quickly reaches the shell of the previous bubble and continues to push it. Alternatively, a quasi-spherical nebula can be obtained by increasing the opening angle of the wind/ionization cone (see Fig. S7), rendering this angle a sensitive parameter in shaping the morphology of the system. Therefore, our simulation raises the possibility that the opening angle of red quasars is smaller than that of type 1 and 2 quasars (Liu13a, Liu13b, Liu14) when they are set to evolve for the same period. Even though our simplistic simulation is incapable of disentangling certain evolutionary and geometric effects, the combination of observation and simulation is beneficial to the understanding of the formation of the spectacular bubble pairs discovered in this work.
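The luminosity ratios quoted above follow from simple arithmetic; the short sketch below merely re-derives the quoted percentages from the numbers given in the text (an Eddington luminosity of ∼1.3×10^47 erg s^-1 for the assumed 10^9 M_⊙ black hole and an Eddington ratio of 0.3).

# Kinetic-to-radiative luminosity ratios quoted in the text (all in erg/s).
L_edd = 1.3e47            # Eddington luminosity for M_BH = 1e9 M_sun
L_bol = 0.3 * L_edd       # Eddington ratio of 0.3 -> ~3.9e46 erg/s
P_wind = 3.35e45          # constant wind power adopted in the simulation

print(f"P_wind / L_bol = {P_wind / L_bol:.1%}")   # ~8.6%
print(f"P_wind / L_Edd = {P_wind / L_edd:.1%}")   # ~2.6%

# With a 10% duty cycle, the wind power while the quasar is on must be ten times larger.
duty_cycle = 0.10
print(f"kinetic-to-Eddington ratio while on: {P_wind / (duty_cycle * L_edd):.1%}")  # ~26%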
Our simulation is consistent with the scenario that when the bubbles have escaped from the confinement of a high-density environment (e.g. a galactic disc), the wind dives rapidly into the galactic halo. Therefore, we conclude that the red quasars reported in this work are caught in a short-lived supper-bubble “break-out” phase on their multi-stage evolutionary track. § DISCUSSION All three red quasars show a pair of superbubbles with a projected spatial extent of ∼20 kpc in diameter and a highly organized velocity structure with a line-of-sight velocity difference of ∼300–1200 km s^-1. We conduct a data-oriented 2D hydrodynamics simulation, revealing that the formation of such super-bubbles requires an energetic wind predicted to be observable in a brief time scale (∼10 Myr). The spectacular bubble morphology provides unambiguous evidence for quasar-driven outflows. The existence of bubbles also offers unique opportunities to measure the energy and momentum, which may be inaccessible for winds in other morphologies <cit.>. Our superbubbles is morphologically similar to the Fermi bubbles in the Milky Way <cit.> and the superbubbles seen in local starburst galaxies <cit.>. Superbubbles have been observed in a handful of low-redshift type-2 quasars: J1356+1026 at z=0.123 with extended X-ray emission (20 kpc) co-spatial with the ionized gas <cit.>, and J1430+1339 at z=0.085 (known as the “Teacup AGN”) with a pair of ∼10 kpc radio bubbles <cit.>, emission bubbles <cit.>, and an arc of X-ray emission tracing the radio and ionized gas emission <cit.>. Recently, a one-sided superbubble is reported to be driven by a non-broad absorption line (non-BAL) quasar at z=0.631, HE 0238-1904, and the associated emission reaches a projected distance of ∼55 kpc <cit.>. In addition to the bubbles themselves, these works suggest that quasar-driven superbubbles can imprint themselves in the X-ray and radio wavelengths when blowing their shells, so that important physical parameters (e.g. temperature and column density of ionized gas) are measurable. Along with future X-ray and radio observations, we expect that the Integral Field Spectrograph (IFS) to be equipped on the Chinese Space Station Telescope (CSST) with a higher spatial resolution of ∼0.^''2 will allow for in-depth scrutinization of the kinematics of these nebulae and superbubbles. Such dusty energetic winds and associated shocks are also expected to be responsible for generating radio emission in red quasars <cit.>. The radio-quiet/intermediate nature of our red quasars (with radio loudness in the range of R ∼ -4.9–4.2; see Table S2 and Methods) supports the scenario that their radio emission may originate, at least partly, from a dusty shocked wind. Our three sample red quasars have a radio power of L_1.4GHz∼ 10^24.6 - 25.0 W Hz^-1 as per their peak flux densities at 1.4 GHz as measured in FIRST (see Table S2). F2M1618 shows a higher radio power (L_ 1.4GHz∼ 10^25.4 W Hz^-1 according to the flux density measured in a larger beam size, which indicates extended radio emission. The current radio data do not allow us to conclude whether the radiatively driven outflows cause shocks that result in synchrotron emission, or potientially existent weak radio jets are driving the outflows, in view of the suggestion that radio jets can play a role in driving outflows even in radio-quiet AGNs <cit.>. Future spatially-resolved radio maps, possibly from JVLA observations, promise to help pin down the origin of the radio emission. 
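For completeness, the radio-loudness measure used here (defined in the Methods as R = log_10(1.4×10^16 L_1.4GHz/L_6μm), with L_1.4GHz in W Hz^-1 and L_6μm in erg s^-1) can be written as a short helper; the input luminosities below are illustrative placeholders rather than the Table S2 values.

import math

def radio_loudness(L_14GHz, L_6um):
    """R = log10(1.4e16 * L_1.4GHz / L_6um), with the thresholds quoted in the Methods."""
    R = math.log10(1.4e16 * L_14GHz / L_6um)
    if R < -4.6:
        label = "radio-quiet"
    elif R <= -3.5:
        label = "radio-intermediate"
    else:
        label = "radio-loud"
    return R, label

# Illustrative (not Table S2) values: L_1.4GHz ~ 1e25 W/Hz, L_6um ~ 3e46 erg/s.
print(radio_loudness(1e25, 3e46))  # (~ -5.3, 'radio-quiet')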
One of our red quasars, F2M0830, shows evidence of an ongoing merger <cit.>. <cit.> find that 12 out of 13 red quasars show recent or ongoing interaction, suggesting that major, gas-rich mergers may be the origin of quasar activities, in line with <cit.>. However, the time scales of merging and outflows are substantially different. Galaxy merging can last for 1–3 Gyr years, and be observable for ∼0.2–2 Gyr, depending on the method used to identify the merger, the gas fraction, separation of the merging galaxies, and their relative orientation <cit.>. However, based on our simulation, we estimate the observable time-scale of the bubble morphology to be ∼10 Myr, substantially briefer than that of a merging event. Another intriguing fact seen in our surface brightness maps is that the red-shifted bubbles are, in general, more luminous in than their blue-shifted counterparts by ∼0.15 dex (Table 1). This is unlikely caused by the dust torus surrounding the central engine, or the galaxy-wide dusty disk of the host galaxy, in which case higher extinction in the red-shifted bubble is expected, instead. On the contrary, we suspect that dust is not uniformly distributed within an individual cloud, but is preferably distributed on the side of the cloud further away from the galaxy center, so that the flux of the blue-shifted component is suppressed. This phenomenological explanation is schematically illustrated in Fig. 5. The higher dustiness at the far side away from the radiation source is potentially due to dust remnants from mergers, or dust distributed along the polar axis extending out to a few hundred parsecs likely associated with dusty outflows <cit.>. Unfortunately, the host galaxies are minimally detected in our IFU data. We expect that JWST mapping in the mid-infrared or campaigns conducted at sub-millimeter wavelengths may reveal the dust distribution in these host galaxies, providing further constraints on this interpretation. It has been predicted theoretically that an outflow with a kinetic luminosity of ∼0.5%-5% of the AGN’s bolometric luminosity can act as an agent of significant feedback <cit.>. Our simulation reveals that the formation of such superbubbles requires kinetic luminosity of 2.6% of the Eddington luminosity, corresponding to 8.6% of the bolometric luminosity, implying for effective feedback at work. In fact, our simulation renders a mass outflow rate of the final quasar wind one order of magnitude higher than that of the initial wind from the nozzle, implying a highly efficient interaction with the ISM that promises to shape the evolution of its host galaxy. It has been suggested that the fast winds from quasars may impact structures within the interstellar medium and deposit energy into the intergalactic medium <cit.>. Future campaigns to be conducted at sub-millimeter wavelengths employing ALMA or NOEMA may place improved constraints on the impact of winds on cold gas distribution, and consequentially on quenching and/or triggering of the star formation activity in the host galaxy. § MATERIALS AND METHODS IFU observation and data analysis In this section, we describe the IFU observations and the processes in IFU data analysis to obtain the maps of line-integrated surface brightness, line-of-sight velocity, and the velocity dispersion of the emission line. In our IFU campaign, we adopt the two-slit mode with a 5^''× 7^'' field of view (corresponding to ∼28×39 kpc^2 for our quasars at z ∼ 0.4). 
The science field of view is sampled by 1000 contiguous 0.^''2 diameter hexagonal lenslets, and simultaneous sky observations are obtained by 500 lenslets located ∼1^' away. The seeing at the time of our observations is ∼0.^''4 (2.1 kpc at z=0.4), as determined by measuring the full-width-half-maximum (FWHM) of the profile of multiple field stars in the acquisition image taken right before the science exposure using psfex <cit.>. All of the targets are observed in i band (7050–8500 Å) so as to cover the rest-frame wavelength range 4086–5862 Å that encloses the emission line. To ensure that none of the important emission lines in this region is severely hindered by the slit gaps, we tune the central wavelength to either 760 or 800 nm, according to their respective redshifts. The employed R400-G5305 grating has a spectral resolution of R = 1918. At the wavelengths of for these three red quasars, this corresponds to a full width at half maximum (FWHM) of ∼167 km s^-1 <cit.>, narrower than our observed line in all cases, rendering the velocity profiles spectrally resolved. For each object, we take two exposures of 1620 sec per each without a spatial offset. Relevant information on our Gemini-GMOS observations is summarized in Table S1. We perform the data reduction using the Gemini package within IRAF version 1.14. The spaxel scale of the final science data cubes is set to be 0.^''05. We flux-calibrate our data using the spectra from the Extended Baryon Oscillation Spectroscopic Survey (eBOSS) for F2M1106 and F2M0830 <cit.>, and the spectra obtained at the W. M. Keck Observatory with the Echellette Spectrograph and Imager <cit.> for F2M0830 <cit.>. We follow the method presented in ref.<cit.>. In short, we scale the IFU data against the eBOSS and ESI spectra by mimicking the observing conditions of these existing spectra. The eBOSS spectra of F2M1106 and F2M0830 are collected by fibres with a 2^'' diameter at a median FWHM_eBOSS of ∼1.5^'' and ∼1.2^'', respectively. In order to mimic the observing condition of eBOSS spectra, the IFU image at each wavelength is convolved with a Gaussian kernel with an FWHM of √(FWHM_eBOSS^2 - seeing^2), where the seeing of IFU data is listed in Table S1. The GMOS spectra between the rest-frame 4980 and 5200 Å are extracted using a 2^''-diameter circular aperture. The ESI spectrum of F2M0830 is taken with an 20^''× 1^'' slit, and placed at an angle of 29^∘ from North to East. The GMOS spectrum is extracted using a 4^''× 1^'' box with the same angle centered on the maximum of the integrated flux of the IFU data cube. The length of this box is smaller than that of ESI due to the field of view of the GMOS IFU. Thus, we compare the resultant spectra to the ESI spectra in the rest-frame wavelength range of 5030–5200 Å, as the continuum flux is dominated by the quasar, which is not affected by the size of the box. Quasar light scattered by the interstellar matter<cit.>, star formation in the quasar host<cit.> and the PSF itself might all contribute to the continuum emission. To focus on the kinematics of nebulae gas, we construct a PSF modeled for each target and subtracted it by scaling to its quasar spectrum from the IFU data cube. The PSF is constructed by interpolating between median images in the two rest-frame wavelength intervals of 4970Å–4980Å and 5030Å–5050Å and normalizing the peak flux of each image to unity. The wavelength intervals are chosen free of and line emission. 
Due to the wavelength coverage of the IFU data of F2M1106 and F2M0830, their PSFs are constructed as the normalized median image in a single wavelength interval of 5030Å–5050Å. The PSFs of the three red quasars are shown in Fig. S1. The quasar spectrum is constructed by removing the recovered red- and blue-shifted [O iii] component in the spectra of the central spaxels. The central spaxel is determined by the maximum of the integrated flux between rest-frame 4900 and 5100 Å. We then reconstruct the combined spectra of the central 5×5 spaxels by fitting a three/four Gaussian model, after removing Fe ii emission. The doublet lines are fitted with the same central velocity and velocity dispersion. The combined and fitted spectra are shown in Fig. S2. The Fe ii emission is determined by fitting to the continua in the rest-frame 5100–5250 Å using the Fe ii template from <cit.> and smoothed using a Gaussian kernel, whose width is one of the free fitting parameters. We find Fe ii emission to be substantial in the central spectrum of F2M1106, as shown in Fig. S2, but undetected in the other two red quasars. To confirm that the Fe ii emission of F2M1106 is dominated by the quasar, we perform the same procedure on each spaxel. The intensity map of Fe ii emission, as shown in Fig. S3, reveal a point source with the same size as the PSF and centered at the central spaxel. Therefore, Fe ii is removed during PSF subtraction. The central spectrum of F2M1106 is not well fitted at <4980Å possibly due to additional Fe and/or the broad wing of the Hβ line. After flux calibration and subtraction of the PSF from the IFU data cubes, we perform a multi-Gaussian fit to the λ5007Å line profile, so that a noiseless model of the line is obtained in every spatial pixel<cit.>. As described in L13b, up to 3 Gaussian components are needed for these fits. The actual number of employed Gaussians is determined by comparing the reduced χ^2 values as a function of the number of components. The uncertainty of the spectrum of each spaxel is the standard deviation of spectra in regions with < 3σ, which is used to calculate the reduced χ^2 values. We then compute the line properties (i.e., intensity, median velocity, velocity dispersion) in every spatial position from the multi-Gaussian fit (instead of the observed profile). In addition, we subtract additional continuum contribution using spectra in λ_rest∼ 5050-5100Å. Only a minimal continuum residual is left after the PSF subtraction due to the spaxel variation of IFU data. Individual Gaussian is then assigned to red- and blue-shifted bubbles according to their mean wavelength. Following ref.<cit.> and L13b, we measure two quantities that characterize the line-of-sight velocity and velocity dispersion of the ionized gas: the median velocity (v_ med) that bisects the total area underneath the emission line profile, and the velocity interval that encloses 80% of the total emission centered at the median velocity (W_80). As noted in previous references, W_80 is more sensitive to the weak broad wings of a non-Gaussian profile, but is similar to FWHM for a Gaussian profile (W_80=1.088× FWHM). Examples of spectra in the wavelength vicinity of in a number of representative spaxels are shown in Fig. S4, Fig. S5, Fig. S6. These spectra are selected across the entire outflow to demonstrate the complexity of the line profile. Bolometric luminosity of red quasars We estimate the bolometric luminosity of our red quasars to guide our simulation. 
This task is nontrivial, as red quasars are dust-obscured at wavelengths from X-ray to optical and maybe even mid-infrared <cit.>. First, we adopt the bolometric luminosity estimated from luminosity at rest-frame 12μm extrapolated from WISE photometry, rendering a bolometric correction factor of 9 <cit.> (see Table S2). The results are in the range of 10^46.5 - 46.8 erg s^-1. However, these bolometric luminosities may be either underestimated, as the adopted standard bolometric correction is derived from unobscured quasars, or overestimated, if other sources (e.g. dust in the host) contribute to the rest-frame 12μm flux as well. An alternative approach is using the luminosity at rest-frame 5100 Å and applying a bolometric correction of BC_5100 = 9.65 <cit.>, along with additional dust extinction correction. We adopt the extinction E(B-V) from ref. <cit.> by fitting a reddened quasar continuum to a full spectrum. The intrinsic shape of quasar continua is assumed to be a Gaussian distribution with fν∝ν^α (See ref. <cit.> for details). This approach leads to resultant bolometric luminosities in the range of 10^46.0 - 46.3 erg s^-1. Hence, we obtain results consistent within a factor of 3-5 from these two methods. Radio-loudness of red quasars Radio-loudness is defined as the ratio of radio to optical emission with radio-loud objects typically possessing powerful collimated radio jets. However, the presence of reddening and extinction at optical wavelengths may render red quasars to be artificially misclassified as radio-loud. Therefore, we adopt the radio-loudness definition following <cit.>: R = log_10(1.4×10^16 L_1.4GHz/L_ 6μ m), where L_1.4GHz is in units of W Hz^-1 and L_ 6μ m in erg/s. The latter is less sensitive to dust extinction but it still probes the quasar continuum. Radio-quiet objects are defined to have R<-4.6, radio-intermediate R = -3.5 ∼ -4.6, and radio-loud R> -3.5. Under this definition, F2M0830 and F2M1106 are radio-quiet, while F2M1618 is radio-intermediate, regardless of radio fluxes from FIRST or from ref. <cit.> in an NVSS-like beam size. All of them are point-like sources in the FIRST images. Only F2M1618 has a major axis larger than the ∼5^'' beam size of FIRST. Hence, our red quasars are radio-selected objects, but are in the radio-quiet/intermediate regime. Their radio emission may originate, at least partly, from shocked winds <cit.>. Simulation setup We conduct a two-dimensional (2D) hydrodynamics (HD) simulation using zeusmp<cit.> code to reproduce the observed quasar outflow, in order to fully understand its physical conditions. In this section, we describe in detail the simulation setup and initial conditions. The simulation is performed in a two-dimensional (2D) polar coordinate system. There are 1750 grid cells in the radial (r) equally spaced in logarithm from r = 0.5 kpc to r = 50 kpc and 600 grid cells equally spaced in the azimuthal from 0 along the Y-axis to π/2 rotate clockwise. The logarithm radial grid provides high resolution to capture the injection of wind in the simulation grid. The azimuthal range is chosen under the assumption of symmetrical wind/bubble, thus, the results in the other three quadrants are the reflection of that in the first quadrants. We adopt a BH mass of =10^9, corresponding to an Eddington luminosity of ∼ 1.3× 10^47, and a typical quasar with Eddington ratio of 0.3, corresponding to a bolometric luminosity of = 3.9× 10^46, which is consistent with those of our red quasars (ν L_bol, 12μ m = 3.2 - 6.5 × 10^46). 
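To make the numerical setup explicit, the following sketch (plain NumPy, with variable names of our own choosing) constructs the logarithmically spaced radial grid, the uniform azimuthal grid, and the assumed central-engine luminosities described above.

import numpy as np

# Grid of the 2D polar simulation domain.
n_r, n_theta = 1750, 600
r_edges = np.logspace(np.log10(0.5), np.log10(50.0), n_r + 1)   # kpc, 0.5-50 kpc, log-spaced
theta_edges = np.linspace(0.0, np.pi / 2.0, n_theta + 1)        # 0 (along the Y-axis) to pi/2

# Assumed central engine.
M_BH = 1.0e9                  # black-hole mass [M_sun]
L_edd = 1.26e38 * M_BH        # Eddington luminosity [erg/s], ~1.3e47
L_bol = 0.3 * L_edd           # Eddington ratio of 0.3 -> ~3.9e46 erg/s

print(f"innermost/outermost radial cell widths: {r_edges[1] - r_edges[0]:.3f} / {r_edges[-1] - r_edges[-2]:.2f} kpc")
print(f"L_Edd = {L_edd:.2e} erg/s, L_bol = {L_bol:.2e} erg/s")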
The initial environment where the wind propagates includes a spherical central dense nebula and a surrounding spherical diffuse ISM. The dense nebula has an average number density of 10^2 cm^-3 within a radius of 2 kpc. Following the M-σ relation <cit.> and accounting for the stellar rotational velocity that remains constant with increasing distance from the galactic center, we use a constant velocity dispersion σ≃ 150 km s^-1 for this initial nebula. Thus, the initial velocity of the nebula is assumed to take the form v_initial = 150 sin(θ)^1/2 km s^-1, rotating around the Z-axis, where θ is the polar angle with respect to the Z-axis (see Fig. S8). The diffuse ISM is assumed to have a number density of n_ism = 10^3 cm^-3 × r_0/r(pc), with r_0 = 1 pc, and an initial velocity of 0. We acknowledge that the spherical initial nebula might not be representative of the host galaxies of red quasars. It has been found from rest-frame visible HST images that the majority of host galaxies of red quasars, including F2M0830, are merging systems <cit.>.

In the next step, we propagate an isotropic outflow from the nuclear region (as a “nozzle”) into the assumed environment. The initial kinetic energy of the wind is chosen based on the observed power of BAL outflows, which is found in general to lie in the range of 1–10% of the bolometric luminosity of quasars <cit.>. These studies motivate us to blow a high-velocity wind with a number density of n_wind = 1.0×10^5 cm^-3 and a velocity of v_wind = 1×10^4 km s^-1, which is launched at a galactocentric distance of 1 pc and then propagates into the pre-setup ISM. We experiment with the initial kinetic energy to produce resultant bubbles in line with the IFU data and find it to be P_wind = 1.12×10^46 erg s^-1; the portion within the ionization cone (0.3 P_wind, see below) corresponds to 8.6% and 2.6% of the assumed bolometric and Eddington luminosities, respectively. As shown in Fig. S8, the size of the outflow expands from about 10 kpc at 10 Myr to 20 kpc at 20 Myr. The expanding speed of the outflow surface (shock wave) is ∼1000 km s^-1, consistent with our observational results. The number density of the bubble surface is 0.1–10 cm^-3.

Since the ionized outflow has been observed to be directional and to appear in dual ionization cones, we choose an opening angle of the ionization cone of Φ = 45^∘ <cit.> in our primary simulation run and Φ = 60^∘ as a comparison. The total power of the wind within the ionization cone is P_wind, Φ≤45^∘ = 0.3 P_wind. We note that only those outflows in the ionization cone are included in the following simulation (also see Fig. 4), since it has been suggested that outflows are mostly observed within the ionization structure <cit.>. We acknowledge that this assumption sets the opening angle of the wind equal to that of the ionization cone. In the general model of quasars <cit.>, the former is determined by the circum-nuclear obscuring material, known as the torus. The observational evidence prefers a larger opening angle of the wind than that of the ionization cone <cit.>. However, in hydrodynamic simulations, the density distribution of the ISM is more important in shaping the morphology of the wind than the opening angle of the wind <cit.>.

The set of hydrodynamic equations for the interaction process is as follows:

dρ/dt + ρ∇·𝐯 = 0,

ρ d𝐯/dt = -∇P - ρ∇Φ,

∂e/∂t + ∇·(e𝐯) = -P∇·𝐯,

where ρ is the density of the gas, e is the internal energy density of the gas, and P = (γ-1)e is the gas pressure. The viscosity and thermal conductivity are not included in the simulation. We also incorporate a radiation transfer recipe in the simulation.
In detail, we include the cooling and heating terms following <cit.>: the Compton heating/cooling rate follows G_ Compton=8.9× 10^-36ξ_X(T_X-4T), the X-ray photoionization heating and recombination cooling rate follows G_X=1.5× 10^-21ξ_X^1/4T^-1/2(1-T/T_X), and the cooling function<cit.> for solar abundance is L_b,l= 2.2× 10^-27T^0.5+2.0× 10^-15T^-1.2 + 2.5 × 10^-24 for T ≥ 1 × 10^5 K, and 2.0 × 10^-31T^2.0 for T < 1 × 10^5 K<cit.>, where T_X is the temperature of X-ray radiation, which is 4 times the Compton temperature T_c. Considering T_c is ∼ 1 × 10^7 K for quasars<cit.>, we here set T_X to be 4 × 10^7 K. The X-ray photoionization parameter is then defined as ξ_X=L_X e^-τ_X(r)/nr^2, where n is the gas density, L_X is the intrinsic X-ray luminosity, and τ_X(r) = M_2 N_Hσ_T is the X-ray optical depth in which the multiplier M_2 is 100 for ξ_X < 10^5 and 1 for ξ_X ≥ 10^5. Finally, we apply the photoionization simulation using cloudy<cit.> to generate the λ5007Å emission coefficient as functions of ionization parameter (U) and gas density (see Fig. S9). The ionization parameter is defined as U = Q_H / (4π r^2 n_H c), where Q_H is the source emission rate of hydrogen-ionizing photons, r is the distance to the absorber from the source, c is the speed of light, and n_H is the hydrogen number density. We adopt the UV-soft Spectral Energy Distribution (SED) template <cit.>, which is commonly used for high-luminosity radio-quiet quasars. We compute a set of models with the ionization parameters in a range of -5 ≤log U ≤ 3 with a step of ΔlogU = 0.1, the gas density in a range of -2 ≤log(n_H) ≤ 7 with a step of Δlog(n_H) =0.1, and a solar metallicity Z = Z⊙. For each grid, we obtain the λ5007Å emission coefficient based on its ionization parameter and gas density from the bubble simulation. Note that the shielding effect due to any foreground cloud is included in the calculation of the ionization parameter. We then multiply the λ5007Å emission coefficient by the length of the grid along the line of sight to obtain the surface brightness of a single grid. The final surface brightness is obtained by accumulating the surface brightness of all grids along the line of sight. To incorporate the viewing angle, we further obtain the 3D  emission coefficient map by rotating the 2D  emission coefficient map around the Y-axis and re-project the 3D  emission coefficient map with an inclination angle i = 30^∘. This inclination angle is chosen to mimic the viewing angle of our red quasars. The results after having the simulation evolve for 12 Myr are shown in Fig. 4. In addition, we allow the simulated system to evolve for an additional period of 8 Myr. The corresponding surface brightness map is shown in the left panel of Fig. S7. As mentioned above, Φ = 60^∘ is adopted as a comparison to the effect of the opening angle of the ionization cone. The surface brightness map of Φ = 60^∘ is shown in the right panel of Fig. S7. Kinetic energy and mass flow of the galactic outflow from the simulation We integrate simulation grids to calculate the kinetic energy and mass flow of the galactic outflow. The kinetic energy of the galactic outflow is calculated as follows: Ė_ out=1/2 ∫_0.5kpc^40kpc∫_0^π∫_0^2π n_ ev^3m_ pr^2sin (θ)drdθ dψ/T, where n_ e is the number density of gas, m_ p is the mass of proton and T is the total evolution time scale in the simulation. We note that we assume the gas is singly ionized, i.e., contains only ionized hydrogen. 
We do not take into account the doubly ionized Helium, which would introduce a small correction of ∼1.09 <cit.>. As shown in the left panel of Fig. S10, the kinetic energy of galactic outflow driven by the central quasar winds is Ė_ out =3.16× 10^43, about 3% of power of quasar winds. The kinetic energy of the bubble shell (where the emission coefficient exceeds 10^-24) is ∼10% of that of the outflow. The mass flow rate of the galactic outflow is calculated as follows: Ṁ_ out=∫_0.5kpc^40kpc∫_0^π∫_0^2π n_ em_ pr^2sin (θ)drdθ dψ/T. The mass flow rate of quasar wind and outflow as functions of evolved time are shown in the right panel of Fig. S10. The mass flow rate of input quasar winds is Ṁ_ wind∼100 . The mass flow rate of outflow is Ṁ_ out∼ 1000 , one order of magnitude higher than that of input quasar winds. The energetic mass outflow from simulation tentatively suggests that the quasar winds might have sufficient interaction with the interstellar medium and might be capable of shaping the evolution of its host galaxy <cit.>. Nevertheless, we acknowledge that the predicted Ṁ_ out might be an upper limit since it is critically determined by the shape of the assumed galaxy. A lower Ṁ_ out might be expected for a disk galaxy with its wind breaking out into a pair of bubbles perpendicular to the disk <cit.>. 10 Tabor1993 G. Tabor, J. Binney, Elliptical galaxy cooling flows without mass drop-out. 263, 323-334 (1993). scannapieco2004 E. Scannapieco, S. P. Oh, Quasar feedback: the missing link in structure formation. Astrophys. J. 608, 62 (2004). DiMatteo2005 T. Di Matteo, V. Springel, L. Hernquist, Energy input from quasars regulates the growth and activity of black holes and their host galaxies. 433, 604-607 (2005). Hopkins2010 P. F. Hopkins, M. Elvis, Quasar feedback: more bang for your buck. 401, 7-14 (2010). Urrutia2008 T. Urrutia, M. Lacy, R. H. Becker, Evidence for Quasar Activity Triggered by Galaxy Mergers in HST Observations of Dust-reddened Quasars. 674, 80-96 (2008). Glikman2015 E. Glikman, B. Simmons, M. Mailly, K. Schawinski, C. M. Urry, M. Lacy, Major Mergers Host the Most-luminous Red Quasars at z -0.5ex~2: A Hubble Space Telescope WFC3/IR Study. 806, 218 (2015). Wylezalek2022 D. Wylezalek, A. Vayner, D. S. N. Rupke, N. L. Zakamska, S. Veilleux, Y. Ishikawa, C. Bertemes, W. Liu, J. K. Barrera-Ballesteros, H.-W. Chen, A. D. Goulding, J. E. Greene, K. N. Hainline, F. Hamann, T. Heckman, S. D. Johnson, D. Lutz, N. Lützgendorf, V. Mainieri, R. Maiolino, N. P. H. Nesvadba, P. Ogle, E. Sturm, First Results from the JWST Early Release Science Program Q3D: Turbulent Times in the Life of a z 3 Extremely Red Quasar Revealed by NIRSpec IFU. 940, L7 (2022). AllingtonSmith2002 J. Allington-Smith, G. Murray, R. Content, G. Dodsworth, R. Davies, B. W. Miller, I. Jorgensen, I. Hook, D. Crampton, R. Murowinski, Integral Field Spectroscopy with the Gemini Multiobject Spectrograph. I. Design, Construction, and Testing. 114, 892-912 (2002). Liu2013a G. Liu, N. L. Zakamska, J. E. Greene, N. P. H. Nesvadba, X. Liu, Observations of feedback from radio-quiet quasars - I. Extents and morphologies of ionized gas nebulae. 430, 2327-2345 (2013). Liu2013b G. Liu, N. L. Zakamska, J. E. Greene, N. P. H. Nesvadba, X. Liu, Observations of feedback from radio-quiet quasars - II. Kinematics of ionized gas nebulae. 436, 2576-2597 (2013). Liu2014 G. Liu, N. L. Zakamska, J. E. Greene, Similarity of ionized gas nebulae around unobscured and obscured quasars. 442, 1303-1318 (2014). Becker1995 R. H. Becker, R. L. 
White, D. J. Helfand, The FIRST Survey: Faint Images of the Radio Sky at Twenty Centimeters. 450, 559 (1995). Skrutskie2006 M. F. Skrutskie, R. M. Cutri, R. Stiening, M. D. Weinberg, S. Schneider, J. M. Carpenter, C. Beichman, R. Capps, T. Chester, J. Elias, J. Huchra, J. Liebert, C. Lonsdale, D. G. Monet, S. Price, P. Seitzer, T. Jarrett, J. D. Kirkpatrick, J. E. Gizis, E. Howard, T. Evans, J. Fowler, L. Fullmer, R. Hurt, R. Light, E. L. Kopan, K. A. Marsh, H. L. McCallon, R. Tam, S. Van Dyk, S. Wheelock, The Two Micron All Sky Survey (2MASS). 131, 1163-1183 (2006). Glikman2007 E. Glikman, D. J. Helfand, R. L. White, R. H. Becker, M. D. Gregg, M. Lacy, The FIRST-2MASS Red Quasar Survey. 667, 673-703 (2007). Glikman2012 E. Glikman, T. Urrutia, M. Lacy, S. G. Djorgovski, A. Mahabal, A. D. Myers, N. P. Ross, P. Petitjean, J. Ge, D. P. Schneider, D. G. York, FIRST-2MASS Red Quasars: Transitional Objects Emerging from the Dust. 757, 51 (2012). Su2010 M. Su, T. R. Slatyer, D. P. Finkbeiner, Giant Gamma-ray Bubbles from Fermi-LAT: Active Galactic Nucleus Activity or Bipolar Galactic Wind? 724, 1044-1082 (2010). Predehl2020 P. Predehl, R. A. Sunyaev, W. Becker, H. Brunner, R. Burenin, A. Bykov, A. Cherepashchuk, N. Chugai, E. Churazov, V. Doroshenko, N. Eismont, M. Freyberg, M. Gilfanov, F. Haberl, I. Khabibullin, R. Krivonos, C. Maitra, P. Medvedev, A. Merloni, K. Nandra, V. Nazarov, M. Pavlinsky, G. Ponti, J. S. Sanders, M. Sasaki, S. Sazonov, A. W. Strong, J. Wilms, Detection of large-scale X-ray bubbles in the Milky Way halo. 588, 227-231 (2020). Harrison2014 C. M. Harrison, D. M. Alexander, J. R. Mullaney, A. M. Swinbank, Kiloparsec-scale outflows are prevalent among luminous AGN: outflows and feedback in the context of the overall AGN population. 441, 3306-3347 (2014). Vayner2021 A. Vayner, N. L. Zakamska, R. A. Riffel, R. Alexandroff, M. Cosens, F. Hamann, S. Perrotta, D. S. N. Rupke, T. S. Bergmann, S. Veilleux, G. Walth, S. Wright, D. Wylezalek, Powerful winds in high-redshift obscured and red quasars. 504, 4445-4459 (2021). Dekel2009 A. Dekel, R. Sari, D. Ceverino, Formation of Massive Galaxies at High Redshift: Cold Streams, Clumpy Disks, and Compact Spheroids. 703, 785-801 (2009). Steidel2010 C. C. Steidel, D. K. Erb, A. E. Shapley, M. Pettini, N. Reddy, M. Bogosavljević, G. C. Rudie, O. Rakic, The Structure and Kinematics of the Circumgalactic Medium from Far-ultraviolet Spectra of z -0.5ex~= 2-3 Galaxies. 717, 289-322 (2010). Reyes2011 R. Reyes, R. Mandelbaum, J. E. Gunn, J. Pizagno, C. N. Lackner, Calibrated Tully-Fisher relations for improved estimates of disc rotation velocities. 417, 2347-2386 (2011). Pelliccia2017 D. Pelliccia, L. Tresse, B. Epinat, O. Ilbert, N. Scoville, P. Amram, B. C. Lemaux, G. Zamorani, HR-COSMOS: Kinematics of star-forming galaxies at z 0.9. 599, A25 (2017). VegaBeltran2001 J. C. Vega Beltrán, A. Pizzella, E. M. Corsini, J. G. Funes, W. W. Zeilinger, J. E. Beckman, F. Bertola, Kinematic properties of gas and stars in 20 disc galaxies. 374, 394-411 (2001). di2005 T. Di Matteo, V. Springel, L. Hernquist, Energy input from quasars regulates the growth and activity of black holes and their host galaxies. Nature 433, 604 (2005). murray2005 N. Murray, E. Quataert, T. A. Thompson, On the maximum luminosity of galaxies and their central black holes: feedback from momentum-driven winds. Astrophys. J. 618, 569 (2005). ciotti2009 L. Ciotti, J. P. Ostriker, D. Proga, Feedback from central black holes in elliptical galaxies. i. 
models with either radiative or mechanical feedback but not both. Astrophys. J. 699, 89 (2009). hopkins2009 P. F. Hopkins, M. Elvis, Quasar feedback: more bang for your buck. Mon. Not. R. Astron. Soc. 401, 7–14 (2009). ostriker2010 J. P. Ostriker, E. Choi, L. Ciotti, G. S. Novak, D. Proga, Momentum driving: which physical processes dominate active galactic nucleus feedback? Astrophys. J. 722, 642 (2010). arav2008 N. Arav, M. Moe, E. Costantini, K. T. Korista, C. Benn, S. Ellison, Measuring column densities in quasar outflows: Vlt observations of qso 2359–1241. Astrophys. J. 681, 954 (2008). moe2009 M. Moe, N. Arav, M. A. Bautista, K. T. Korista, Quasar Outflow Contribution to AGN Feedback: Observations of QSO SDSS J0838+2955. 706, 525-534 (2009). arav2013 N. Arav, B. Borguet, C. Chamberlain, D. Edmonds, C. Danforth, Quasar outflows and agn feedback in the extreme uv: Hst/cos observations of he 0238- 1904. Mon. Not. R. Astron. Soc. 436, 3286–3305 (2013). arav2018 N. Arav, G. Liu, X. Xu, J. Stidham, C. Benn, C. Chamberlain, Evidence that 50% of balqso outflows are situated at least 100 pc from the central source. Astrophys. J. 857, 60 (2018). hamann2011 F. Hamann, N. Kanekar, J. Prochaska, M. Murphy, S. Ellison, A. Malec, N. Milutinovic, W. Ubachs, A high-velocity narrow absorption line outflow in the quasar j212329. 46- 005052.9. Mon. Not. R. Astron. Soc. 410, 1957–1974 (2011). hamann2019 F. Hamann, H. Herbst, I. Paris, D. Capellupo, On the structure and energetics of quasar broad absorption-line outflows. Monthly Notices of the Royal Astronomical Society 483, 1808–1828 (2019). he2019 Z. He, T. Wang, G. Liu, H. Wang, W. Bian, K. Tchernyshyov, G. Mou, Y. Xu, H. Zhou, R. Green, et al., The properties of broad absorption line outflows based on a large sample of quasars. Nature Astronomy 3, 265 (2019). Chen2022 Z. Chen, Z. He, L. C. Ho, Q. Gu, T. Wang, M. Zhuang, G. Liu, Z. Wang, Evidence for the connection between star formation rate and the evolutionary phases of quasars. Nature Astronomy 6, 339-343 (2022). Wagner2012 A. Y. Wagner, G. V. Bicknell, M. Umemura, Driving Outflows with Relativistic Jets and the Dependence of Active Galactic Nucleus Feedback Efficiency on Interstellar Medium Inhomogeneity. 757, 136 (2012). Tanner2022 R. Tanner, K. A. Weaver, Simulations of AGN-driven Galactic Outflow Morphology and Content. 163, 134 (2022). Wagner2013 A. Y. Wagner, M. Umemura, G. V. Bicknell, Ultrafast Outflows: Galaxy-scale Active Galactic Nucleus Feedback. 763, L18 (2013). Schneider2018 E. E. Schneider, B. E. Robertson, Introducing CGOLS: The Cholla Galactic Outflow Simulation Suite. 860, 135 (2018). Nelson2019 D. Nelson, A. Pillepich, V. Springel, R. Pakmor, R. Weinberger, S. Genel, P. Torrey, M. Vogelsberger, F. Marinacci, L. Hernquist, First results from the TNG50 simulation: galactic outflows driven by supernovae and black hole feedback. 490, 3234-3261 (2019). Yuan2018 F. Yuan, D. Yoon, Y.-P. Li, Z.-M. Gan, L. C. Ho, F. Guo, Active Galactic Nucleus Feedback in an Elliptical Galaxy with the Most Updated AGN Physics. I. Low Angular Momentum Case. 857, 121 (2018). borguet2013 B. C. Borguet, N. Arav, D. Edmonds, C. Chamberlain, C. Benn, major contributor to agn feedback: vlt x-shooter observations of s iv balqso outflows. Astrophys. J. 762, 49 (2013). chamberlain2015 C. Chamberlain, N. Arav, C. Benn, Strong candidate for agn feedback: Vlt/x-shooter observations of balqso sdss j0831+ 0354. Mon. Not. R. Astron. Soc. 450, 1085–1093 (2015). Choi2020 H. Choi, K. M. Leighly, D. M. Terndrup, S. C. 
Gallagher, G. T. Richards, Discovery of a Remarkably Powerful Broad Absorption-line Quasar Outflow in SDSS J135246.37+423923.5. 891, 53 (2020). He2022 Z. He, G. Liu, T. Wang, G. Mou, R. Green, W. Bian, H. Wang, L. C. Ho, M. Sun, L. Shen, N. Arav, C. Chen, Q. Wu, H. Guo, Z. Lin, J. Li, W. Yi, Evidence for quasar fast outflows being accelerated at the scale of tens of parsecs. Science Advances 8, eabk3291 (2022). proga2000 D. Proga, J. M. Stone, T. R. Kallman, Dynamics of line-driven disk winds in active galactic nuclei. Astrophys. J. 543, 686 (2000). Ferland2017 G. Ferland, M. Chatzikos, F. Guzmán, M. Lykins, P. Van Hoof, R. Williams, N. Abel, N. Badnell, F. Keenan, R. Porter, et al., The 2017 release of cloudy. Revista mexicana de astronomía y astrofísica 53 (2017). Glikman2018 E. Glikman, M. Lacy, S. LaMassa, D. Stern, S. G. Djorgovski, M. J. Graham, T. Urrutia, L. Lovdal, M. Crnogorcevic, H. Daniels-Koch, C. B. Hundal, M. Urry, E. L. Gates, S. Murray, Luminous WISE-selected Obscured, Unobscured, and Red Quasars in Stripe 82. 861, 37 (2018). Glikman2022 E. Glikman, M. Lacy, S. LaMassa, C. Bradley, S. G. Djorgovski, T. Urrutia, E. L. Gates, M. J. Graham, C. M. Urry, I. Yoon, The WISE-2MASS Survey: Red Quasars Into the Radio Quiet Regime. arXiv e-prints p. arXiv:2204.13745 (2022). Arav2020 N. Arav, X. Xu, T. Miller, G. A. Kriss, R. Plesha, HST/COS Observations of Quasar Outflows in the 500-1050 Å Rest Frame. I. The Most Energetic Outflows in the Universe and Other Discoveries. 247, 37 (2020). Capellupo2014 D. M. Capellupo, F. Hamann, T. A. Barlow, A variable P v broad absorption line and quasar outflow energetics. 444, 1893-1900 (2014). Martini2003 P. Martini, M. W. Regan, J. S. Mulchaey, R. W. Pogge, Circumnuclear Dust in Nearby Active and Inactive Galaxies. I. Data. 146, 353-406 (2003). Shen2021 Y. Shen, Extreme Variability and Episodic Lifetime of Quasars. 921, 70 (2021). Greene2012 J. E. Greene, N. L. Zakamska, P. S. Smith, A Spectacular Outflow in an Obscured Quasar. 746, 86 (2012). Nesvadba2006 N. P. H. Nesvadba, M. D. Lehnert, F. Eisenhauer, A. Gilbert, M. Tecza, R. Abuter, Extreme Gas Kinematics in the z=2.2 Powerful Radio Galaxy MRC 1138-262: Evidence for Efficient Active Galactic Nucleus Feedback in the Early Universe? 650, 693-705 (2006). Sakamoto2006 K. Sakamoto, P. T. P. Ho, D. Iono, E. R. Keto, R.-Q. Mao, S. Matsushita, A. B. Peck, M. C. Wiedner, D. J. Wilner, J.-H. Zhao, Molecular Superbubbles in the Starburst Galaxy NGC 253. 636, 685-697 (2006). Tsai2009 A.-L. Tsai, S. Matsushita, K. Nakanishi, K. Kohno, R. Kawabe, T. Inui, H. Matsumoto, T. G. Tsuru, A. B. Peck, A. Tarchi, Molecular Superbubbles and Outflows from the Starburst Galaxy NGC 2146. 61, 237 (2009). Greene2014 J. E. Greene, D. Pooley, N. L. Zakamska, J. M. Comerford, A.-L. Sun, Extended X-Ray Emission from a Quasar-driven Superbubble. 788, 54 (2014). Harrison2015 C. M. Harrison, A. P. Thomson, D. M. Alexander, F. E. Bauer, A. C. Edge, M. T. Hogan, J. R. Mullaney, A. M. Swinbank, Storm in a “Teacup”: A Radio-quiet Quasar with 10 kpc Radio-emitting Bubbles and Extreme Gas Kinematics. 800, 45 (2015). Keel2015 W. C. Keel, W. P. Maksym, V. N. Bennert, C. J. Lintott, S. D. Chojnowski, A. Moiseev, A. Smirnova, K. Schawinski, C. M. Urry, D. A. Evans, A. Pancoast, B. Scott, C. Showley, K. Flatland, HST Imaging of Fading AGN Candidates. I. Host-galaxy Properties and Origin of the Extended Gas. 149, 155 (2015). Lansbury2018 G. B. Lansbury, M. E. Jarvis, C. M. Harrison, D. M. Alexander, A. Del Moro, A. C. Edge, J. R. 
Mullaney, A. P. Thomson, Storm in a Teacup: X-Ray View of an Obscured Quasar and Superbubble. 856, L1 (2018). Zhao2023 Q. Zhao, J. Wang, Discovery of a Spatially and Kinematically Resolved 55 kpc Scale Superbubble Inflated by an Intermediate-redshift Non-BAL Quasar. 943, L25 (2023). Zakamska2014 N. L. , J. E. Greene, Quasar feedback and the origin of radio emission in radio-quiet quasars. 442, 784-804 (2014). CalistroRivera2021 G. Calistro Rivera, D. M. Alexander, D. J. Rosario, C. M. Harrison, M. Stalevski, S. Rakshit, V. A. Fawcett, L. K. Morabito, L. Klindt, P. N. Best, M. Bonato, R. A. A. Bowler, T. Costa, R. Kondapally, The multiwavelength properties of red QSOs: Evidence for dusty winds as the origin of QSO reddening. 649, A102 (2021). Klindt2019 L. Klindt, D. M. Alexander, D. J. Rosario, E. Lusso, S. Fotopoulou, Fundamental differences in the radio properties of red and blue quasars: evolution strongly favoured over orientation. 488, 3109-3128 (2019). Smith2020 K. L. Smith, M. Koss, R. Mushotzky, O. I. Wong, T. T. Shimizu, C. Ricci, F. Ricci, Significant Suppression of Star Formation in Radio-quiet AGN Host Galaxies with Kiloparsec-scale Radio Structures. 904, 83 (2020). Morganti2015 R. Morganti, T. Oosterloo, J. B. R. Oonk, W. Frieswijk, C. Tadhunter, The fast molecular outflow in the Seyfert galaxy IC 5063 as seen by ALMA. 580, A1 (2015). Wylezalek2018 D. Wylezalek, R. Morganti, Questions and challenges of what powers galactic outflows in active galactic nuclei. Nature Astronomy 2, 181-182 (2018). Hopkins2008 P. F. Hopkins, L. Hernquist, T. J. Cox, D. Kereš, A Cosmological Framework for the Co-Evolution of Quasars, Supermassive Black Holes, and Elliptical Galaxies. I. Galaxy Mergers and Quasar Activity. 175, 356-389 (2008). Treister2012 E. Treister, K. Schawinski, C. M. Urry, B. D. Simmons, Major Galaxy Mergers Only Trigger the Most Luminous Active Galactic Nuclei. 758, L39 (2012). Lotz2008 J. M. Lotz, P. Jonsson, T. J. Cox, J. R. Primack, Galaxy merger morphologies and time-scales from simulations of equal-mass gas-rich disc mergers. 391, 1137-1162 (2008). Lotz2010 J. M. Lotz, P. Jonsson, T. J. Cox, J. R. Primack, The effect of gas fraction on the morphology and time-scales of disc galaxy mergers. 404, 590-603 (2010). GarciaBurillo2021 S. García-Burillo, A. Alonso-Herrero, C. Ramos Almeida, O. González-Martín, F. Combes, A. Usero, S. Hönig, M. Querejeta, E. K. S. Hicks, L. K. Hunt, D. Rosario, R. Davies, P. G. Boorman, A. J. Bunker, L. Burtscher, L. Colina, T. Díaz-Santos, P. Gandhi, I. García-Bernete, B. García-Lorenzo, K. Ichikawa, M. Imanishi, T. Izumi, A. Labiano, N. A. Levenson, E. López-Rodríguez, C. Packham, M. Pereira-Santaella, C. Ricci, D. Rigopoulou, D. Rouan, T. Shimizu, M. Stalevski, K. Wada, D. Williamson, The Galaxy Activity, Torus, and Outflow Survey (GATOS). I. ALMA images of dusty molecular tori in Seyfert galaxies. 652, A98 (2021). Bertin2011 E. Bertin, Astronomical Data Analysis Software and Systems XX, I. N. Evans, A. Accomazzi, D. J. Mink, A. H. Rots, eds. (2011), vol. 442 of Astronomical Society of the Pacific Conference Series, p. 435. Dawson2016 K. S. Dawson, et al., The SDSS-IV Extended Baryon Oscillation Spectroscopic Survey: Overview and Early Data. 151, 44 (2016). Sheinis2002 A. I. Sheinis, M. Bolte, H. W. Epps, R. I. Kibrick, J. S. Miller, M. V. Radovan, B. C. Bigelow, B. M. Sutin, ESI, a New Keck Observatory Echellette Spectrograph and Imager. 114, 851-865 (2002). Zakamska2006 N. L. Zakamska, M. A. Strauss, J. H. Krolik, S. E. Ridgway, G. D. 
Schmidt, P. S. Smith, T. M. Heckman, D. P. Schneider, L. Hao, J. Brinkmann, Type II Quasars from the Sloan Digital Sky Survey. V. Imaging Host Galaxies with the Hubble Space Telescope. 132, 1496-1516 (2006). Letawe2007 G. Letawe, P. Magain, F. Courbin, P. Jablonka, K. Jahnke, G. Meylan, L. Wisotzki, On-axis spectroscopy of the host galaxies of 20 optically luminous quasars at z -0.5ex~0.3. 378, 83-108 (2007). Silverman2009 J. D. Silverman, et al., Ongoing and Co-Evolving Star Formation in zCOSMOS Galaxies Hosting Active Galactic Nuclei. 696, 396-410 (2009). Boroson1992 T. A. Boroson, R. F. Green, The Emission-Line Properties of Low-Redshift Quasi-stellar Objects. 80, 109 (1992). Whittle1985 M. Whittle, The narrow line region of active galaxies - I. Nuclear O III profiles. 213, 1 (1985). Zakamska2008 N. L. Zakamska, L. Gómez, M. A. Strauss, J. H. Krolik, Mid-Infrared Spectra of Optically-Selected Type 2 Quasars. 136, 1607-1622 (2008). Kim2018 D. Kim, M. Im, What makes red quasars red?. Observational evidence for dust extinction from line ratio analysis. 610, A31 (2018). Richards2006 G. T. Richards, M. Lacy, L. J. Storrie-Lombardi, P. B. Hall, S. C. Gallagher, D. C. Hines, X. Fan, C. Papovich, D. E. Vanden Berk, G. B. Trammell, D. P. Schneider, M. Vestergaard, D. G. York, S. Jester, S. F. Anderson, T. Budavári, A. S. Szalay, Spectral Energy Distributions and Multiwavelength Selection of Type 1 Quasars. 166, 470-497 (2006). Shen2011 Y. Shen, G. T. Richards, M. A. Strauss, P. B. Hall, D. P. Schneider, S. Snedden, D. Bizyaev, H. Brewington, V. Malanushenko, E. Malanushenko, D. Oravetz, K. Pan, A. Simmons, A Catalog of Quasar Properties from Sloan Digital Sky Survey Data Release 7. 194, 45 (2011). Stone1992 J. M. Stone, M. L. Norman, ZEUS-2D: A Radiation Magnetohydrodynamics Code for Astrophysical Flows in Two Space Dimensions. I. The Hydrodynamic Algorithms and Tests. 80, 753 (1992). Hayes2006 J. C. Hayes, M. L. Norman, R. A. Fiedler, J. O. Bordner, P. S. Li, S. E. Clark, A. ud-Doula, M.-M. Mac Low, Simulating Radiating and Magnetized Flows in Multiple Dimensions with ZEUS-MP. 165, 188-228 (2006). Kormendy2013 J. Kormendy, L. C. Ho, Coevolution (Or Not) of Supermassive Black Holes and Host Galaxies. 51, 511-653 (2013). he2018 Z. He, A.-L. Sun, N. L. Zakamska, D. Wylezalek, M. Kelly, J. E. Greene, S. B. Rembold, R. Riffel, R. A. Riffel, Morphology of agn emission-line regions in sdss-iv manga survey. 478, 3614–3626 (2018). Riffel2021 R. A. Riffel, O. L. Dors, M. Armah, T. Storchi-Bergmann, A. Feltre, G. F. Hägele, M. V. Cardaci, D. Ruschel-Dutra, A. C. Krabbe, E. Pérez-Montero, N. L. Zakamska, I. C. Freitas, Chemical abundances in Seyfert galaxies - V. The discovery of shocked emission outside the AGN ionization axis. 501, L54-L59 (2021). sutherland1993 R. S. Sutherland, M. A. Dopita, Cooling functions for low-density astrophysical plasmas. 88, 253–327 (1993). mou2017 G. Mou, T. Wang, C. Yang, Numerical study on outflows in seyfert galaxies i: Narrow line region outflows in ngc 4151. The Astrophysical Journal 844, 30 (2017). yu2004 S. Yu. Sazonov, J. Ostriker, R. Sunyaev, Quasars: the characteristic spectrum and the induced radiative heating. 347, 144–156 (2004). dunn2010 J. P. Dunn, M. Bautista, N. Arav, M. Moe, K. Korista, E. Costantini, C. Benn, S. Ellison, D. Edmonds, The quasar outflow contribution to agn feedback: Vlt measurements of sdss j0318-0600. Astrophys. J. 709, 611 (2010). Deharveng2000 L. Deharveng, M. Peña, J. Caplan, R. 
Costero, Oxygen and helium abundances in Galactic Hii regions - II. Abundance gradients. 311, 329-345 (2000). Condon1998 J. J. Condon, W. D. Cotton, E. W. Greisen, Q. F. Yin, R. A. Perley, G. B. Taylor, J. J. Broderick, The NRAO VLA Sky Survey. 115, 1693-1716 (1998). § ACKNOWLEDGMENTS Acknowledgments: We appreciate the informative discussion with Feng Yuan. This project is based on the data obtained with Gemini telescope (programme ID: GN-2014A-Q-19, PI: G. Liu). We thank the scientists and telescope operators at Gemini telescope for their help. This project used GEMINI package the Image Reduction and Analysis Facility (IRAF) that is distributed by the National Optical Astronomy Observatories which is operated by the Association of Universities for Research in Astronomy, Inc. under cooperative agreement with the National Science Foundation. Funding: This work was supported by the research grants from the China Manned Space Project (the 2nd-stage CSST science project: Investigation of small-scale structures in galaxies and forecasting of observations, No. CMS-CSST-2021-A06 and CMS-CSST-2021-A07), the National Natural Science Foundation of China (No. 12273036, 11421303), the Fundamental Research Funds for the Central Universities (No. WK3440000005), the support from Cyrus Chun Ying Tang Foundations, and the lateral fund from Shanghai Astronomical Observatory (No. EF2030220007). L.S. acknowledges the National Natural Science Foundation of China (No. 12003030). Z.H. is supported by National Natural Science Foundation of China (No. 12222304, 12192220 and 12192221). E.G. acknowledges the generous support of the Cottrell Scholar Award through the Research Corporation for Science Advancement. E.G. is grateful to the Mittelman Family Foundation for their generous support. G.M. is supported by National Natural Science Foundation of China (No. 11833007). Author Contributions: L.S. reduced the data, performed the scientific analysis, and led the writing of the manuscript. G.L. performed the data acquisition, early-stage data reduction and analysis, and co-led the manuscript writing. G.L., N.Z., J.G. and E.G. conceived, designed, and initiated the project. G.L. and N.Z. conducted the observations and co-led the scientific analysis and interpretation. Z.H. and N.Z. developed the theoretical aspect of this work. Z.H. conducted numerical simulations, prepared the related content of the manuscript, and co-led the scientific interpretation. G.M. wrote the code for hydrodynamic simulations based on the open source code zeusmp, and contributed part of the simulation-related text. All authors discussed and commented on the content of the paper. Competing interests: The authors declare that they have no competing interests. Data and materials availability: All data needed to evaluate the conclusions in the paper are presented in the paper and/or the Supplementary Materials. § SUPPLEMENTARY MATERIALS Figs. S1 to S10 Tables S1 to S2
http://arxiv.org/abs/2307.04702v1
20230710165949
Vocal Tract Area Estimation by Gradient Descent
[ "David Südholt", "Mateo Cámara", "Zhiyuan Xu", "Joshua D. Reiss" ]
cs.SD
[ "cs.SD", "eess.AS" ]
Articulatory features can provide interpretable and flexible controls for the synthesis of human vocalizations by allowing the user to directly modify parameters like vocal strain or lip position. To make this manipulation through resynthesis possible, we need to estimate the features that result in a desired vocalization directly from audio recordings. In this work, we propose a white-box optimization technique for estimating glottal source parameters and vocal tract shapes from audio recordings of human vowels. The approach is based on inverse filtering and optimizing the frequency response of a waveguide model of the vocal tract with gradient descent, propagating error gradients through the mapping of articulatory features to the vocal tract area function. We apply this method to the task of matching the sound of the Pink Trombone, an interactive articulatory synthesizer, to a given vocalization. We find that our method accurately recovers control functions for audio generated by the Pink Trombone itself. We then compare our technique against evolutionary optimization algorithms and a neural network trained to predict control parameters from audio. A subjective evaluation finds that our approach outperforms these black-box optimization baselines on the task of reproducing human vocalizations. § INTRODUCTION Articulatory synthesis is a type of speech synthesis in which the position and movement of the human articulators, such as the jaw, lips or tongue, are used as control parameters. Because of their inherent interpretability, articulatory features lend themselves well towards fine-grained and flexible user control over the speech synthesizer <cit.>. Articulatory synthesis is typically implemented as a physical model, which simulates the propagation of air pressure waves through the human vocal tract. A large number of such models have been developed over the years <cit.>. Obtaining the articulatory features that control the physical model is not a trivial problem. Area functions of the vocal tract can be directly measured with magnetic resonance imaging (MRI) <cit.> or electromagnetic articulography (EMA) <cit.>. However, these procedures are time-consuming, susceptible to noise and variations, and require access to specialized equipment. It is therefore desirable to recover the articulatory features directly from a given speech signal. In general, this task is known as Acoustic-to-Articulatory Inversion (AAI). Two main strands of research can be identified: one is data-driven AAI, which seeks to develop statistical methods based on parallel corpora of speech recordings and corresponding MRI or EMA measurements <cit.>. The other takes an analysis-by-synthesis approach to AAI, in which numerical methods are developed to both obtain acoustic features from articulatory configurations, and to invert that mapping to perform AAI <cit.>. In this work, we focus on the analysis-by-synthesis approach and consider the specific articulatory features that make up the control parameters of an articulatory synthesizer. The AAI task is then framed as obtaining control parameters such that the synthesizer reproduces a target recording.
This allows a user to reproduce that vocalization with the articulatory synthesizer, and then modify parameters such as vocal tract size, pitch, vocal strain, or vowel placement. Attempts to solve this problem of sound matching, for articulatory synthesis or other types of synthesis, can generally be classified into black-box and white-box methods. Black-box methods do not rely on information about the structure of the synthesizer. A popular approach is to use derivative-free optimization techniques such as genetic algorithms <cit.> or particle swarm optimization <cit.>. These methods are computationally expensive and can take many iterations to converge to a solution. Various deep neural network (DNN) architectures have also been proposed to predict control parameters that match a given sound <cit.>. They require constructing high-quality datasets for training that cover the space of acoustic outputs. White-box methods can improve the sound matching of specific synthesizers by incorporating knowledge of their internal structure. This can be done by reasoning about their underlying physical processes <cit.> or, more recently, making use of auto-differentiation and gradient descent techniques <cit.>. In this work, we propose a gradient-based white-box optimization technique for sound matching vowel sounds with the articulatory synthesizer known as the Pink Trombone (PT)[<https://dood.al/pinktrombone>]. The PT is a web application that uses well-known models of the glottal source and the vocal tract to implement an intuitively controllable vocal synthesizer. Its user interface is depicted in Figure <ref>. Our technique works as follows. First, we decompose a recording into a glottal source signal and an IIR filter with existing inverse filtering methods. We then obtain a vocal tract configuration by minimizing the difference between an analytical formulation of the tract's transfer function <cit.> and the IIR filter with gradient descent. A differentiable implementation of the mapping between control parameters and the vocal tract configuration allows propagation of the error gradient directly to the control parameters. Section <ref> describes the details of our approach. We find that this approach can accurately recover the vocal tract area function on vowel sounds generated by the PT itself. A subjective listening test shows that without requiring any training procedures, the approach outperforms black-box baselines on the task of reproducing real human vocalization. The results of the objective and subjective evaluations are presented in section <ref>. Section <ref> concludes the paper. § METHOD The PT is based on the widely used source-filter model of speech production. The speech output S(z) = G(z)V(z)L(z) is assumed to be the combination of three linear time-invariant (LTI) systems: the glottal flow G, the vocal tract V, and the lip radiation L. The lip radiation is approximated as a first-order differentiator L(z) = 1 - z^-1 and combined with G to form a model of the glottal flow derivative (GFD). Speech is then synthesized by generating a GFD signal (the source) and filtering it through the vocal tract V. In our sound matching approach, a target sound is first decomposed into the GFD source waveform and coefficients for an all-pole filter, using the inverse filtering technique proposed in <cit.>. The control parameters for the PT glottal source are then obtained directly from the GFD waveform. 
We propose an objective function based on the magnitude response of the all-pole filter that allows estimating the control parameters of the vocal tract with gradient descent. The overall method is illustrated in Figure <ref>. The source code is available online[<https://github.com/dsuedholt/vocal-tract-grad>]. §.§ Inverse Filtering To separate target audio into a GFD waveform and a vocal tract filter, we use the Iterative Adaptive Inverse Filtering method based on a Glottal Flow Model (GFM-IAIF) <cit.>. IAIF methods in general obtain gross estimates of G, V and L with low-order LPC estimation, and then iteratively refine the estimates by inverse filtering the original audio with the current filter estimates, and then repeating the LPC estimation at higher orders. GFM-IAIF makes stronger assumptions about the contribution of the glottis G, and uses the same GFD model as the PT synthesizer (compare section <ref>), making it a good choice for our sound matching task. From GFM-IAIF, we obtain an estimate for the vocal tract filter V in the form of N+1 coefficients a_0,… a_N for an all-pole IIR filter: V(z) = 1/∑_i=0^Na_iz^-i This also gives us an estimate of the GFD waveform by inverse filtering the original audio through V, i.e. applying an all-zero FIR filter with feed-forward coefficients b_i=a_i. §.§ Glottal Source Controls The PT uses the popular Liljencrants-Fant (LF) model to generate the GFD waveform. Originally proposed with four parameters <cit.>, the LF model is usually restated in terms of just a single parameter R_d, which is known to correlate well with the perception of vocal effort <cit.>. R_d can be obtained from the spectrum of the GFD. Specifically, <cit.> finds the following linear relationship between R_d and H_1-H_2, the difference between the magnitudes of the first two harmonic peaks of the GFD spectrum (measured in dB): H_1-H_2 = -7.6 + 11.1R_d We estimate the fundamental frequency F_0 using the YIN algorithm <cit.>, and measure the magnitudes of the GFD spectrum at the peaks closest to F_0 and 2· F_0 to calculate H_1-H_2 and thus R_d. However, the PT does not use R_d as a control parameter directly. Instead, it exposes a “Tenseness” parameter T, which relates to R_d as T = 1 - R_d/3. T is clamped to values between 0 and 1, with higher values corresponding to higher perceived vocal effort. Additionally, the PT adds white noise with an amplitude proportional to 1 - √(T) to the GFD waveform, to give the voice a breathy quality at lower vocal efforts. Figure <ref> shows the glottal source at varying Tenseness values. The estimated control parameters F_0 and Tenseness correspond to the horizontal and vertical axes in the PT's “voicebox” UI element, respectively (see Figure <ref>). §.§ Vocal Tract While the glottal source affects voice quality aspects like breathiness and perceived effort, the vocal tract is responsible for shaping the source into vowels and consonants. In the PT, the vocal tract is treated as a sequence of M+1 cylindrical segments, with M=43. The shape of the vocal tract is then fully described by its area function, i.e. the individual segment cross-sectional areas A_0,…, A_M. Noting that A = π(d/2)^2, the area function may equivalently be described by the segment diameters d_0,…,d_M. An additional, similar model of the nasal tract is coupled to the vocal tract at the soft palate. However, for the open vowel sounds that we are considering, the soft palate is closed and the coupling effect is negligible. 
In the PT implementation, the soft palate only opens when parts of the vocal tract are fully constricted, therefore here we focus only on the vocal tract itself. §.§.§ Control Model Directly specifying each segment diameter individually does not make for an intuitive user experience and could easily result in very unrealistic, strongly discontinuous area functions. Instead, the PT implements a tiered control model over the vocal tract based on the model proposed in <cit.>. The control model consists of two tiers. The first tier is a tongue defined by a user-specified diameter t_d and position t_p. The tongue shape is modeled as a sinusoid and modifies a base diameter, representing a neutral area function, into the rest diameter. Figure <ref> illustrates this. The second control tier consists of constrictions that the user can apply to the rest diameter at any position along the vocal tract. Similarly to the tongue, constrictions are defined by an index, a diameter, and a model of how they affect the rest diameter. There are however two differences between the tongue and the constrictions: Firstly, constrictions are optional, while the tongue is always present. Secondly, constrictions can fully close the vocal tract, at which point noise is inserted to model plosives and fricatives. For this work, we consider only open area functions, meaning that we do not allow constrictions to reduce the diameter below a certain threshold. §.§.§ Estimating the Area Function Propagation of the glottal source through the vocal tract is modeled by implementing each cylindrical segment as a bidirectional, half-sample delay. The half-sample delay is achieved by processing the signal at twice the audio sampling rate and adding up adjacent pairs of samples. At the M inner junctions, the change in cross-sectional area leads to reflection and refraction, described by scattering coefficients calculated from the segment areas as k_m = (A_m - A_m-1)/(A_m + A_m-1) for m=1,…, M. This is the well-known Kelly-Lochbaum (KL) model <cit.>. An illustration of a scattering junction is shown in Figure <ref>. The length of the simulated vocal tract results from the number of segments and the sampling rate. Considering a speed of sound in warm air of c ≈ 350 m/s and an audio sampling rate of f_s = 48000 Hz, implementing half-sample delays as unit delays processed at 2· f_s, M + 1 = 44 segments result in a vocal tract length of 44 · 350 / (2·48000) ≈ 0.16 m. This corresponds to the vocal tract of an average adult male <cit.>, giving the PT a male voice. The number of segments and the unit delays are fixed in the PT. The KL model can be implemented more flexibly through e.g. the use of fractional delays <cit.>. An analytical transfer function for the piecewise cylindrical model using unit delays was derived in <cit.>. The formulation can be straightforwardly adapted to half-sample delays by replacing every delay term z^-n with z^-n/2, and then applying an additional factor of 1 + z^-1 to account for the summing of adjacent samples. The transfer function H_KL can then be stated as: H_KL(z) = [(1 + z^-1) z^-(M+1)/2∏^M_m=1(1 + k_m)] / [K_1,1 + K_1,2R_L - R_0(K_2,1 + K_2,2R_L)z^-1] R_0 and R_L are the amount of reflection at the glottis and lips, respectively, and K∈ℝ^2×2 is defined as follows: K = [ K_1,1, K_1,2; K_2,1, K_2,2 ] = ∏_m=1^M [ 1, k_m z^-1; k_m, z^-1 ] We now wish to find the tongue controls and constrictions such that |H_KL| approximates |V|, the magnitude response of the vocal tract recovered by inverse filtering.
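To make the use of this transfer function concrete, the sketch below evaluates |H_KL| on a grid of normalized frequencies directly from an area function. It is an illustrative reimplementation rather than the Pink Trombone or paper code; the default reflection values for R_0 and R_L and all function and variable names are placeholders chosen for the example.

```python
import numpy as np

def kl_transfer_magnitude(areas, omega, r_glottis=0.75, r_lips=-0.85):
    """Evaluate |H_KL(e^{i*omega})| for a piecewise-cylindrical tract.

    areas : cross-sectional areas A_0 ... A_M of the M+1 segments
    omega : 1-D array of normalized angular frequencies in [0, pi)
    r_glottis, r_lips : reflection at glottis (R_0) and lips (R_L);
                        placeholder values, not taken from any source.
    """
    areas = np.asarray(areas, dtype=float)
    k = (areas[1:] - areas[:-1]) / (areas[1:] + areas[:-1])  # k_1 ... k_M
    z_inv = np.exp(-1j * np.asarray(omega))                  # z^{-1} on the unit circle
    M = len(k)

    # chain product K = prod_m [[1, k_m z^-1], [k_m, z^-1]], one 2x2 matrix per frequency
    K = np.broadcast_to(np.eye(2, dtype=complex), (len(z_inv), 2, 2)).copy()
    for km in k:
        step = np.empty((len(z_inv), 2, 2), dtype=complex)
        step[:, 0, 0] = 1.0
        step[:, 0, 1] = km * z_inv
        step[:, 1, 0] = km
        step[:, 1, 1] = z_inv
        K = K @ step

    num = (1 + z_inv) * z_inv ** ((M + 1) / 2) * np.prod(1 + k)
    den = (K[:, 0, 0] + K[:, 0, 1] * r_lips
           - r_glottis * (K[:, 1, 0] + K[:, 1, 1] * r_lips) * z_inv)
    return np.abs(num / den)
```

The loss introduced next compares exactly this magnitude response against the inverse-filtered target |V|.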
In an approach inspired by <cit.>, we now consider the squared error between the log of the magnitude responses for a given angular frequency 0 ≤ω < π: E(ω) = (log_10|H_KL(e^iω)| - log_10|V(e^iω)|)^2 We can then define a loss function that measures how closely a given vocal tract area function matches the recovered vocal tract filter by evaluating the mean squared error over a set of F linearly spaced frequencies: ℒ = 1/F∑_f=0^F-1E(f/Fπ) We can then find the set of controls that minimizes ℒ, meaning that the corresponding area function approximates |V|. A schematic overview of the computation graph is shown in Figure <ref>. § EXPERIMENTS AND RESULTS We first evaluated the performance of our approach on recovering control parameters for sounds generated by the PT itself. These in-domain sounds are guaranteed to be within the possible output space of the PT, and the ground truth parameters are known. We then applied our approach to estimating control parameters for out-of-domain sounds that were not generated by the PT itself. Ground truth parameters that provide an exact match are not known and likely do not exist due to limitations of the model, which makes evaluation challenging. We performed a listening test to compare the quality of our method to previously proposed, model-agnostic black-box sound matching approaches. For all evaluations, parameter ranges were normalized to [0, 1]. Gradient descent was performed for 100 steps, with a step size of 10^-4 and a momentum of 0.9. §.§ Reconstructing PT-generated Audio §.§.§ Setup For the in-domain evaluation, we generated 3000 total sets of control parameters and attempted to recover the vocal tract area. For all examples, F_0 was uniformly sampled from [80, 200], the tenseness from [0, 1], the tongue position t_p from [12, 29] (measured in segments along the tract), and the tongue diameter t_d from [2.05, 3.5]. The range of F_0 roughly covers the pitch range of adult male speech, while the other control parameter ranges cover the range of possible values defined by the PT interface. The parameters were divided in three sets of 1000 examples each. The first set was taken as-is. A random constriction, with position sampled from [0, 43] and diameter sampled from [0.3, 2], was applied to the vocal tract in the second set. Two such independently sampled constrictions were applied in the third set. For each example, we performed the gradient descent optimization twice with different targets: First, with the target response |V| taken directly from the ground truth frequency response (FR) of the original vocal tract. Since this FR is guaranteed to be within the domain of the KL vocal tract model, it should be able to be matched very closely. Second, with the target response |V| recovered by the GFM-IAIF method. This is no longer guaranteed to have an exactly matching vocal tract configuration, so higher deviation is expected. However, since GFM-IAIF and the PT are based on similar assumptions about the source-filter model, the obtained target responses match the ground truth closely enough to be useful in recovering the original control parameters. §.§.§ Results Table <ref> shows the mean absolute error (MAE) for the tongue parameters t_p and t_d for each condition. Additionally, the MAE values for the total area function (i.e. the diameter of each individual segment) and the recovered FR are given. 
In the simple case of optimizing the true FR with no constrictions applied, the original vocal tract area could be recovered with very high accuracy, often to an exact match. Constrictions introduce more degrees of freedom and result in a less accurately recovered area function, although the FR was still matched very closely. Figure <ref> illustrates how visibly different area functions can have very similar frequency responses. This relates to the transfer function in equation (<ref>) not depending on the area directly, but rather on the resulting reflection coefficients in equation (<ref>). The locations of the area function's extrema, i.e. the segments at which the area changes from growing wider to growing more narrow or vice versa, therefore affect the transfer function more strongly than the specific value of a given area segment. Since the FR obtained by GFM-IAIF might not be able to be matched exactly by the KL model, some constrictions might be used during the estimation even if there were none applied to the original vocal tract, leading to deviations from the true area function. An example of this is shown in Figure <ref>. The range of frequencies most affected by this depend on the choice of LPC estimation in GFM-IAIF; as noted in <cit.>, modeling the glottal contribution as a 3^rd order filter is well-motivated by the LF model and gives balanced results in practice. Due to the presence of this error introduced through inverse filtering, applying constrictions to the ground truth area function had a considerably less pronounced effect on the error metrics when the FR obtained by GFM-IAIF is used as the optimization target. Inverse filtering also noticeably affected the estimation of the glottal source parameters. The MAE for the prediction of the tenseness T∈[0, 1] was 0.013 when the original GFD waveform was used, but rose to 0.057 when the GFD waveform was recovered by inverse filtering. Even the accuracy of the YIN fundamental frequency estimator dropped slightly: the MAE for F_0∈[80, 200] was 0.04 on the original GFD waveform, and 0.44 on the recovered GFD waveform. Applying constrictions had no effect on the glottal source parameter estimation. Grouping the MAE values by the number of constrictions result in values deviating less than 0.5% from the reported global MAE values for both T and F_0. §.§ Sound Matching Human Vocalizations §.§.§ Black-Box Baselines To assess the out-of-domain performance, we performed a subjective evaluation comparing our gradient-based approach against three black-box optimization methods that have previously been used for the task of sound matching. Genetic algorithms <cit.> employ a population of candidate solutions, which evolve through generations by applying genetic operators such as selection, crossover, and mutation. The fittest individuals, evaluated through a fitness function, are more likely to reproduce and pass on their traits to offspring. Particle Swarm Optimization (PSO) <cit.> involves a group of candidate solutions, called particles, that move through the search space to find the global optimum. Each particle's position is updated based on its own best-known position, the best-known position within its neighborhood, and a random component, with the goal of balancing exploration and exploitation. For both the genetic algorithm and PSO, scores for a given set of parameters were calculated as the mean squared error between the mel-spectrogram of the target audio, and the audio generated by the PT with the current parameters. 
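A minimal sketch of this fitness score is given below. The `synthesize` callable standing in for Pink Trombone rendering is hypothetical, and the choice of plain mel power (rather than, say, a log-mel representation) is an assumption made for the example rather than a detail stated in the text.

```python
import numpy as np
import librosa

def melspec_mse(target_audio, candidate_params, synthesize, sr=48000):
    """Fitness for the GA / PSO baselines: MSE between mel-spectrograms.

    `synthesize(params, sr)` is a hypothetical stand-in for rendering audio
    from a set of Pink Trombone control parameters.
    """
    candidate_audio = synthesize(candidate_params, sr=sr)
    mel_t = librosa.feature.melspectrogram(y=target_audio, sr=sr)
    mel_c = librosa.feature.melspectrogram(y=candidate_audio, sr=sr)
    # align frame counts defensively before comparing
    n = min(mel_t.shape[1], mel_c.shape[1])
    return float(np.mean((mel_t[:, :n] - mel_c[:, :n]) ** 2))
```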
Neural parameter prediction <cit.> uses a neural network to predict parameters from audio. We train a convolutional neural network (CNN) architecture with two convolutional layers separated by a max-pooling layer and followed by three fully connected layers on a dataset of 1,000,000 randomly sampled parameter sets and their corresponding mel-spectrograms. While the in-domain evaluation focused on static vocal tract configurations, the speech samples used in the out-of-domain evaluation are time-varying. For all baselines and the gradient-based approach, this is handled by estimating the parameters on a frame-by-frame basis. To avoid sudden jumps in the area, the predictions of the baselines were smoothed over time by applying a Savitzky-Golay filter <cit.>. For our gradient approach, the estimation of each frame was initialized with the previous frame's prediction. §.§.§ Listening Test We reproduced 6 short recordings of human vocalizations with each method. The originals and the reproductions, and the individual ratings are available online.[<https://dsuedholt.github.io/vocal-tract-grad/>] The pitch, breathiness, and vowel shape of the recordings is time-varying. Each recording came from a different male speaker, since the PT's fixed vocal tract length limits its output to voices that are read as male (see section <ref>). We set up an online multiple-stimulus test on the Go Listen platform <cit.> asking participants to compare the four reproductions to the original recording and rate the reproduction on a scale of 0–100. We included an additional screening question in which we replaced one of the reproductions with the original recording to ensure participants had understood the instructions and were in a suitable listening environment. 22 participants took part in the listening test. Of those, 4 gave the original recording in the screening question a rating lower than 80, so their results were discarded. The results of the listening test are shown in Figure <ref>. Friedman's rank sum test indicates that the ratings differ significantly (p < 0.001), and post-hoc analysis using Wilcoxon's signed-rank test confirms that the reproductions obtained by our proposed approach are rated significantly (p < 0.001) higher than the three baselines, indicating that our method is well-suited for the sound matching task. § CONCLUSION We presented a white-box optimization technique for sound matching vowel sounds with the articulatory synthesizer. We obtained a vocal tract frequency response through inverse filtering and estimated corresponding articulatory control parameters with gradient descent optimization, propagating error gradients through the mapping of control parameters to the vocal tract area function. We showed that our approach can accurately match frequency responses for audio generated by the synthesizer itself. Reproductions of time-varying human vocalizations generated with our approach outperformed black-box baselines in a subjective evaluation. By showing that articulatory features can be estimated with a gradient-based method, our work lays the foundation for further research into end-to-end sound matching of articulatory synthesizers using neural networks, which require the propagation of gradients. Additionally, our method can be expanded to explore the sound matching of more complex synthesizers, including those with two- and three-dimensional vocal tract models and varying vocal tract lengths that are not limited to adult male voices. 
§ ACKNOWLEDGMENTS This work was supported by UK Research and Innovation [grant number EP/S022694/1]. The authors would like to thank Benjamin Hayes, Yisu Zong, Christian Steinmetz and Marco Comunità for valuable feedback.
http://arxiv.org/abs/2307.04079v1
20230709011140
Projective Rectangles
[ "Rigoberto Florez", "Thomas Zaslavsky" ]
math.CO
[ "math.CO", "Primary 51E26, Secondary 05B15, 05B35, 05C22, 51A30, 51E20" ]
Dept. of Mathematical Sciences, The Citadel, Charleston, South Carolina 29409 [email protected] Dept. of Mathematical Sciences, Binghamton University, Binghamton, New York 13902-6000 [email protected] A projective rectangle is like a projective plane that has different lengths in two directions. We develop the basic theory of projective rectangles including incidence properties, projective subplanes, configuration counts, a partial Desargues's theorem, a construction from projective planes, and alternative formulations. In sequels we study harmonic conjugation and the graphs of lines and subplanes. [2010]Primary 51E26; Secondary 05B15, 05B35, 05C22, 51A30, 51E20 § INTRODUCTION A projective rectangle is like a projective plane, but narrower than it is tall. More precisely, it is like the set of points on a certain kind of family of lines in a projective plane, with their induced lines. Very precisely, it is an axiomatic incidence structure based on adapting axioms of projective geometry. Projective rectangles are found in all known harmonic matroids, such as full algebraic matroids. Harmonic matroids are matroids within which there is harmonic conjugation <cit.>; their definition was inspired by Lindström's article <cit.> about abstract harmonic conjugation. Harmonic conjugation applied to complete lift matroids of group expansions <cit.> of a triangle (for instance, L_2^k, Example <ref>) led us to structures that looked like vertical strips in projective planes—whence the name “projective rectangle” and the impulse to find a general theory of this idea in terms of incidence geometry. Projective rectangles themselves are almost examples of harmonic matroids, seemingly falling short only in special lines, as we prove in the sequel <cit.>. An indication of what we accomplish in this article: First, the axioms (Section <ref>) and basic consequences for incidence geometry (Section <ref>) and counting (Section <ref>). Especially, we see that a projective rectangle, if it is not a projective plane, contains a multitude of maximal projective planes; we call them its “planes”. Section <ref> develops partial Desarguesian properties of projective rectangles, which satisfy limited versions of the two halves of Desargues's Theorem. In Section <ref> we show that the construction based on a subplane and a special point, alluded to above, actually works to produce projective rectangles in planes that are Pappian, i.e., coordinatized by a field; we do not know how far that subplane construction generalizes. The following section treats the narrowest projective rectangles, which are the simplest and best understood. Next are two sections that give alternative viewpoints: in Section <ref> we see that a projective rectangle is essentially a Paschian transversal design and thus is equivalent to a special kind of orthogonal array, and in Section <ref> we take the approach of projective duality by interchanging points and lines, which may suggest new properties but which we have not studied deeply. We have only an elementary understanding of projective rectangles in general, as is shown by the list of significant open problems in Section <ref>. In sequels we treat adjacency graphs and harmonic conjugation. One concerns the graphs of adjacency of lines and of planes <cit.>.
Notably, in projective rectangles that are not projective planes the graph of planes, where adjacency means having an ordinary line in common, has striking internal structure that presents a tantalizing vision of higher dimensionality. The other sequel <cit.> explores abstract harmonic conjugation as a theme linking harmonic matroids and projective rectangles. In one direction, a projective rectangle is almost a harmonic matroid. In the other direction, a harmonic matroid contains a projective rectangle if it contains a matroid of a finite-field expansion of a triangle, in particular if it contains a Reid cycle matroid. Our personal interest is mainly in finite systems, but many results apply to infinite projective rectangles. For instance, Section <ref> encompasses infinite systems, while Section <ref> requires finiteness. Our viewpoint is influenced by matroid theory but is largely that of incidence geometry; matroid theory is not needed to read this paper. We wish to acknowledge the inspiration of the elegant and deep short papers <cit.> of Bernt Lindström. Lindström's ideas, as further developed by the first author in his doctoral dissertation and <cit.>, led to this study of projective rectangles. § PROJECTIVE RECTANGLES An incidence structure is a triple (,ℒ,ℐ) of sets with ℐ⊆×ℒ. The elements of are points, the elements of ℒ are lines. A point p and a line l are incident if (p,l) ∈ℐ. A set P of points is said to be collinear if all points in P are in the same line. We say that two distinct lines intersect in a point if they are incident with the same point. A projective rectangle is an incidence structure (,ℒ,ℐ) that satisfies the following axioms: * Every two distinct points are incident with exactly one line. * There exist four points with no three of them collinear. * Every line is incident with at least three distinct points. * There is a special point D. A line incident with D is called special. A line that is not incident with D is called ordinary, and a point that is not D is called ordinary. * Each special line intersects every other line in exactly one point. * If two ordinary lines l_1 and l_2 intersect in a point, then every two lines that intersect both l_1 and l_2 in four distinct points, intersect in a point. A complete quadrilateral is an incidence structure that consists of four lines, no three concurrent, and their six points of intersection. A nearly complete quadrilateral is like a complete quadrilateral but with only five of the intersection points; the sixth intersection point may or may not exist. Axiom (A<ref>) states that almost every nearly complete quadrilateral in a projective rectangle is complete. This is a partial Pasch axiom (e.g., see <cit.>), not the full Pasch axiom because it has an exception when either of the first two lines is special; then the remaining two lines may or may not be concurrent. This exception is what admits projective rectangles that are not projective planes. Section <ref> has more discussion of the significance of Axiom (A<ref>). Notation: We write pq for the unique line that contains two points p and q. After we establish the existence of projective planes in , we use the notation abc… to mean the unique line (if abc… are collinear) or plane (if they are coplanar but not collinear) that contains the points abc…. The projective planes are some familiar examples of projective rectangles. A projective plane is called a trivial projective rectangle. 
In particular the Fano plane F_7 is the smallest projective rectangle (see Theorem <ref> Part (<ref>)). The non-Fano configuration is not a projective rectangle; it fails Axiom (A<ref>). The matroid L_2^k is another example of a projective rectangle (see Figure <ref>). It has m=3 special lines. Let A:= { a_g | g ∈_2^k }∪{D }, B:= { b_g | g ∈_2^k }∪{D } and C:= { c_g | g ∈_2^k }∪{D }, where we think of _2^k as a multiplicative group, writing gh for the group operation. Let L_2^k be the simple matroid of rank 3 defined on the ground set E:= A∪ B∪ C by its rank-2 flats. The non-trivial rank-2 flats are A, B, C, which are the special lines, and the sets {a_g, b_g h, c_h } with g and h in _2^k, which are the ordinary lines. We note that L_2^k is the complete lift matroid of the group expansion of a triangle, i.e., L_0(_2^k) in the language of <cit.>. We say more about projective rectangles with m=3 and matroids similar to L_2^k in Section <ref>. § PROPERTIES OF PROJECTIVE RECTANGLES In this section we study essential properties of projective rectangles. We begin with basic facts; then we prove that the projective rectangle contains projective planes and we conclude with a section of counting formulas for later use. §.§ Fundamental properties If a projective rectangle with exactly m special lines has one of them with n points, then we say that the order of is (m,n). We do not assume m or n is finite unless we so state. In Theorem <ref> we prove m≤ n; we also prove that every special line has the same number of points, that every ordinary line has the same number of points, and many other elementary facts about points and lines. (If we define ν := n-1 and μ := m-1, then when the projective rectangle is a projective plane, ν=μ= the order of the plane as customarily defined; that is, one less than the number of points in a line.) The following result states basic properties of a projective rectangle. If is a projective rectangle of order (m,n), then the following hold in : * The point set of ∖ D is partitioned by all special lines deleting D. * There are at least three special lines and four ordinary lines. Moreover, there are at least seven points. * If l is a line and p is a point not in l, then the number of distinct lines incident with p intersecting l equals the number of points on l. * Through each ordinary point there passes exactly one special line. * All ordinary lines have the same number of points. The number of points in an ordinary line is equal to the number of special lines, that is, m. * All special lines have the same number of points, i.e., n points, and the same number of ordinary points, i.e., n-1. * There are exactly m(n-1) ordinary points. * The number of lines incident with an ordinary point is equal to the number of points in a special line, that is, n. The number of ordinary lines that contain each ordinary point is n-1. * The number of points in a special line is at least the number of points in an ordinary line; that is, n ≥ m. * There are exactly (n-1)^2 ordinary lines. * For a given point p in an ordinary line l, there are n-2 ordinary lines intersecting l at p. Proof of Part (<ref>). By Axiom (A<ref>), every point p ∈∖ D belongs to the unique special line pD. Proof of Part (<ref>). From Axiom (A<ref>) we know that in there are four points, no three of them collinear. If one is D, each other one with D generates a special line, all of which are distinct by noncollinearity. 
If none of them is D, the points generate six distinct lines, of which at most two can contain D because no three of the four points are collinear. Thus, the four remaining lines are ordinary lines. Since in one of the ordinary lines there are at least three points, these points form with D three special lines. We have proved that in there are at least three special lines and three ordinary lines. By Axiom (A<ref>), each special line contains at least two ordinary points, so there are at least seven points. Now consider two special lines s, s' and two ordinary points p_1,p_1' on s and p_1',p_2' on s'. The lines p_ip'_j are four distinct ordinary lines. We prove Part (<ref>). From Part (<ref>) we can deduce that in there are a non-incident ordinary point and ordinary line, also that there are a non-incident ordinary point and special line. Let q ∈ l and p∉ l. From (A<ref>) there is exactly one line incident with p that intersects l at q, and all such lines are distinct. We prove Parts (<ref>) and (<ref>). Given an arbitrary ordinary line l, we know by (A<ref>) that each point in l together with D determines a unique special line. Every special line is generated in this way, by (A<ref>). Thus, there is a bijection between the special lines and the points in l. This implies the number of points in any ordinary line equals the number of special lines. We prove Parts (<ref>) and (<ref>). We suppose that l_1 and l_2 are special lines in with n_1 and n_2 points, respectively. Let p be a point non-incident with either of those lines. Part (<ref>) implies that there are n_1 distinct lines intersecting l_1 that are incident with p. Those n_1 lines also intersect l_2. Indeed, one of those lines is special and the remaining (n_1-1) lines intersects l_2 because they are ordinary. Therefore, n_1 ≤ n_2. Similarly, n_2 ≤ n_1. This proves that all special lines have the same number of points. Deducting 1 for the special point D gives the number of ordinary points on a special line. Proof of Part (<ref>). The number of special lines is m, Part (<ref>) says the number of ordinary points in each special line equals n-1 and Part (<ref>) says the special lines partition the ordinary points. Proof of Part (<ref>). We suppose that p is an ordinary point with exactly k incident lines. Let l be a special line with n points and p∉l. From Part (<ref>) we know that there are exactly n distinct lines intersecting l that are incident with p. This implies that k≥ n. We want to prove that k = n. Suppose by contradiction that there is another line l_1 incident with p and not intersecting l. It is clear the l_1 must be an ordinary line. That is a contradiction, because an ordinary always intersect special lines. By Part (<ref>) every special line has n-1 ordinary points, and by definition there are m special lines. Proof of Part (<ref>). Let p be a point in an ordinary line. Two ordinary points in two special lines give rise to a unique ordinary line. Since every special line has n points and one of them is D, it is easy to see that the two special lines give rise to (n-1)^2 ordinary lines. Those are all the ordinary lines that intersect the two special lines. Since every ordinary line intersects every special line, we conclude that there are no more ordinary lines in . Proof of Part (<ref>). Since p is a point in an ordinary line l, from Part (<ref>) there are n lines incident with p. Only one of those n lines is special; the other n-1 are not. This implies that there are n-2 ordinary lines intersecting l at p. 
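Before turning to subplanes, these counting results can be checked computationally on the Example L_2^k. The sketch below uses invented helper names and writes the group of order 2^k additively as bit-vectors; it builds the incidence structure for k = 2 and verifies the formulas of the theorem.

```python
from itertools import product

def build_L(k):
    """Points and lines of the projective rectangle L_{2^k} (a sketch).

    'D' is the special point; the special lines are A, B, C as in the
    Example above, and ordinary lines are the triples {a_g, b_{g+h}, c_h}.
    """
    G = list(product([0, 1], repeat=k))
    add = lambda g, h: tuple((x + y) % 2 for x, y in zip(g, h))
    A = [('a', g) for g in G]
    B = [('b', g) for g in G]
    C = [('c', g) for g in G]
    special = [set(A) | {'D'}, set(B) | {'D'}, set(C) | {'D'}]
    ordinary = [{('a', g), ('b', add(g, h)), ('c', h)} for g in G for h in G]
    return special, ordinary

special, ordinary = build_L(k=2)
m, n = len(special), len(special[0])          # order (m, n) = (3, 2^k + 1)
points = set().union(*special)
assert all(len(l) == m for l in ordinary)     # every ordinary line has m points
assert len(ordinary) == (n - 1) ** 2          # (n-1)^2 ordinary lines
assert len(points) - 1 == m * (n - 1)         # m(n-1) ordinary points
```

The same checks pass for larger k, with m = 3 and n = 2^k + 1.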
§.§ Projective subplanes We show that a projective rectangle is a combination of projective planes, in the strong sense that every two intersecting ordinary lines are lines of a substructure that is a projective plane. Before our results, though, we have to clarify the notion of substructure of an incidence structure (,,). An incidence substructure of (,,) is an incidence structure (',',') in which ' ⊆, ' ⊆, and ' = |'×', i.e., the incidence relation is the same as in the superstructure but restricted to the elements of the substructure. In particular, if (',',') is a projective plane, we call it a subplane of (,,). In a projective rectangle a subplane may contain an ordinary line and all its points; we call that kind full. A full subplane necessarily has order m-1. A subplane need not be full; it also need not be a maximal subplane, for instance if it is a proper subplane of a full subplane. In fact, that is the only way a subplane can fail to be maximal, as we will see in Theorem <ref>. The special point D is very special, as are the special lines. In a projective rectangle , the special point D is a point of every full subplane. Also, for every special line s and every full subplane π, s∩π is a line of π. A full subplane π contains at least two lines, l and l', which intersect at a point p ∈π, and at least one is ordinary, say l. If l' is ordinary, then every special line s intersects both l and l' at different points, unless s is the special line s_p on p. These two points of s determine a line of π, which is the intersection of s with π. Thus, for every special line except possibly s_p, s ∩π is a line of π. If l' is special, or rather if l'=s'∩π for some special line s', then there is at least one point p' on l' that is neither p nor D. Let q be a point in l ∖ p; then π has a line m determined by p' and q, which is ordinary since it contains not only p ∈ s_p but also q ∉ s_p. Then we can replace l' by m and have the case of two ordinary lines, so we may as well assume l' is ordinary. Let s_1 and s_2 be two special lines that are not s_p. Their intersection is in π, but their intersection is D. Therefore, D ∈π. Let p_1 be the intersection of l with s_1 and let p_2 be the intersection of l' with s_2. Since p_1 ∉ l' and p_2 ∉ l, the line m of π determined by p_1 and p_2 does not contain p. Since the points p_1,p_2 are not D and are not in the same special line, m is ordinary, hence it is contained in π. Therefore, m intersects s_p in a point p_12, which cannot be p, so p and p_12 determine a line of π, which must be s_p∩π. That is, s_p∩π is a line of π. Now we present the fundamental result about subplanes. Let be a projective rectangle. If two ordinary lines in intersect in a point, then both lines are lines of a unique full projective plane in . First we state the construction that gives the projective plane. Let l_0 and l_1 be ordinary lines in with exactly one point q in common. (See Figure <ref>.) Let a_0s= l_0∩ s and a_1s= l_1∩ s, where s ranges over the set of special lines in , and pick three special lines to be called x, y, and z such that q ∈ x. Thus, q=a_0x=a_1x. (We know there are three special lines by Theorem <ref> Part (<ref>).) Let b_1s= n_1∩ s, where n_1 is the ordinary line that passes through a_0y and a_1z. Suppose that s and t denote two special lines. We denote by l_st the ordinary line passing through a_0s and a_1t with s,t x and we denote by n_st the ordinary line passing through a_0s and b_1t with s,t y. 
Let L={l_st: s,t ∈, s,t x and s t } and N={n_st: s,t ∈, s,t y and s t }. Note that n_1 = l_yz∈ L and l_1 = n_xz∈ N. We set :=(_,ℒ_,ℐ_), where ℐ_ is the incidence relation defined in and [ _ := (⋃_l∈ N l) ∪ (⋃_l∈ L l) ∪ l_0 ∪{ D },; _1 := { s∩_ : s ∈},; _2 := L ∪ N ∪{ l_0},; _ := _1 ∪_2. ] We begin with the incidence structure given by Construction <ref>. With the notation there, we prove that is a projective plane. First of all, we note that one of the defining properties of a projective plane, that there are four points in _ with no three of them collinear, is satisfied by a_0y, a_1z, q, and D. We next prove that given two lines in , they intersect. Suppose that the two given lines are in L (they are ordinary). If they intersect in a point in l_0 or in a point in l_1, there is nothing to prove. Suppose that neither of those two cases holds. So, they are two ordinary lines that intersect l_0 and l_1 in four different points. Therefore, by Axiom (A<ref>) the two given lines intersect. By a similar argument we conclude that if the two given lines are in N, then they intersect. It is clear that any two lines in _1 intersect in D and that a line in _2 intersects every line in _1. Suppose the two given lines are λ and η with λ∈ L and η∈ N. If a_0y∈λ and q∈η, then λ and η intersect both l_0 and n_1 in four distinct points. Since l_0 and n_1 intersect in a_0y, by (A<ref>) we conclude that λ and η intersect. Now suppose that a_0y∉λ. Since λ intersects both l_0 and l_1 in distinct points, and n_1 intersects l_0 and l_1 in distinct points, by (A<ref>) we know that λ intersects n_1. Then λ intersects l_0 and n_1 in distinct points (because n_1 intersects l_0 at a_0y∉λ). The fact that λ and η both intersect l_0 and n_1 in distinct points, with (A<ref>), implies that λ and η intersect in a point. Supposing q∉η, the proof is similar. Since λ meets l_0 at a_0y∉ l_1, and q = l_0 ∩ l_1 ∉η, each of λ and η intersects l_0 and l_1 in distinct points; thus, λ and η intersect in a point. This completes the proof that any two lines in _ intersect. We now prove that given two points p_0, p_1 ∈_, they are in a line in . (If they are in one line, they cannot be in two, because the lines of are ordinary lines or restrictions of special lines of , and every line in is determined by two of its points.) This proof requires cases depending on the locations of the two points. The proofs (if not trivial) depend on repeated application of Axiom (A<ref>). For economy of notation we employ a shorthand: p_34 = A6(l_1,l_2;l_3,l_4| p_12;p_13,p_23,p_14,p_24) means that each pair {l_i,l_j} intersects at p_ij for ij = 12, 13, 14, 23, 24. Axiom (A<ref>) then implies that l_3 and l_4 intersect at a point p_34, provided that l_1 and l_2 are ordinary. In this proof all four lines are always ordinary. Case 1. If both points are in a special line s, the line in is s∩_∈_1. This includes the case in which one of those points is D. Henceforth we assume the points are not in the same special line. Case 2. If both points are in l_0 or l_1, there is nothing to prove. Case 3. Suppose both points are not in x ∪ l_0 ∪ l_1. Then p_0 is in a line l_st = a_0sa_1t for some two special lines s and t, not equal, and p_1 is in a line l_uv = a_0ua_1v for some two special lines u and v, not equal (but s,t may not be distinct from u,v). Form the point p_3 = A6(l_0,l_1;l_st,l_uv| q; q_0u,a_1v,a_0s,a_1t), then the point p_4 = A6(l_st,l_uv;l_1,| p_3;a_1t,a_0s,p_1,p_0), and finally the point p_5 = A6(l_st,l_uv;l_0,| p_3;a_0u,a_1v,p_1,p_0). 
Now p_3 and p_4 are the intersections of l_0 and l_1, respectively, with . Since p_3 ≠ p_4, is a line generated by a point on l_0 ∖ q and a point on l_1 ∖ q (as p_0, p_1 ≠ q). Since that line is not a special line, it is in L. Therefore, p_0 and p_1 are collinear. Case 4. In this case p_0 ∈ l_0 but p_1 ∉ x ∪ l_0 ∪ l_1. We choose names so p_0 = a_0s and p_1 ∈ l_uv as in Case 3. Choose a_1t∈ l_0 ∖ (∪{q}) and form p_2 = A6(l_0,l_1;l_uv,l_st| q;a_0u,a_1v,a_0s, a_1b); then let p_3 = A6(l_uv,l_st;,l_1 | p_2;a_1t,a_1v,p_0,p_1). Now p_3 is the intersection of with l_1, which implies that is generated by p_0 ∈ l_0 ∖ q and p_3 ∈ l_1 ∖ q. Since is not special, it is a line in L. Case 5. In this most complicated case we assume p_0 ∈ x ∖ q and p_1 ∉ x ∪ l_0 ∪ l_1. As in the preceding cases we take p_1 ∈ l_uv. Step 1: Choose p_2 = A6(n_1,l_0;n_st,l_1 | a_0s;b_1t,a_0s,a_0u,a_1v). Step 2: p_3 = A6(l_0,l_1;n_st,l_uv| q;a_0s,p_2,a_0u,a_1v). Step 3: p_4 = A6(n_st,l_uv;l_1,| p_3;p_2,a_1v,p_0,p_1). Step 4: p_5 = A6(n_st,l_uv;l_0,| p_3;a_0s,a_0u,p_0,p_1). The result is that is generated by p_5 ∈ l_0 ∖ q and p_4 ∈ l_1 ∖ q so it is in L. Case 6. Here we assume p_0 ∈ x ∖ q and p_1 ∈ l_1 ∖ q. In this case we take p_0 ∈ n_su. We first find p_2 = A6(l_1,n_1;n_su,l_1 | a_0s;a_0s,b_1u,q,a_1z). Then we find p_3 = A6(n_su,l_1;,n_1 | p_2;p_0,p_1,b_1u,a_1z) and last p_4 = A6(l_1,n_1;l_0,| a_1t;q,a_0s,p_1,p_3). Then is generated by p_4 ∈ l_0 ∖ q and p_1 ∈ l_1 ∖ q, therefore it is in L. Case 7. Now p_0=q and p_1 ∉ x ∪ l_0 ∪ l_1. As usual we take p_1 ∈ l_uv. The first step is to define p_2 = A6(l_0,l_1;l_uv,n_1 | q;a_0u,a_1v,a_0s,a_1t), and then p_3 = A6(l_1,l_uv;,n_1 | a_0u;p_0,p_1,a_1t,p_2). Since p_3 lies on n_1 it is a point b_1w for a special line w ≠ x. Thus, is generated by p_0 = q = a_0x and p_3 = b_1w; this line is n_xw so it is in N. Case 8. The last case is where p_0=q and p_1 ∈ l_1. Both are in the line l_1. In all cases there is a line in _ that contains both p_0 and p_1, so they are collinear in . We have proved collinearity of all pairs of points in _, so is indeed a projective planes. An interpretation of Theorem <ref> is the following corollary. Given three noncollinear ordinary points in a projective rectangle , there is a unique full projective plane in that contains all three points. Given an ordinary line l and an ordinary point p not in l, there is a unique full projective plane in that contains both. For the first part, let the three points be p,q,r. No special line contains all three, so there is one, say p, that is not in a special line through the others. The lines pq and pr are ordinary lines, they are distinct by noncollinearity of the three points, and they intersect, so by Theorem <ref> there is a unique full projective plane that contains them and the three points. The second part follows by taking q,r ∈ l. In a projective rectangle, every maximal subplane is full. The line set of an incidence subplane π contains two ordinary lines l_1,l_2 and its point set contains their intersection point. It follows from Theorem <ref> that π is a subplane of the full subplane determined by l_1 and l_2. Thus, maximality and fullness are equivalent for projective subplanes of a projective rectangle. From now on, when we refer to a plane in a projective rectangle, we mean a full projective subplane. Also, when we say several lines are coplanar, we mean there is a plane π such that each of the lines that is ordinary is a line of π and for each line s that is special, s ∩π is a line of π. 
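For the smallest nontrivial case L_2^2 the theorem above can also be checked by brute force: closing two intersecting ordinary lines under joins by ordinary lines, together with D (which by the lemma above lies in every full subplane), always yields a seven-point full subplane, i.e. a Fano plane, and over all intersecting pairs only twelve distinct planes arise, consistent with the counts in the section on finite projective rectangles below. The sketch below is our own illustration, not the paper's Construction <ref>; names such as full_plane are hypothetical.

```python
# Close two intersecting ordinary lines of L_{2^2} under joins and confirm
# that the result is always a 7-point full subplane (a Fano plane).
# build_L2k and full_plane are our own ad-hoc helpers, not the paper's
# Construction; D is added at the start because every full subplane contains it.
from itertools import product, combinations

def build_L2k(k):
    G = list(product([0, 1], repeat=k))
    add = lambda g, h: tuple(x ^ y for x, y in zip(g, h))
    D = ('D',)
    special = [frozenset([(t, g) for g in G] + [D]) for t in 'abc']
    ordinary = [frozenset([('a', g), ('b', add(g, h)), ('c', h)])
                for g in G for h in G]
    return special, ordinary, D

def full_plane(l0, l1, special, ordinary, D):
    """Smallest point set containing l0, l1, D that is closed under adding
    every ordinary line through two of its points."""
    pts = set(l0) | set(l1) | {D}
    while True:
        new = set()
        for p, q in combinations(pts, 2):
            line = next(l for l in special + ordinary if p in l and q in l)
            if line in ordinary:
                new |= set(line) - pts
        if not new:
            return frozenset(pts)
        pts |= new

special, ordinary, D = build_L2k(2)
planes = set()
for l0, l1 in combinations(ordinary, 2):
    if l0 & l1:                                  # the two ordinary lines intersect
        pi = full_plane(l0, l1, special, ordinary, D)
        assert len(pi) == 7                      # order m-1 = 2: a Fano plane
        planes.add(pi)
print(len(planes), "distinct full subplanes")    # 12, matching the later counts
```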
We can now characterize a nontrivial projective rectangle as a projective rectangle that contains more than one maximal projective subplane. Such projective rectangles have properties not common to all projective planes; e.g., they satisfy the dual half of Desargues's Theorem (see Theorem <ref>) and they are harmonic matroids (see <cit.>). Let be a projective rectangle. Every ordinary line in is a line of a plane in . If is nontrivial, then every ordinary line l is a line of at least three planes that contain l. Let l be an ordinary line in . From Theorem <ref> Part (<ref>) we know that there is another ordinary line l' that intersects l at exactly one point. This and Theorem <ref> imply that l is in a plane π. If is nontrivial, there is a point q not in π. Let p_1,p_2 ∈π be points in l that are not in the special line that contains q. Then the plane p_1p_2q that contains both ordinary lines p_1q and p_2q, which exists and is unique by Theorem <ref>, is a plane containing l that is different from π. To find a third plane, let p_1 ∈π_1 and p_2 ∈π_2 be ordinary points not in l. There is an ordinary line p_1p_2 that must contain a third point p_3 since m≥3 by Theorem <ref>. By Corollary <ref> there is a unique plane π_3 that contains l and p_3. If s is a special line in the projective rectangle and π is a plane in , then s ∩π is a line of π. Let p_1 and p_2 be points in distinct special lines that are not s. Then by Axiom (A<ref>) there is an ordinary line l that contains both p_1 and p_2, and by Corollary <ref> there is a plane π that contains l. In π there is another line l' that intersects l at p_1; then q=l∩ s and q'=l' ∩ s are two points in s ∩π, which determine a line in π that is contained in the unique line s of that contains q and q'. Thus, s ∩π is a line of π. Now we prove a generalization of Theorem <ref> to all lines, although we lose uniqueness of the containing plane. Let be a projective rectangle. If two lines l_1 and l_2 intersect in a point p, then they are coplanar. Suppose l_1 is a special line. There are points p_1 in l_1 ∖ l_2 ∖ D and p_2 in l_2 ∖ l_1. By Axiom (A<ref>) there is an ordinary line l_3 determined by p_1 and p_2. If l_2 is ordinary, by Theorem <ref> there is a unique plane π that contains l_2 and l_3. By Proposition <ref> the restriction of l_1 to π is a line of π, so l_1 and l_2 are coplanar. If l_2 is special, then l_3 is ordinary. By Proposition <ref> there is a plane π that contains l_3, and by Proposition <ref> both l_1∩π and l_2∩π are lines of π. Thus, l_1 and l_2 are coplanar. Next is an intersection property of lines that has a consequence for the matroid structure of a projective rectangle. Suppose three lines in a projective rectangle intersect pairwise in three different points. Then they are a coplanar triple. Equivalently, if three lines intersect pairwise (i.e., are pairwise coplanar) but are not a coplanar triple, then they all intersect in the same point. Suppose two ordinary lines l_1, l_2 intersect in a point p and lie in a common plane π, and suppose a third line l_3, possibly special, intersects l_1 and l_2 in points different from p. Choosing any points q_1 ∈ l_1 ∖ p and q_2 ∈ l_2 ∖ p determines a line of π through q_1 and q_2. By Construction <ref> and Theorem <ref>, this line is either an ordinary line of or the restriction to π of a special line of . In particular, this applies to l_3, hence l_1, l_2 and l_3 are a coplanar triple of lines of . 
In case l_1 is ordinary while l_2 and l_3 are special, by Corollary <ref> l_1 and l_2 are coplanar in a plane π and by Proposition <ref> l_3∩π is a line of π, so the three lines are coplanar. The second statement, which is the contrapositive of the first (and see Corollary <ref>), is a useful restatement.

If a finite projective rectangle has order (n,n), then it is a projective plane. Because n=m, the projective plane of Corollary <ref> is the whole projective rectangle. This proposition does not apply to the infinite case; see Example <ref>.

§.§ No Vamos configuration

The Vamos matroid is the matroid of eight points in Figure <ref>. It is one of the smallest matroids that cannot be represented in a projective geometry; for that reason it is one of the fundamental matroid examples. However, we shall not think of it as a matroid but as an incidence structure with eight points as well as lines and planes. The lines are the solid lines in Figure <ref> and the planes are the ones composed of pairs of lines as described in the caption. (As a matroid a projective rectangle has rank 3 while the Vamos matroid has rank 4 and therefore it is trivial that it cannot be a submatroid of a projective rectangle. That is why it is important to think of the Vamos incidence structure instead of the Vamos matroid, even though they look the same in a diagram.)

The Vamos incidence structure is not a substructure of any projective rectangle. Suppose a configuration of this kind exists in a projective rectangle. By Proposition <ref> the lines l_1,l_2,l_3 are concurrent in a point and the lines l_2,l_3,l_4 are also concurrent in a point. Clearly, these points are one point, so l_1 and l_3 contain a common point and hence are coplanar, contrary to the structure of the Vamos matroid. That proves the corollary.

§ FINITE PROJECTIVE RECTANGLES

In finite projective rectangles there are many possibilities for counting elements and configurations. They are the topic of this section.

§.§ Counts

We extend the counts of points, lines, etc. in Section <ref> to planes and various kinds of incidence. Let be a projective rectangle of order (m,n).
* The number of ordinary lines that are concurrent with each ordinary line is m(n-2).
* There are m(m-1) ordinary points and (m-1)^2 ordinary lines in each plane.
* The number of pairs (p,l) that consist of an ordinary point p and an ordinary line l that contains p is m(n-1)^2.
* The number of planes that contain each ordinary line is (n-2)/(m-2).
* The number of pairs (l,π) such that l is an ordinary line and π is a plane that contains l is (n-1)^2 (n-2)/(m-2).
* The number of planes in is (n-1)^2(n-2)/((m-1)^2(m-2)).
* For a fixed ordinary point p, the number of triples (p,l,π) such that l is an ordinary line incident with p and π is a plane that contains l is (n-1)(n-2)/(m-2).
* The number of triples (p,l,π) such that p is an ordinary point, l is an ordinary line, and π is a plane that contains l is m(n-1)^2 (n-2)/(m-2).
* The number of pairs (p,π) such that p is an ordinary point and π is a plane that is incident with p is m(n-1)^2(n-2)/((m-1)(m-2)).
* The number of planes that are incident with each ordinary point is ((n-1)/(m-1))((n-2)/(m-2)).

Proof of (<ref>). Let l be an ordinary line. From Part (<ref>) there are m points on l. From Theorem <ref> Part (<ref>) we know there are n-2 ordinary lines that intersect l at each point. All those lines are distinct.

Proof of (<ref>). This follows from the fact that the plane is projective of order m-1. We exclude the one special point D and the m special lines in the plane.
Proof of (<ref>). Each of the (n-1)^2 ordinary lines (Theorem <ref> Part (<ref>)) contains m ordinary points (Part (<ref>)).

Proof of (<ref>). Let l be an ordinary line. From Part (<ref>) there are m(n-2) ordinary lines l' that intersect l at exactly one point. Theorem <ref> guarantees the existence of a unique plane π that contains both l and l'. By Part (<ref>) the number of ordinary lines in π that intersect l is (m-1)^2-1 = m(m-2). Thus, the number of planes on l is the quotient, m(n-2)/m(m-2)=(n-2)/(m-2).

Proof of (<ref>). The number of ordinary lines should be multiplied by the number of planes on each line.

Proof of (<ref>). The number of incident line-plane pairs should be divided by the number of ordinary lines in a plane.

Proof of (<ref>). The number of incident line-plane pairs should be multiplied by the number of points in an ordinary line.

Proof of (<ref>). The number of triples in Part (<ref>) should be multiplied by the number of ordinary points from Part (<ref>).

Proof of (<ref>). The number of triples in Part (<ref>) should be divided by the number of ordinary lines in π that contain p, which is m-1.

Proof of (<ref>). Either divide the number of triples in Part (<ref>) by m-1, the number of ordinary lines on p in π, or divide the number in Part (<ref>) by m(n-1), the total number of ordinary points.

Two lines are skew if they have no point in common. A skew class of lines is a maximal set of lines in which every pair is skew. If a line has no skew mate, it is a skew class of one. A line may belong to more than one skew class. Two lines that are skew to the same line may intersect.

If is a finite projective rectangle of order (m,n), then the following hold in :
* Given an ordinary point p and any ordinary line l that does not contain p, there are exactly n-m ordinary lines containing p that are skew to l.
* If l is an ordinary line, then there are (n-2)(n-m) lines that are skew to l.
* If l_1 is skew to l, there are m(n-m) lines skew to l that are concurrent with l_1.

Proof of Part (<ref>). From Theorem <ref> Part (<ref>) we know that there are exactly n lines passing through p (including a special line). From Theorem <ref> Part (<ref>) we also know that there are exactly m lines passing through p that intersect l (including a special line). Therefore, there are exactly (n-1)-(m-1) ordinary lines passing through p and skew to l.

Part (<ref>) follows by subtracting from the number of ordinary lines, (n-1)^2 (Theorem <ref> Part (<ref>)), the number that are concurrent with l, which is m(n-2) (Theorem <ref> Part (<ref>)), and the number that are l, which is 1.

Part (<ref>) follows from Part (<ref>).

Suppose that is a nontrivial projective rectangle of order (m,n). Let l be an ordinary line in . There is a skew class containing l that has at least m lines in it; i.e., there are m-1 ordinary lines skew to l and skew to one another. Let M = ⌈ (n-m)/(m-1) ⌉ - m, the largest integer such that (n-1)/(m-1)>m+M. Then there is a skew class containing l that has at least m+M lines in it; i.e., there are m+M-1 ordinary lines skew to l and skew to one another.

Let l be an ordinary line and let l_1 ≠ l be an ordinary line passing through q∈ l. Let p ≠ q be a second point in l. By Theorem <ref> Part (<ref>), since n>m there is an ordinary line l_2 passing through p skew to l_1. Let a_i and b_i' be the points in l_1 and l_2 for i=1,2, …, m, labeled so that the line a_ib_i' is special.
Lines a_ib_i and a_jb_j for i,j∈{1,2, …, m} with i ≠ j, b_i ≠ b_j, b_i ≠ b_i', and b_j ≠ b_j' are ordinary and are skew to each other, because if they intersect, then by Axiom (A<ref>), l_1 intersects l_2, which is a contradiction. Note that it is easy to choose all b_i ≠ b_i' since m>1. Also, we can suppose that l is the line a_1b_1. Now we suppose that (n-1)/(m-1)-m>0 and M is the largest integer such that (n-1)/(m-1)>m+M. (Thus, n>m+M.) Let s be a special line with points s_1, s_2, …, s_m, …, s_n-1,D. Suppose that s∩ a_ib_i=s_i for i=1, …, m. We prove by induction that there are lines h_1, h_2, …, h_M, skew to one another and to all lines of the form a_ib_i. Assume we have k lines h_1, h_2, …, h_k that are skew to one another and to all lines of the form a_ib_i for some k∈{0,1, …, M-1}, where s_m+t∈ h_t for t=1, 2, …, k. First note that neither h_t nor a_ib_i contains the point s_m+k+1 and that (m-1)(m+k) is the number of points in (⋃_t=1^k h_t∪⋃_i=1^m a_ib_i)∖ s. Thus, the maximum number of ordinary lines passing through s_m+k+1 that intersect one of the lines a_ib_i or h_1, …, h_k is (m-1)(m+k). Since s_m+k+1 is an ordinary point, by Theorem <ref> Part (<ref>) we know there are n-1 ordinary lines passing through this point. Since (n-1)>(m-1)(m+k) there must be at least one ordinary line h_k+1 passing through s_m+k+1 that is skew to all lines of the form a_ib_i and to the lines h_1, …, h_k. This proves the induction, completing the proof.

In the notation of Theorem <ref>, M = (τ-1)m - 2τ. This is negative or zero if τ = 1, or if τ=2 and m≤4, and positive otherwise, so in the “otherwise” case the second bound on the maximum size of the skew class is the better one.

§.§ Constraints on the parameters

We have found some integers in Theorem <ref>, namely, ρ=(n-2)/(m-2), ((n-1)/(m-1))((n-2)/(m-2)), and ((n-1)^2/(m-1)^2)((n-2)/(m-2)). These integral fractions imply relationships between m and n. Theorem <ref> is a constraint on n, given a value of m. By Section <ref> m-1 must be the order of a projective plane; that is the only constraint we know on m.

Let p,p' be two ordinary points in a special line s. Let s' be any other special line. The planes π that contain both p and p' partition s'∖ D into sets π∩(s'∖ D) of size m-1, and each such set is in a unique plane that contains p and p', so there are (n-1)/(m-1) such planes. For an ordinary point q∈ s' let π(q) denote the plane that contains p,p',q. This plane is unique, by Theorem <ref>, because it is determined by the intersecting ordinary lines pq and p'q. Choose another ordinary point q' ∈ s' ∖π(q) and suppose π(q) and π(q') contain a common point r ∈ s' ∖ D. Then both planes contain the intersecting ordinary lines pr and p'r, so they must be the same plane. It follows that the distinct planes π(q) for q ∈ s' ∖ D partition the points of s' ∖ D. The intersection π(q) ∩ s' is a line of π(q) that contains D, so the number of ordinary points in it is m-1. The number of sets into which s' ∖ D is partitioned is therefore equal to (n-1)/(m-1), and this is the number of planes that contain both p and p'.

For a projective rectangle of order (m,n), there is an integer τ≥ 0 such that n = m + τ (m-1)(m-2). If is nontrivial, then τ≥ 1. We simplify the notation by writing ν=n-1 and μ=m-1. Integrality of (n-2)/(m-2) implies that there is an integer ρ≥ 1 such that ν = 1 + ρ(μ-1). Proposition <ref> implies that ν = σμ for some positive integer σ. Therefore, ν = ρ(μ-1)+1 = σμ. It follows that (ρ-σ)μ = ρ-1, so ρ-1 is a multiple of μ, say ρ = τμ+1 where τ≥0.
Then substituting for ρ gives (τμ+1-σ)μ = τμ, and upon division by μ we find that σ = τ(μ-1) + 1. This implies ν = τμ(μ-1) + μ, so n-m = ν-μ = τμ(μ-1). We infer the expressions (n-2)/(m-2) = τ(m-1)+1, (n-1)/(m-1) = τ(m-2)+1, ((n-1)/(m-1))((n-2)/(m-2)) = [τ(m-2)+1] [τ(m-1)+1], ((n-1)^2/(m-1)^2)((n-2)/(m-2)) = [τ(m-2)+1]^2 [τ(m-1)+1].

If the projective rectangle is nontrivial, n ≥ (m-1)^2 + 1 and ρ≥ m.

If the projective rectangle has m=3, then n= 3 + 2τ, where τ≥0. The value τ=0 gives the Fano plane and τ=1 gives n=5 as with the L_2^2 projective rectangle of Example <ref>. However, not all those values of τ admit a projective rectangle with m=3; there are examples only for n = 2^k+1, that is, for τ = 2^{k-1}-1 (see Section <ref>). Our numerical constraints need strengthening.

§ AXIAL AND CENTRAL DESARGUES'S THEOREMS

Consider two triangles in a projective rectangle, A = a_1a_2a_3 and B = b_1b_2b_3. (A triangle consists of three points, not all collinear, and the three lines joining the points in pairs.) There are three lines l_i = a_ib_i; if they concur in a point p we say the triangles are centrally perspective from center p. If each of the three pairs of lines a_ia_j and b_ib_j meets in a point p_ij and the points p_12, p_13, p_23 are collinear in a line l, we say A and B are axially perspective from axis l. The Central Desargues's Theorem says that, if two triangles are centrally perspective, then they are axially perspective. The converse is the Axial Desargues's theorem. The two together are generally known as Desargues's Theorem. In a projective plane the points p_ij always exist. However, neither half of Desargues's Theorem is valid in every projective plane; in fact the validity of Desargues's Theorem is equivalent to the existence of plane coordinates in a division ring. Thus, for any plane, knowing whether Desargues's theorem holds true is a fundamental question.

Every projective plane is a projective rectangle, so we cannot say that Desargues's Theorem holds true in every projective rectangle; but eliminating projective planes from consideration changes the situation. We first establish that each triangle in the axial configuration is necessarily coplanar.

If A= a_1a_2a_3 is a triangle and l is a line that intersects the three lines a_ia_j in three points p_ij, then all six points and the four lines are contained in a unique plane.

There are four lines in the configuration of six points: l and the lines l_ij = a_ia_j. At most two can be special, so two are ordinary, say l' and l''. Any two of the four lines intersect, so l' and l'' intersect; this implies they are in a unique plane π (by Theorem <ref>). The other two lines of the four are each determined by one point in l and one in l', so each is a line of π, or if special the intersection with π is a line of π.

Let be a nontrivial projective rectangle. Every plane in satisfies the Axial Desargues's Theorem when the axis is an ordinary line.

We begin by assuming triangles A and B are in planes π_A and π_B, respectively, and are axially perspective from an ordinary line l with intersection points p_ij, as in Figure <ref>. The two planes may be the same or different; if they are different, l is their intersection. We may assume a_i ≠ b_i for i=1,2,3 because otherwise the conclusion is trivial. If a_1b_1, a_2b_2, a_3b_3 are not all coplanar, they are coplanar in pairs, since a_i,b_i,a_j,b_j ∈p_ija_ia_j. Hence, by Proposition <ref> there is a point q at which all three lines are concurrent; therefore, q is a center of perspectivity for A and B.
Thus, we assume henceforth that a_1b_1, a_2b_2, a_3b_3 are all in one plane, so that π_A = π_B. There is another plane π_ on l because is nontrivial and l is ordinary (by Corollary <ref>), and in this plane we can find a triangle = _1_2_3 that is axially perspective from l with the same intersection points p_ij = l ∩_i_j. The lines b_i_i and b_j_j are coplanar in a plane p_ijb_i_j = b_i_ib_j_j. Therefore, they intersect in a point s_ij. The pairwise coplanar lines b_1_1, b_2_2, and b_3_3 are not all coplanar because _1_2_3 = π_∌b_1,b_2,b_3. By Proposition <ref>, those three lines have a common point s = s_12 = s_13 = s_23. See Figure <ref>. Similarly, there is a point r = a_1_1∩a_2_2∩a_3_3. We prove that r ≠ s and r,s ∉π_A. If r=s, then a_i_i = ra_i_i = r_i and b_i_i = sb_i_i = r_i, so ra_i_i and rb_i_i are the same line; that is, a_i,b_i,_i are collinear; but this is impossible. Similarly, a_i,b_i,_i are collinear, which is impossible, if r or s ∈π_A. Each plane a_ib_i_i contains r and s so the lines a_ib_i and rs are coplanar. We know that r,s ∉a_ib_i⊂π_A. Hence, we have three triples a_ib_i, a_jb_j, rs of lines that are coplanar in pairs but not all coplanar. By Proposition <ref> there is a point q_ij at which each triple is concurrent. Then taking i=1 and j=2,3, we have q_12 = rs∩a_1b_1 = q_13, so q_12=q_13 is a point on all three lines a_1b_1, a_2b_2, a_3b_3 and a center of perspectivity for A and B. That completes the proof. The case in which A and B are not coplanar is reminiscent of the higher-dimensional Desargues's Theorem for projective geometries. That suggests a central Desargues's Theorem for noncoplanar triangles. Let be a nontrivial projective rectangle. Then satisfies the Central Desargues's Theorem for triangles that are not coplanar. We begin by assuming triangles A and B are in two different planes, π_A and π_B respectively, and are centrally perspective from a point p. We show that we may assume a_i ≠ b_i for i=1,2,3. Since the triangles are not coplanar, they cannot be equal; in particular, say, a_3 ≠ b_3. The conclusion is trivial if a_1=b_1 and a_2=b_2; the axis is then a_1a_2=b_1b_2. Suppose henceforth that a_2 ≠ b_2 and a_3 ≠ b_3. Assume first that a_1 ≠ b_1. Let l_i := a_ib_i (which exists and contains p by central perspectivity), p_ij := l_i ∩ l_j (which exists because a_i,b_i,a_j,b_j,p are coplanar and any distinct three of them, excluding D if one of them is not ordinary, determine the plane), and λ_ij := p_ikp_jk where {i,j,k} = {1,2,3}. The lines λ_ij exist if a_1 ≠ b_1 because if p_ij=p_ik (i,j,k all different), then this point is the intersection of a_ia_j and a_ia_k but that intersection is a_i, and it is also the intersection of b_ib_j and b_ib_k but that intersection is b_i, from which it follows that a_i=b_i, contrary to our assumption. Now we observe that all points p_ij∈π_A ∩π_B, so all lines λ_i ⊆π_A ∩π_B. But as we assumed π_A ≠π_B, their intersection cannot consist of more than one line. It follows that λ_12 = λ_13 = λ_23 and this is the required axis of perspectivity. If a_1=b_1, in the previous discussion the line l_1 degenerates to a point and the rest of the proof is similar but simpler, with a_1p_23 as the axis of perspectivity. We note that any of the lines in the proof might be special, but because we only argue within planes, the proof is not affected. Theorem <ref> reinforces our belief that a nontrivial projective rectangle should be regarded as, in a strange way, nonplanar. Unfortunately, we were not able to make this intuition precise. 
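The proof above relies on the existence of further planes through the ordinary line l in a nontrivial projective rectangle. For L_2^2 this can be confirmed exhaustively: every ordinary line lies in exactly (n-2)/(m-2) = 3 full subplanes, and there are 12 planes in all, matching the counting theorem. The following sketch is ours, reusing the same ad-hoc helpers as in the earlier sketches; it is not part of the paper's argument.

```python
# Count the full subplanes of L_{2^2} through each ordinary line.
# Expected: (n-2)/(m-2) = 3 planes per ordinary line and 12 planes in all,
# matching the counting theorem.  Helper names are ours, not the paper's.
from itertools import product, combinations

def build_L2k(k):
    G = list(product([0, 1], repeat=k))
    add = lambda g, h: tuple(x ^ y for x, y in zip(g, h))
    D = ('D',)
    special = [frozenset([(t, g) for g in G] + [D]) for t in 'abc']
    ordinary = [frozenset([('a', g), ('b', add(g, h)), ('c', h)])
                for g in G for h in G]
    return special, ordinary, D

def full_plane(l0, l1, special, ordinary, D):
    # closure of l0, l1, D under adding ordinary lines through two of its points
    pts = set(l0) | set(l1) | {D}
    while True:
        new = set()
        for p, q in combinations(pts, 2):
            line = next(l for l in special + ordinary if p in l and q in l)
            if line in ordinary:
                new |= set(line) - pts
        if not new:
            return frozenset(pts)
        pts |= new

special, ordinary, D = build_L2k(2)
all_planes = set()
for l in ordinary:
    planes_on_l = {full_plane(l, l2, special, ordinary, D)
                   for l2 in ordinary if l2 != l and l & l2}
    assert len(planes_on_l) == 3                 # (n-2)/(m-2) for (m,n) = (3,5)
    all_planes |= planes_on_l
print("planes through every ordinary line: 3; total planes:", len(all_planes))
```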
§ THE SUBPLANE CONSTRUCTION OF PROJECTIVE RECTANGLES Given a projective plane π and a subplane π', we wish to get a projective rectangle by taking a point D, all the lines joining it to points of π', all the points on those lines, and all the restrictions to our point set of the lines in π that are generated by our points (i.e., contain at least two of our points). D must be taken in the subplane. Suppose D is not in π'. Take a point P ∈π' and the line PD. This is supposed to be a special line so it must be a line of any plane in the projective rectangle; the proof is that every line of a projective rectangle, thus every line of π', intersects every special line (Axiom (A<ref>)), so L ∩π' cannot be one point. Therefore L ∩π' must be a line of π'. Now consider a second point P' ∈π' ∖ L. Then L and L'=P'D are both extensions of lines of π' so they intersect in π', but they intersect in D; this means D ∈π'. We could simplify the construction: Take a subplane π' and one line l in it, and any point D in π' ∖ l. For the projective rectangle, take all lines that join D to l and for ' take all points of π on those lines. This gives precisely the subplane construction, because already it gives all the points of π' and then only the points generated from D and π' in that construction. A plane is Pappian if it is coordinatized by a (commutative) field. The subplane construction in a Pappian projective plane produces a projective rectangle. Let our point set be ' and the incidence structure induced on it by π be '. There are two kinds of line in ': a long line is a line of π and a short line l is the restriction to ' of a line L of π that is not contained in ', so if l is any short line, L denotes its extension into π. If ' turns out to be a projective rectangle, the long lines will be the special lines of ' and the short lines will be the ordinary lines. Axiom (A<ref>): By definition, since we took every line generated by two points of '. Axiom (A<ref>): Four such points exist in the subplane π'. Axiom (A<ref>): By definition. Axiom (A<ref>): Every point of ' is in a long line, every short line of ' is a restriction of a line L of π, and any two lines of π intersect in a point P. Thus, for each short line l of ', its extension L intersects each long line s in a point which, by definition of ', is in the long line s. Axiom (A<ref>): Follows from (A<ref>) because there are at least 3 special lines. Axiom (A<ref>): Let the other two lines be l_1' and l_2'. If either of them is long, the conclusion follows from Axiom (A<ref>). Therefore, assume l_1' and l_2' are short lines. If two or more of them are in π', then all four are and the property follows from that of a projective plane. This leaves two cases: One of the lines is in π', or none is. We give an analytic proof, using coordinates, when π=π() for a field , so we can take π' to be a subplane generated by a subfield '. We write P := l_1 ∩ l_2, Q_ij := l_i ∩ l_j', R := L_1' ∩ L_2'. We need to prove that R ∈'. We give an analytic proof. Write I_m for the point on the ideal line L_∞ that is on all lines of slope m. We choose D to be the point I_∞ on all vertical lines of π; thus, the point set of our supposed projective rectangle is ' = {[z:x:y] : z=0, or z=1 and x ∈'}. We consider two cases, depending on whether or not one of the short lines is within π'=π('). Case 1. One of the short lines is in π', say l_1 ⊆π'. 
Since we can assign coordinates arbitrarily to any three noncollinear points in π', we may choose the coordinate system so that l_1 has the equation y=0, P = (0,0), l_2 has the equation y=m_2x, and l_2' has the equation y = b_2' (where b_2' ∉' since l_2' ⊈π'). Then Q_12 = I_0. The equation of l_1' has the form y = m_1'x+b_1'. Note that m_2, m_1' ∉' since l_2, l_1' are not in π'. From this information we can find the coordinates of the other intersection points. They are Q_11 = (-b_1'/m_1', 0), Q_21 = (b_1'/(m_2-m_1'), y_21), Q_22 = (b_2'/m_2, b_2'), R = ((b_2'-b_1')/m_1', b_2'). Because Q_11, Q_21, Q_22∈', their x-coordinates are in '. None equals 0. Therefore, m_1'/b_1', (m_2-m_1')/b_1', b_2'/m_2∈', so also m_2/b_1'∈'. The x-coordinate of R is (b_2'-b_1')/m_1' = b_2'/m_1' - b_1'/m_1' = (b_2'/m_2)(m_2/b_1')(b_1'/m_1') - b_1'/m_1'∈', proving that R ∈'.

Case 2. None of the four short lines is in π'. We choose coordinates so that P ∈ L_∞; that is, P = I_m for some m ∈, so l_1 has equation y=mx+b_1 and l_2 has equation y=mx+b_2 with b_1,b_2 ∈ and b_1 ≠ b_2. The other lines l_j' have equations y = m_j'x+b_j', where m_j' ≠ m. The special case m_1'=m_2' is not excluded, but then R ∈ L_∞⊆', so we may assume m_1' ≠ m_2'. The special case b_1' = b_2' is also not excluded; then R is in the line x=0; this case will be dealt with in the course of the proof. We can exclude m_1'=m and m_2'=m since then P ∈ l_1' or l_2', respectively, which violates the assumption of Axiom (A<ref>). The intersection points (other than P), which cannot be in L_∞, have coordinates Q_11 = ((b_1-b_1')/(m_1'-m), y_11), Q_12 = ((b_1-b_2')/(m_2'-m), y_12), Q_21 = ((b_2-b_1')/(m_1'-m), y_21), Q_22 = ((b_2-b_2')/(m_2'-m), y_22), R = ((b_1'-b_2')/(m_2'-m_1'), y_R). The x-coordinates of the Q_ij are in '; we want to show that of R is also in '. Write ρ_ij for the x-coordinate of Q_ij. That is, b_i-b_j' = ρ_ij(m_j'-m). These are four equations E_ij. By combining E_11 with E_21 and E_12 with E_22 we infer that b_2-b_1 = (ρ_21-ρ_11)(m_1'-m) = (ρ_22-ρ_12)(m_2'-m). Thus, (m_1'-m)/(m_2'-m) = (ρ_22-ρ_12)/(ρ_21-ρ_11) =: α∈'. (This last step would be forbidden if ρ_21=ρ_11, but that implies l_1' contains D, contrary to assumption.) Now combining E_11 with E_12 and E_21 with E_22 we infer that b_2'-b_1' = ρ_11(m_1'-m) - ρ_12(m_2'-m) = (ρ_12-αρ_11)(m_2'-m) with α∈', and similarly b_2'-b_1' = (ρ_22-βρ_21)(m_1'-m) with β∈'. Rewriting, m_1'-m = (b_2'-b_1')/(ρ_22-βρ_21) and m_2'-m = (b_2'-b_1')/(ρ_12-αρ_11), which combine to give m_2'-m_1' = (b_2'-b_1') ( 1/(ρ_12-αρ_11) - 1/(ρ_22-βρ_21) ), or in a different form, (m_2'-m_1')/(b_2'-b_1')∈'. This is the reciprocal of the x-coordinate of R; consequently, R ∈'. The one caveat is that, if b_1'=b_2', we cannot proceed from Equation (<ref>); but then that equation implies m_1'=m_2', which was excluded at the beginning of the proof. So this difficulty will not occur. That concludes the proof of Theorem <ref>.

If π is Pappian and not prime, it has a prime subplane so there are proper subplanes to carry out this construction. All Desarguesian planes and many others have proper subplanes (e.g., planes over near fields; cf. the book of Hughes and Piper <cit.>). However, we do not know whether the subplane construction works in a non-Pappian plane. We did not try to construct an algebraic proof for Desarguesian planes; we chose to study only Pappian planes to keep the algebra simple. We fear that generalization may require finding a synthetic proof.

There are nontrivial projective rectangles in which n=m, but n,m must be infinite.
Suppose is a field that has a proper subfield ' of the same infinite cardinality. The subplane construction generates a nontrivial projective rectangle with n=|| and m = |'| = n, within which π(') is one of the (full) planes. This contrasts with the case of finite m=n in Proposition <ref>.

§ NARROW RECTANGLES

The smallest allowed value of m is 3. We call a projective rectangle narrow if it has m=3. The matroid L_2^k of Example <ref> is defined for any group 𝔊 (except the trivial group), simply replacing _2^k by 𝔊. In fact, all we need for 𝔊 is a (nontrivial) quasigroup; this matroid is the complete lift matroid L_0(𝔊K_3) from <cit.> or <cit.>. We define L_0(𝔊K_3) in a way compatible with Example <ref>. The ground set is E:= A∪ B∪ C where A:= { a_g | g ∈𝔊}∪{D }, B:= { b_g | g ∈𝔊}∪{D } and C:= { c_g | g ∈𝔊}∪{D }. The lines (rank-2 flats of the matroid) are A, B, and C and the sets {a_g, b_g h, c_h } with g, h ∈𝔊. If this is a projective rectangle, A, B, and C are the special lines and the other lines are the ordinary lines. But L_0(𝔊K_3) is not always a projective rectangle.

Every narrow projective rectangle has the form L_0(𝔊K_3) where 𝔊 is a nontrivial group with exponent 2, and conversely. If is finite the group is ℤ_2^k with k≥1 and its parameters are (m,n)=(3,2^k+1).

This proposition includes infinite groups. First we note that every narrow projective rectangle is an L_0(𝔊K_3) where 𝔊 is a quasigroup of order greater than 1. There are three special lines, which we call A, B, and C. We label the elements of each line, except D, by a set G of labels and we define an operation on G by gh=k such that a_gc_hb_k is an ordinary line of . It is clear that this is well defined and that any two of g,h,k determine the third, so G is a quasigroup. Then is the same as L_0(𝔊K_3) except that in the projective rectangle we ignore the trivial lines of the matroid.

Now let us assume that a matroid L_0(𝔊K_3) is a projective rectangle. We prove that 𝔊 satisfies the following fundamental property: gh=ef ⟹ gf=eh. Consider the lines l_1={a_g,b_gh,c_h} and l_2={a_e,b_ef,c_f} in Axiom (A<ref>), and two other lines, l={a_g,b_gf,c_f} and l'={a_e,b_eh,c_h}. According to Axiom (A<ref>) the lines l and l' should have a common point, so b_gf=b_eh, which means gf=eh.

Any quasigroup is isotopic to a loop (a quasigroup with identity element, 1), so we may assume 𝔊 is a loop. Suppose h=e=1 in Equation (<ref>). Then g=f ⟹ gf=1; in other words, gg=1 for every element of 𝔊. Suppose g=h and e=f. Then 1=1 ⟹ ge=eg; that is, 𝔊 is commutative.

A property that characterizes a quasigroup that is isotopic to a group is the Quadrangle Criterion <cit.>, which is: a_1c_1=a_2c_2, a_1d_1=a_2d_2, b_1c_1=b_2c_2 ⟹ b_1d_1=b_2d_2. We prove the Quadrangle Criterion for 𝔊 by means of Equation (<ref>): a_1c_1=a_2c_2 ⟹ a_1a_2=c_1c_2, a_1d_1=a_2d_2 ⟹ a_1a_2=d_1d_2, b_1c_1=b_2c_2 ⟹ b_1b_2=c_1c_2. The first two lines imply that c_1c_2=d_1d_2 and combined with the third line we deduce that b_1b_2=d_1d_2, proving the Quadrangle Criterion. Hence, 𝔊 is isotopic to a group. By isotopy we may assume 𝔊 is a group, and we have seen that it is abelian and has exponent 2. If 𝔊 is finite, it is _2^k for some positive integer k as in Example <ref>. These necessary properties of 𝔊 are sufficient for L_0(𝔊K_3) to be a projective rectangle, because exponent 2 implies Axiom (A<ref>), as is easy to verify.

The geometry of a narrow projective rectangle is determined by the isotopy type of its quasigroup.
Thus, the finite such rectangles are obtained from a finite Pappian projective plane of 2-power order by the subplane construction of Section <ref> using a Fano subplane. § ORTHOGONAL ARRAYS FROM PROJECTIVE RECTANGLES A transversal design is a partition of a set _T of m(n-1) points into m special sets of size n-1 together with a family of m-subsets of _T such that each such m-set intersects each special set exactly once and each pair of points not contained in a special set lies in exactly one m-set. A projective rectangle with D deleted is exactly a transversal design with the extra partial Pasch property Axiom (A<ref>). A dual concept to transversal designs is that of orthogonal arrays; the corresponding dual to projective rectangles is orthogonal arrays with a dual property to (A<ref>). We explore that dual concept in this section.[We thank Douglas Stinson for drawing our attention to transversal designs.] An orthogonal array (OA) is a generalization of orthogonal latin squares. We adopt the notation for orthogonal arrays used in <cit.>. An N× k array with A entries from S (a set of size s) is said to be an orthogonal array, OA_λ(N,k,s,t), with s symbols, strength 0≤ t ≤ k, and index λ if every N× k subarray of A contains each tuple based on S exactly λ times as a row. We write a(r,c) for the label that appears in row r and column c. §.§ An orthogonal array from points and lines We represent a projective rectangle as an orthogonal array of points and lines. In ∖ D we have m special lines partitioning all the points, and (n-1)^2 ordinary lines. By Theorem <ref>, every ordinary line intersects every special line exactly once and every pair of points in different special lines lie in exactly one ordinary line. Each ordinary line will give a row of the orthogonal array and each special line will give a column. We label the points in each special line by the numbers 1,…,n-1 and we write a(p) for the label of the point p. The entries in a row are the labels of the points that appear in that ordinary line, arranged in the column of the special line that contains the point. Thus, each pair of labels appears once in each pair of columns. That is a 2-(n-1,m,1) orthogonal array in standard notation. In the notation used in <cit.>, it is an OA_1((n-1)^2, m, n-1,2). We formulate a special property for an orthogonal array of type OA_1((n-1)^2, m, n-1,2). (OA6) If four rows in the orthogonal array appear like the first five columns c_ij in this table, c_12 c_13 c_24 c_14 c_23 c_34 r_1 a_12 a_13 a_14 r_2 a_12 a_24 a_23 r_3 a_13 a_23 a_34 r_4 a_24 a_14 a_34 where it is possible that c_13=c_24 or c_14=c_23, then there is a sixth column that appears like c_34. (The empty cells are arbitrary.) The property (OA6) does not follow from the definition of an orthogonal array. We are not aware that it has been considered in the theory of orthogonal arrays or dually in transversal designs. Its contrary, that the sixth column of (OA6) never appears, arises (in the language of transversal designs) as the “anti-Pasch configuration” in <cit.> (whose “Pasch configuration” is slightly stricter than ours).[We are very grateful to Charles Colbourn for hunting in the literature and communicating these facts.] Let n≥ m ≥ 3. * A projective rectangle of order (m,n) gives rise to an orthogonal array OA_1((n-1)^2, m, n-1,2) with property (OA6). * An orthogonal array OA_1((n-1)^2, m, n-1,2) gives rise to a projective rectangle of order (m,n) if, and only if, it satisfies the additional property (OA6). Proof of Part (i). 
We have shown that gives rise to an orthogonal array with the stated parameters. Conversely, suppose we have an OA_1((n-1)^2, m, n-1,2). Let C be the set of m columns, let R be the set of rows, let L be the set of n-1 labels in the array, and write a(r,c) for the entry in row r, column c. We form an incidence structure whose point set is (C× L) ∪ D. The lines of this structure are special lines, of the form s_c = {(c,a) : a ∈ L }∪ D, for each c∈ C, and ordinary lines, of the form l_r = {(c,a) : c ∈ C and a= a(r,c) }, for each r∈ R. We prove this incidence structure satisfies Axioms (A<ref>)–(A<ref>) of a projective rectangle. We assumed n-1≥ m-1≥2 so in the orthogonal array there are at least two distinct labels, which we call a_1 and a_2, and at least 3 columns, of which three are c_1,c_2,c_3. There are also at least 2^3 rows. Proof of Axiom (A<ref>). We consider two points p_1=(r_1,a_1) and p_2=(r_2,a_2) where a_1=a(r_1,c_1) and a_2=a(r_2,c_2). The points belong to the same special line if and only if c_1=c_2. The special line is s_c_1. Otherwise, there is exactly one row r where the entry in column c_1 is a_1 and the entry in column c_2 is a_2. Then p_1 and p_2 belong to the ordinary line l_r. Proof of Axiom (A<ref>). Among the three pairs a(r_1,c_j), a(r_2,c_j) for j=1,2,3, only one can be the same label, a(r_1,c_j) = a(r_2,c_j), because each ordered pair of labels appears only once in the same two columns. Say a(r_1,c_1) ≠ a(r_2,c_1) and a(r_1,c_2) ≠ a(r_2,c_2). Then (c_1,a(r_1,c_1)), (c_1,a(r_2,c_1)), (c_2,a(r_1,c_2)), (c_2,a(r_2,c_2)) are four points, no three collinear. Proof of Axiom (A<ref>). The special line s_c contains at least the three points D, (c,a_1), (c,a_2). The ordinary line l_r contains the points (c_1,a(r,c_1)), (c_2,a(r,c_2)), (c_3,a(r,c_3)). Proof of Axiom (A<ref>). This follows by the definition of the incidence structure. Proof of Axiom (A<ref>). Two special lines intersect only in D. A special line s_c and an ordinary line l_r intersect only in the point (c,a(r,c)). Proof of Part (ii). We assume an orthogonal array is constructed from . Property (OA6) is the interpretation of Axiom (A<ref>) for an OA_1((n-1)^2, m, n-1,2). In Axiom (A<ref>) let l_3 and l_4 be the two lines besides l_1 and l_2. The assumption in the axiom is that points p_ij = l_i ∪ l_j exist for (i,j) = (1,2),(1,3),(2,4),(1,4),(2,3). Let s_ij be the special line that contains p_ij; we note that the special lines are distinct except that s_13 may be the same as s_24 and s_14 may be the same as s_23. In the orthogonal array derived from , the row of line l_i is r_i, the column of line s_ij is c_ij, and the label of p_ij is a(r_i,c_ij)=a(r_j,c_ij). Therefore, the array looks as in Property (OA6), except for the last column. The conclusion of Axiom (A<ref>) is that there is a point p_34 that is incident with both lines l_3 and l_4. That translates to the existence of a final column as in (OA6) with a_34 = a(p_34). Hence, Property (OA6) is satisfied by the array derived from the projective rectangle . Conversely, we prove Axiom (A<ref>) from Property (OA6). Let r_1, r_2 be the rows of the array that correspond to the lines l_1, l_2 in this axiom and let l_3,l_4 be the two other lines with corresponding rows r_3,r_4. The hypotheses of intersection imply that the diagram in Property (OA6) is satisfied, possibly except for the last column. By the assumption of Property (OA6), the final column does exist. 
This implies that l_3∩ l_4 is the point p_34 in the special line s_34 that corresponds to column c_34 and has the label a(p_34) = a_34. Therefore, the conclusion of Axiom (A<ref>) is satisfied.

§.§ An orthogonal array from points and planes

Ryser gives a nice construction of an orthogonal array from a projective plane <cit.>. We extend Ryser's ideas to construct an orthogonal array from points and planes of a projective rectangle by partitioning the ordinary points outside a given ordinary line by means of the separate planes that contain that line. The proof is based on the proof that Ryser gives for projective planes, adapted to the existence of multiple planes.

Let l be an ordinary line in a finite . The family of sets π∖ (l∪ D) for all planes π that contain l is a partition of the points in ∖ (l ∪ D) into (n-2)/(m-2) parts of m(m-2) points each. We observe that every plane in containing l also contains the special point D. If p∉l ∪ D, then by Corollary <ref> there is a unique plane on l that contains p; thus, the planes on l partition the points in ∖ (l ∪ D). The number of such planes is given by Theorem <ref> Part (<ref>). The number of parts of the resulting partition equals the number of planes that contain the line l.

Suppose that (m,n) is the order of the projective rectangle . Let l ∈ be an ordinary line and let π_1, π_2, …, π_w be all the planes in that contain l, where w=(n-2)/(m-2). Then gives rise to an orthogonal array of the form OA_w(w(m-1)^2, m, m-1, 2).

Let p_1, p_2, …, p_m be the points of l. We label the points in π_i∖ l by q_1^i, q_2^i, …, q_k^i where k=(m-1)^2 (D is one of these points) and label the lines on p_r in π_i∖ l with 1, 2, …, m-1 for each r=1,2, …, m. We write a_st^i to record the label of the line q_s^ip_t∈π_i. We claim that the matrix A_i=[a_st^i]_s,t is an orthogonal array of the form OA_1((m-1)^2,m,m-1,2). We prove this by contradiction. Suppose that there are two ordered pairs in the rows of A_i that are equal; that is, (a_s_1t_1^i,a_s_1t_2^i) =(a_s_2t_1^i,a_s_2t_2^i) with s_1 ≠ s_2. Therefore, a_s_1t_1^i=a_s_2t_1^i and a_s_1t_2^i =a_s_2t_2^i. The equality of these labels implies that the points q_s_1^i, q_s_2^i, and p_t_1 are collinear and that q_s_1^i, q_s_2^i, and p_t_2 are also collinear. Thus, each p_t_j is the unique point of l on the line q_s_1^i q_s_2^i. Therefore, p_t_1 = p_t_2, but that is impossible because t_1 ≠ t_2. Now let B = [A_1; A_2; …; A_w] be the array obtained by stacking A_1, A_2, …, A_w. This matrix is an orthogonal array of the form OA_λ(w(m-1)^2,m,m-1,2) where λ = ∑_i=1^w 1 = w. That completes the proof.

We give an example for Theorem <ref> using the projective rectangle L_2^2 depicted in Figure <ref>. For the sake of simplicity we pick the line l={a_1, b_1,c_1}. We recall that for an ordinary line in L_2^2, there are exactly λ=3 planes having that line in common. Figure <ref> shows the three planes embedded in L_2^2 with l as common line. For the first plane, let's say π_1, we distinguish the points a_1, a_g, b_1, b_g, c_1, c_g and D_1:=D. For a fixed point in l there are two lines in π_1∖ l passing through the fixed point; from the set {1,2} we assign labels to these lines. For the lines {a_1,a_g,D_1} and {a_1,b_g,c_g}, which intersect l at a_1, we assign 1 and 2 to them, respectively. We arbitrarily assign 1 and 2 to {b_1,b_g,D_1} and {a_g,b_1,c_g}, respectively, and also to {a_g,b_g,c_1} and {c_g,c_1,D_1}. With these labels we construct the first four rows of the rectangular array in Table <ref>.
The columns of the array are labeled on top with the points in the line l and the rows are labeled on the left with the points in each plane that are not in l. In this case the first four rows are labeled with the points in π_1∖ l. The entries of the rectangular array are the labels of the lines passing through the point in the column label and the point in the row label. For instance, the first entry of the first row in Table <ref> is 1, because the line passing through a_1 and a_g has label 1. The first entry of the fourth row is 1, because the line passing through a_1 and D has label 1. The second plane in Figure <ref>, π_2, has the points a_1, a_h, b_1, b_h, c_1, c_h and D_2:=D. As in π_1, we assign arbitrary labels from {1,2}. We choose 1 to be the label of {a_1,b_h,c_h}, {a_h,b_1,c_h}, and {c_1,c_h,D_2} and 2 as the label of {a_1,a_h,D_2}, {b_1,b_h,D_2}, and {a_h,b_h,c_1}. For the third plane in Figure <ref>, π_3 with points a_1, a_gh, b_1, b_gh, c_1, c_gh and D_3:=D, we also assign arbitrary labels from {1,2}. So, for example, 1 will be the label of {a_1,a_gh,D_3}, {a_gh,b_1,c_gh}, and {a_gh,b_gh,c_1} and 2 will be the label of {a_1,b_gh,c_gh}, {b_1,b_gh,D_3}, and {c_1,c_gh,D_3}. These give the orthogonal array OA_3(12,3,2,2). This is a 12 × 3 array filled with 2 symbols, such that in any 2 columns there are 4 different ordered pairs, each repeated λ=3 times.

§ THE DUAL INCIDENCE STRUCTURE

The dual structure is obtained by interchanging the roles of points and lines. It is interesting in its own right, as it connects projective rectangles with incidence geometry in a different way. The dual is essentially a net with a complete quadrangle property. Being a dual projective rectangle, it contains all the dual projective planes of the planes of the original projective rectangle.

A net is an incidence structure (,,ℐ) which consists of a set of points and a set of parallel classes _i (i ∈ an index set) of lines, such that each line is a set of points, every point belongs to exactly one line of each parallel class, and any two lines of different parallel classes have exactly one point in common. The theory of nets is extensive. It is easy to prove that every parallel class has the same number of lines and that the number of points on every line is the same. We call these points and lines ordinary. By adding a special point for each parallel class, which is defined to belong to all lines of that class and no other ordinary lines, and adding one special line that contains all the special points, we get a projectively extended net. (“Projectively” refers to the existence of the special line.) Two points might not be in any common line. They are called collinear if they are in a line. They cannot be in more than one common line.

A complete quadrangle in a net consists of 4 points, no three collinear, and 6 lines determined by them. A nearly complete quadrangle consists of the same 4 points and 5 of the 6 lines, the 6th line possibly existing or not existing. The dual of Axiom (A<ref>) is (A<ref>*) (Complete Quadrangle Property): Every nearly complete quadrangle is complete.

A projective extension of a net has the complete quadrangle property if and only if the unextended net has it. Assume a net has the complete quadrangle property and consider the cases in its extension that are not in itself. If P_1' and P_2' are special points, they are already collinear. Suppose only P_1' is special: then it is in every line of some parallel class, and that class includes a line that contains P_2'.
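The lemma above, together with the duality theorem that follows, can be tested on the dual of L_2^2. In that net the points are the ordinary lines of the rectangle and collinearity of net points means concurrence of those lines, so, for this example, the complete quadrangle property translates into the statement that among any four ordinary lines with no three concurrent it never happens that exactly five of the six pairs intersect. The brute-force sketch below is our own code with hypothetical names, not part of the paper.

```python
# Complete quadrangle property in the net dual to L_{2^2}: dual points are the
# ordinary lines of the rectangle, and dual collinearity is concurrence, so the
# property says that among four ordinary lines with no three concurrent it is
# impossible that exactly five of the six pairs intersect.  Names are ours.
from itertools import product, combinations

def ordinary_lines(k):
    G = list(product([0, 1], repeat=k))
    add = lambda g, h: tuple(x ^ y for x, y in zip(g, h))
    return [frozenset([('a', g), ('b', add(g, h)), ('c', h)])
            for g in G for h in G]

lines = ordinary_lines(2)
for quad in combinations(lines, 4):
    if any(l1 & l2 & l3 for l1, l2, l3 in combinations(quad, 3)):
        continue                                  # three concurrent lines: not a quadrangle
    meets = sum(1 for l1, l2 in combinations(quad, 2) if l1 & l2)
    assert meets != 5, "nearly complete quadrangle that is not complete"
print("complete quadrangle property verified in the dual of L_{2^2}")
```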
The dual of a projective rectangle is a projective extension of a net that has the complete quadrangle property, at least three parallel classes, and at least 2 lines in each parallel class, and vice versa. We dualize the rectangle axioms and consider how they apply to the net. * Every two distinct lines contain exactly one point in common. This is true by definition if one of the lines is the special line. It is valid in the net except when the lines are parallel. Parallel lines have a common point in the extension. * There exist four lines in the extended net with no three of them concurrent. Take the special line, three special points, and one ordinary line on each of the special points. If the three ordinary lines are concurrent, replace one of them by a parallel line. Or, take two lines from each of two parallel classes. * Every point is in at least three distinct lines. This is equivalent for an ordinary point to the existence of at least 3 parallel classes and for a special point to the existence of a parallel to each ordinary line. * There is a special line D. (A point in with D is called special. A point that is not in D and a line that is not D are called ordinary.) This is part of the definition of a projectively extended net. * Each special point belongs to exactly one line with each other point. This is part of the definition of a projectively extended net. * If two ordinary points P_1 and P_2 are collinear, then any two other points that are collinear with P_1 and P_2 through four distinct lines (i.e., there are four distinct lines P_iP_j' for i,j=1,2), are themselves collinear. It is clear that Axiom (A*<ref>) is the complete quadrangle property for the extended net, excluding the case where P_1 or P_2 is special. Lemma <ref> says that the two formulations are actually equivalent. § OPEN PROBLEMS Our work on nontrivial projective rectangles leaves many unanswered questions. Here are some to add those in the body of the paper. * All our examples of projective rectangles are substructures of Pappian projective planes that can be obtained by the subplane construction. Are there other examples? * We are ignorant of how a special line compares in its intersections with two planes π and π'. Two questions stand out. * If a plane π has an ordinary line l, there are many other planes in which l is a line. However, if l is special, i.e., l = s ∩π for a special line s, we have no idea whether even one other plane has l as a line. * We do not know whether there may be another plane π' such that s ∩π∩π' has a specific cardinality (not greater than m), what the possible values of |s ∩π∩π'| may be, whether 0 is a possible value in every nontrivial (aside from L_2^2, where it is not), or in the infinite case whether it is even possible that s ∩π' may properly contain s ∩π. * We proved the subplane construction of Section <ref> only for Pappian planes, coordinatizable by a field. * Is there an analytic proof for skew fields? * Does an analytic proof using alternative algebras succeed in planes with weaker coordinate algebras such as near fields and alternative algebras? * Is there a synthetic proof for Pappian or Desarguesian or other projective planes? * Does the construction exist in non-Desarguesian, or non-Moufang, planes? * Are all planes in a projective rectangle isomorphic? We were unable to find a proof or a counterexample. * What do the partial Desargues's theorems in Section <ref> imply about automorphisms and coordinatizations? 
* Is there a rigorous sense in which a projective rectangle is higher-dimensional, as suggested in Section <ref> and <cit.>? * If every plane in is Moufang, it has coordinates in an alternative ring. If all such rings are isomorphic, does extend to a Moufang plane with an alternative ring that extends that of the planes in ? * Given a projective rectangle, in what projective planes can it be embedded? In particular, our constructions by subplanes and harmonic extension give projective rectangles embedded in a Pappian plane but the same rectangles may possibly be isomorphically embeddable in planes that are not Pappian, not Desarguesian, maybe not even Moufang, in a nontrivial way, i.e., not by finding the Pappian plane as a subplane of a non-Pappian plane. 99 dk J. Dénes and A. D. Keedwell, Latin Squares and Their Applications. Academic Press, New York–London, 1974. dls Jeff H. Dinitz, Alan C. H. Ling, and Douglas R. Stinson, Perfect hash families from transversal designs. Australas. J. Combin. 37 (2007), 233–242. rfhc Rigoberto Flórez, Harmonic conjugation in harmonic matroids. Discrete Math. 309 (2009), 2365–2372. bgpp Rigoberto Flórez and Thomas Zaslavsky, Projective planarity of matroids of 3-nets and biased graphs. Australasian J. Combin. 77(2) (2020), 299–338. pr2 Rigoberto Flórez and Thomas Zaslavsky, Projective rectangles: Incidence graphs and higher structure. In preparation. pr3 Rigoberto Flórez and Thomas Zaslavsky, Projective rectangles: Harmonic conjugation. In preparation. Hedayat A. S. Hedayat, N. J. A. Sloane, and J. Stufken, Orthogonal Arrays, Theory and Applications. Springer-Verlag, New York, 1999. HP Daniel R. Hughes and Fred C. Piper, Projective Planes. Grad. Texts in Math., Vol. 6. Springer-Verlag, New York, 1973. MR 48 #12278. Zbl 267.50018. ldt Bernt Lindström, A Desarguesian theorem for algebraic combinatorial geometries. Combinatorica 5 (1985), no. 3, 237–239. lhc Bernt Lindström, On harmonic conjugates in full algebraic combinatorial geometries. Europ. J. Combin. 7 (1986), 259–262. Ryser H. J. Ryser, Combinatorial Mathematics. Carus Math. Monographs, No. 14. Math. Assoc. Amer., New York, 1963. vw J. H. van Lint and R. M. Wilson, A Course in Combinatorics. Second ed. Cambridge University Press, Cambridge, Eng., 2001. b1 Thomas Zaslavsky, Biased graphs. I. Bias, balance, and gains. J. Combin. Theory Ser. B 47 (1989), 32–52. b2 Thomas Zaslavsky, Biased graphs. II. The three matroids. J. Combin. Theory Ser. B 51 (1991), 46–72.
http://arxiv.org/abs/2307.04113v1
20230709080545
Mitosis Detection from Partial Annotation by Dataset Generation via Frame-Order Flipping
[ "Kazuya Nishimura", "Ami Katanaya", "Shinichiro Chuma", "Ryoma Bise" ]
cs.CV
[ "cs.CV" ]
Mitosis Detection from Partial Annotation K. Nishimura et al. Kyushu University, Fukuoka, Japan [email protected] Kyoto University, Kyoto, Japan Mitosis Detection from Partial Annotation by Dataset Generation via Frame-Order Flipping Kazuya Nishimura1 Ami Katanaya2 Shinichiro Chuma2 Ryoma Bise1 Accepted 08-Jul-2023. Received 18-Jun-2023; in original form 23-May-2023 ========================================================================================== Detection of mitosis events plays an important role in biomedical research. Deep-learning-based mitosis detection methods have achieved outstanding performance with a certain amount of labeled data. However, these methods require annotations for each imaging condition. Collecting labeled data involves time-consuming human labor. In this paper, we propose a mitosis detection method that can be trained with partially annotated sequences. The base idea is to generate a fully labeled dataset from the partial labels and train a mitosis detection model with the generated dataset. First, we generate an image pair not containing mitosis events by frame-order flipping. Then, we paste mitosis events to the image pair by alpha-blending pasting and generate a fully labeled dataset. We demonstrate the performance of our method on four datasets, and we confirm that our method outperforms other comparisons which use partially labeled sequences. Code is available at <https://github.com/naivete5656/MDPAFOF>. § INTRODUCTION Fluorescent microscopy is widely used to capture cell nuclei behavior. Mitosis detection is the task of detecting the moment of cell division from time-lapse images (the dotted circles in Fig. <ref>). Mitosis detection from fluorescent sequences is important in biological research, medical diagnosis, and drug development. Conventionally tracking-based methods <cit.> and tracking-free methods <cit.> have been proposed for mitosis detection. Recently, deep-learning-based mitosis-detection methods have achieved outstanding performance <cit.>. However, training deep-learning methods require a certain amount of annotation for each imaging condition, such as types of cells and microscopy and the density of cells. Collecting a sufficient number of labeled data covering the variability of cell type and cell density is time-consuming and labor-intensive. Unlike cell detection and segmentation, which aims to recognize objects from a single image, mitosis detection aims to identify events from time series of images. Thus, it is necessary to observe differences between multiple frames to make mitosis events annotation. Comprehensively annotating mitosis events is time-consuming, and annotators may be missed mitosis events. Thus, we must carefully review the annotations to ensure that they are comprehensive. Partial annotation has been used as a way to reduce the annotation costs of cell and object detection <cit.>. Fig. <ref> shows an example of partially annotated frames. Some mitosis events are annotated (a red-dotted circle), and others are not (light-blue-dotted circles). The annotation costs are low because the annotator only needs to plot a few mitotic positions. In addition, this style of annotation allows for missing annotations. Therefore, it would be effective for mitosis detection. Unlike supervised annotation, partial annotation can not treat unannotated areas as regions not containing mitosis events since the regions may contain mitosis events (Fig. <ref>). The regions naturally affect the training in the partial annotation setting. 
To avoid the effect of unlabeled objects in unlabeled regions, Qu et al. <cit.> proposed to use a Gaussian masked mean squared loss, which calculates the loss around the annotated regions. The loss function works in tasks in which foreground and background features have clearly different appearances, such as in cell detection. However, it does not work on mitosis detection since the appearance of several non-mitotic cells is similar to that of mitotic cells; it produces many false positives. In this paper, we propose a cell-mitosis detection method for fluorescent time-lapse images by generating a fully labeled dataset from partially annotated sequences. We then train a mitosis detection model with the generated dataset. To generate the fully labeled dataset, we should consider two problems: (1) no label indicating regions not containing mitosis cells and (2) few mitosis annotations. We can easily generate regions not containing mitotic cells by using one image twice. However, such regions do not contribute to identifying mitotic cells and non-mitotic cells since the data do not show natural cell motions. For the training to be effective, the regions not containing mitotic cells should show the natural movements of cells. To generate such regions, we propose frame-order flipping, which simply flips the frame order of a consecutive frame pair. As shown in the white rectangles in Fig. <ref>, we can convert a mitosis event to a cell fusion by this flipping operation. Hence, the flipped pair is a region not containing mitosis events. Even though we flipped the frame order, the non-mitotic cells still have natural time-series motion, as shown in the yellow rectangles in Fig. <ref>. In addition, we can make the most of a few partial annotations by using copy-and-paste-based techniques. Unlike regular copy-and-paste augmentation <cit.> for supervised augmentation of instance segmentation, which relies on object mask annotations, we only have point-level annotations. Thus, we propose to use an alpha-blending pasting technique which naturally blends two images. Experiments conducted on four types of fluorescent sequences demonstrate that the proposed method outperforms other methods which use partial labels. Related work: Some methods use partially labeled data to train a model <cit.>. Qu <cit.> proposed a Gaussian masked mean squared loss, which calculates the loss around the annotated areas. To more accurately identify negative and positive samples, positive unlabeled learning has been used for object detection <cit.>. These methods apply positive unlabeled learning to candidates detected using partial annotation, to identify whether each candidate is a labeled object or background. However, positive unlabeled learning requires a positive class prior, and since the candidates detected with partial annotation include many false positives and the appearances of mitosis events and the background are similar in the mitosis detection task, this prior is difficult to estimate. These methods therefore do not work well on mitosis detection. § METHOD: MITOSIS DETECTION WITH PARTIAL LABELS Our method aims to detect the coordinates and timing (t, x, y) of mitosis events from fluorescent sequences. For training, we use time-lapse images ℐ = {I_t}_t=1^T and partial labels (a set of annotated mitosis cells). Here, I_t denotes an image at frame t, and T is the total number of frames.
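For concreteness, the inputs just introduced can be pictured as an image stack plus a short list of annotated event coordinates. The sketch below is purely illustrative; the shapes, values, and variable names are our assumptions rather than the authors' actual data format.

import numpy as np

# Illustrative shapes only; real frames are e.g. 1024 x 1024 pixels.
T, H, W = 42, 256, 256

# The time-lapse sequence I_1, ..., I_T as a single array.
images = np.zeros((T, H, W), dtype=np.float32)

# Partial labels: a short list of annotated mitosis events (t, x, y).
# Unannotated mitosis events may exist anywhere else in the sequence.
partial_labels = [(12, 141, 208), (27, 101, 55)]   # hypothetical annotations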
Our method generates a fully labeled dataset 𝒟_p= { (I'_t-1, I'_t), 𝒫_t' }^T-1_t=1 from time-lapse images ℐ and partial labels and then trains a mitosis detection model f_θ with the generated dataset. Here, I'_t is a generated image, and 𝒫_t' is a set of mitotic coordinates contained in (I'_t-1, I'_t). Since our method trains the network with partial labels, it can eliminate the costs of checking for missed annotations. §.§ Labeled dataset generation Fig. <ref> shows an overview of our dataset generation. We randomly pick a pair of consecutive frames (I_t-1, I_t) from time-lapse images ℐ. Since the pair may contain unannotated mitosis events, we forcibly convert the pair into a negative pair (i.e., a pair which does not contain mitosis events) by using frame-order flipping. Next, we paste mitosis events to a generated pair using alpha-blending pasting and obtain a generated pair (I'_t-1, I'_t). Since we know the pasted location, we can obtain the mitosis locations 𝒫'_t of the generated pair. Negative pair generation with frame-order flipping: In this step, we generate a pair not containing mitotic cells by using a simple augmentation-based frame-order flipping. Fig. <ref> shows an example of the pair images (I_t-1, I_t). The pair may contain mitosis events. If we assume that the pair does not contain mitotic cells, it affects the training of the mitosis detection model f_θ. To prevent the pair from containing mitosis events, we flip the frame order and treat the flipped pair (I_t, I_t-1) as a pair of negative. Since mitosis is the event that a cell divides into two daughter cells, the mitosis event is transformed into an event in which two cells fuse into one by flipping the order (Fig. <ref>). The flipped event can treat as a non-mitotic event. Note that the motivation behind using frame flipping is to be able to utilize pixels showing the motions of non-mitotic cells negatives by transforming mitosis into other events. Even if the order is flipped, the movements of the non-mitotic cell are still a non-mitotic cell feature, and we consider that these cells are effective for the training of the negative label. Mitosis label utilization with alpha-blending pasting: Next, we paste mitosis events to the flipped pair by using copy-and-paste techniques in order to utilize the positive labels effectively. Copy and paste augmentation has been used for supervised augmentation of instance segmentation <cit.>. Unlike instance segmentation with object masks, we only have locations (t, x, y). A simple solution is to crop images around the mitosis position and copy and paste them to the target image, like in CutMix <cit.>. However, the cropped image naturally contains surrounding objects, and the generated image appears unnatural. Unnatural images cause the detection network to make biased predictions and reduce generalization performance. To avoid this problem, we propose alpha-blending pasting with a Gaussian blending mask. We blend two images by leaving the pixel value in the center and blurring the vicinity of the edge of the image. First, we crop the image around the positive annotations and obtain a set of cropped pair 𝒞 = {(C_t-1^i, C_t^i )}^N_i=0 and initialize (I'_t-1, I'_t)=(I_t, I_t-1) and 𝒫_t'= {}. Here, N is the total number of partial annotations, while C_t-1^i and C_t^i are images before and after the mitosis of the i-th annotation (Fig. <ref>). Define I_t'(l⃗^j), I_t-1'(l⃗^j) as a cropped pair image at the j-th random spatial location l⃗^j. 
We crop each image centered at l⃗^j to a size that is the same as that of C_t^i. We update the randomly selected patch I_t'(l⃗^j), I_t-1'(l⃗^j) by blending a randomly selected cropped pair (C_t-1^i, C_t^i) with the following formula: I_t'(l⃗^j) = (1-α) ⊙I_t'(l⃗^j) + α⊙C_t^i, where α is a Gaussian blending mask (Fig. <ref>); the analogous update is applied to I_t-1'(l⃗^j) with C_t-1^i. We generate the blending mask by blurring a binary mask around the annotation with a Gaussian filter. We use a random sigma value for the Gaussian filter. Then, we add the paste location l⃗^j to the set 𝒫_t'. We repeat this process k times, with k chosen at random. §.§ Mitosis detection with generated dataset We modified a heatmap-based cell detection method <cit.> to work as a mitosis detection method. Fig. <ref> is an illustration of our mitosis detection model. Given two consecutive frames (I'_t-1, I'_t), the network outputs a heatmap Ĥ_t. We treat the channel axis as the time axis for the input. The first channel is I'_t-1, and the second is I'_t. First, we generate individual heatmaps H_t^j for each pasted coordinate l⃗^j = (l^j_x, l^j_y). H_t^j is defined as H_t^j(p_x, p_y) = exp( -((l_x^j - p_x)^2 + (l_y^j - p_y)^2)/σ^2 ), where p_x and p_y are the coordinates of H_t^j and σ is a hyperparameter that controls the spread of the peak. The ground truth of the heatmap at t is generated by taking the pixel-wise maximum over the individual heatmaps, H_t = max_j (H^j_t) (H_t in Fig. <ref>). The network is trained with the mean squared error loss between the ground truth H_t and the output of the network Ĥ_t. We can find the mitosis positions by finding local maxima of the heatmap. § EXPERIMENTS Dataset: We evaluated our method on four datasets. The first set is HeLa <cit.>, in which live cell images of HeLa cells expressing H2B-GFP were captured with 1100 × 700 resolution <cit.> [We used the publicly available CTC dataset <http://celltrackingchallenge.net/>. We only use HeLa since the number of mitosis events in other cells is small.]. Each sequence contains 92 fluorescent images with 141 mitosis events on average. The second set is ES, in which live cell images of mouse embryonic stem cells expressing H2B-mCherry were captured with 1024 × 1024 resolution. Each sequence contains 41 fluorescent images with 33 mitosis events on average. The third set is ES-D, in which mouse embryonic stem cells expressing H2B-mCherry were induced to differentiate and live cell images were captured. Each sequence contains 61 fluorescent images with 18 mitosis events on average. The fourth set is Fib, in which live cell images of mouse fibroblast cells expressing H2B-mCherry were captured with 1024 × 1024 resolution. Each sequence contains 42 fluorescent images with 11 mitosis events on average. Each dataset consists of four sequences of images. We performed four-fold cross-validation in which two sequences were used as training data, one as validation data, and one as test data. As shown in Fig. <ref>, the appearance and density are different depending on the dataset. Implementation details: We implemented our method within the PyTorch framework <cit.> and used a UNet-based architecture <cit.> for the mitosis-detection network. The model was trained with the Adam optimizer with a learning rate of 1e-3. σ, which controls the spread of the heatmap, was 6. The cropping size of the positive annotations was 40 pixels. We randomly change the number of pasting operations k between 1 and 10. We used random flipping, random cropping, and brightness change for the augmentation.
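To make the dataset-generation steps described above concrete, here is a minimal NumPy/SciPy sketch of frame-order flipping, alpha-blending pasting, and heatmap-target generation. The function names, the shape of the blending mask, and the omission of boundary handling are our assumptions; this is not the authors' released code.

import numpy as np
from scipy.ndimage import gaussian_filter

def flip_frame_order(img_prev, img_next):
    """Frame-order flipping: (I_{t-1}, I_t) -> (I_t, I_{t-1}).
    Any (possibly unannotated) mitosis in the pair now looks like a fusion,
    so the flipped pair can be treated as containing no mitosis events."""
    return img_next.copy(), img_prev.copy()

def alpha_blend_paste(pair, crop_pair, center, sigma):
    """Paste a cropped mitosis pair onto both frames with a Gaussian blending mask.
    Boundary checks are omitted for brevity."""
    (img_a, img_b), (crop_a, crop_b) = pair, crop_pair
    h, w = crop_a.shape
    y0, x0 = center[0] - h // 2, center[1] - w // 2
    # Blending mask: a binary mask blurred with a Gaussian filter (random sigma).
    mask = np.zeros((h, w), dtype=np.float32)
    mask[h // 4: 3 * h // 4, w // 4: 3 * w // 4] = 1.0
    alpha = gaussian_filter(mask, sigma)
    alpha /= alpha.max() + 1e-8
    for img, crop in ((img_a, crop_a), (img_b, crop_b)):
        region = img[y0:y0 + h, x0:x0 + w]
        img[y0:y0 + h, x0:x0 + w] = (1 - alpha) * region + alpha * crop
    return img_a, img_b

def heatmap_target(shape, centers, sigma=6.0):
    """Ground-truth heatmap: pixel-wise max over one Gaussian per pasted event."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    maps = [np.exp(-((cx - xx) ** 2 + (cy - yy) ** 2) / sigma ** 2)
            for (cy, cx) in centers] or [np.zeros(shape)]
    return np.max(maps, axis=0)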
Evaluation metrics: We evaluated our method using the F1 score <cit.>, which is widely used in mitosis detection. Given ground-truth coordinates and detected coordinates, we performed one-by-one matching. If the distance of the matched pair was within spatially 15 pixels and temporally 6, we associated the closest coordinate pairs. We treated the matched pair as true positives (TP), unassociated coordinates as false positives (FP), and unassociated ground-truth coordinates as false negatives (FN). Comparisons: We conducted four comparisons that involved training the model with partially labeled data. For the first method, we trained the model by treating unlabeled pixels as non-mitosis ones (Baseline <cit.>). The second method used the Gaussian masked loss (GM <cit.>). The masked loss was calculated on the masked pixels around the positive-label pixels. Thus, the method ignored unlabeled pixels. The third method used positive unlabeled learning to identify mitosis from candidates obtained by the detection model trained with the masked loss (PU <cit.>). The fourth method generated pseudo-labels from the results of positive unlabeled learning and retrained the detection model with the pseudo-label (PU-I <cit.>). In Table <ref>, we compared our method with previous methods in one and five-shot settings. We used N samples per sequence in the N-shot settings. For a robust comparison, we sampled one or five mitosis annotations under five seed conditions and took the average. Overall, our method outperformed all compared methods in F1 metric. GM <cit.>, PU <cit.>, and PU-I <cit.> are designed for detecting objects against simple backgrounds. Therefore, these methods are not suited to a mitosis detection task and are inferior to the baseline. The baseline <cit.> treats unlabeled pixels as non-mitosis cell pixels. In the partially labeled setting, unlabeled pixels contain unannotated mitosis events, and unannotated mitosis affects performance. Unlike cell detection, mitosis detection requires identifying mitosis events from various non-mitotic cell motions, including motions that appear mitotic appearances. Although GM <cit.> can ignore unlabeled mitosis pixels with the masked loss, it is difficult to identify such non-mitosis motions. Therefore, GM estimates produce many false positives. PU <cit.> uses positive unlabeled learning to eliminate false positives from candidates obtained from the detection results with partial labels. However, positive unlabeled learning requires a positive prior in the candidates and a certain amount of randomly sampled positive samples. Since the candidates contain many false positives, the positive prior is difficult to estimate. In addition, there is no guarantee that positive unlabeled learning can work correctly with the selected N-shot annotations. Moreover, since positive unlabeled learning does not work in the mitosis detection task, PU-I <cit.> can not select accurate pseudo labels. Unlike these methods, our method can estimate mitosis events accurately. Since our method generates a fully labeled dataset from a partial label, it effectively uses a few partial annotations. Effectiveness of each module: We performed an ablation study on the HeLa dataset to investigate the effectiveness of the proposed module. We used random augmentation (i.e., random elastic transformation <cit.>, brightness change, and gaussian noise) instead of using frame-order flipping (FOF). We generated I_t^aug by augmenting I_t and input the pair (I_t, I_t^aug) to the network. 
In the w/o ABP setting, we directly pasted cropped images on the target image as in CutMix <cit.>. Table <ref> demonstrates that the proposed modules improve mitosis detection performance. Fig. <ref> shows examples of the estimation results for each condition. Without the FOF setting, the detection model estimates a high value for all moving cells, leading to over-detection. Without the ABP setting, the detection model overfits the directly pasted image. The directly pasted image tends to include unnatural boundaries on the edge, leading to missed detections in real images. Robustness against missing annotations: We confirmed the robustness of the proposed method against missing annotations on the ES dataset. We changed the missing annotation rate from 0% to 30%. A comparison with the supervised method in terms of F1-score is shown in Fig. <ref>. The performance of the supervised method deteriorates as the percentage of missing labels increases, whereas the performance of the proposed method remains steady. Since our method flips the frame order, we can avoid the effects of missing annotations. Appearance of generated dataset: Fig. <ref> shows an example of the generated image pair. The cropped mitosis image pairs were pasted on the red-dotted circle. It can be seen that the borders of the original image and the pasted image have been synthesized very naturally. § CONCLUSION We proposed a mitosis detection method using partially labeled sequences with frame-order flipping and alpha-blending pasting. Our frame-order flipping transforms unlabeled data into non-mitosis labeled data through a simple flipping operation. Moreover, we generate various positive labels with a few positive labels by using alpha-blending pasting. Unlike directly using copy-and-paste, our method generates a natural image. Experiments demonstrated that our method outperforms other methods that use partially annotated sequences on four fluorescent microscopy images. Acknowledgements: This work was supported by JSPS KAKENHI Grant Number JP21J21810 and JST ACT-X Grant Number JPMJAX21AK, Japan. splncs04
http://arxiv.org/abs/2307.06191v1
20230712143036
The measurement postulates of quantum mechanics are not redundant
[ "Adrian Kent" ]
quant-ph
[ "quant-ph" ]
[email protected] Centre for Quantum Information and Foundations, DAMTP, Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge, CB3 0WA, U.K. Perimeter Institute for Theoretical Physics, 31 Caroline Street North, Waterloo, ON N2L 2Y5, Canada. Masanes, Galley and Müller <cit.> argue that the measurement postulates of non-relativistic quantum mechanics follow from the structural postulates together with an assumption they call the “possibility of state estimation”. Their argument also relies on what they term a “theory-independent characterization of measurements for single and multipartite systems”. We refute their conclusion, giving explicit examples of non-quantum measurement and state update rules that satisfy all their assumptions. We also show that their “possibility of state estimation” assumption is neither necessary nor sufficient to ensure a sensible notion of state estimation within a theory whose states are described by the quantum formalism. We further show their purportedly “theory-independent” characterization assumes several properties of quantum measurements that exclude plausible alternative types of measurement. We illustrate all these points with specific alternative measurement postulates and post-measurement state update rules. We conclude that, contrary to some folklore, quantum mechanics is by no means an island in theory-space. It can consistently be extended by rules for obtaining information about quantum states other than via POVMs. Whether such rules are realised in nature, for example in linking quantum theory and gravity, is an empirical question that cannot be resolved by theoretical analysis alone. The measurement postulates of quantum mechanics are not redundant Adrian Kent July 2023 ================================================================= § INTRODUCTION No-go theorems ruling out some types of extension or alternative to quantum mechanics have played a crucial role in advancing our understanding of fundamental physics. In particular, they help delineate the scope for new solutions to two of the deepest problems in contemporary physics, the quantum measurement problem and the unification of quantum theory and gravity. Alas, “zombie theorems” have also propagated. Von Neumann famously argued that “an introduction of hidden parameters is certainly not possible without a basic change in the present [quantum] theory”, and claimed to show that “the present system of quantum mechanics would have to be objectively false, in order that another description of the elementary processes than the statistical one be possible” (pp. 210 and 325 of <cit.>). As Bell pointed out <cit.>, von Neumann's argument fails, and indeed his conclusion was explicitly refuted by Bohm <cit.>, following the earlier ideas of de Broglie <cit.>. Von Neumann assumed that the linearity of expectation values of combinations of observables that holds in quantum theory should also hold for states defined by hidden variables, which produce deterministic outcomes for the measurement of any observable. This assumption does indeed exclude the possibility of such states, but it is quite unreasonable. Yet, Bell noted <cit.>, false impossibility “proofs” continued to be proposed at least as late as 1978 <cit.>.
Another example is Eppley and Hannah's claim <cit.> that a contradiction would arise in any dynamical theory in which a classical gravitational field interacts with quantum matter. In their words … we show that if a gravitational wave of arbitrarily small momentum can be used to make a position measurement on a quantum particle, i.e., to “collapse the wave function into an eigenstate of position,” then the uncertainty principle is violated. If the interaction does not result in collapse of the wave function, it is then possible to distinguish experimentally between superposition states and eigenstates. We show that this ability allows one to send observable signals faster than c when applied to a state consisting of two spatially separated particles with correlated spins. This has been challenged on a variety of grounds <cit.>, among them that Eppley-Hannah's discussion of quantum measurements on entangled systems is simply incorrect <cit.>. In fact, a simple model refutes Eppley-Hannah's claim that superluminal signalling necessarily follows from their assumption <cit.>. Confirmation bias seems to be a factor in these and other fallacious arguments and misrepresentations. Many theorists believe that quantum theory is so elegantly and delicately coherent that any alteration must necessarily be inconsistent. Many also believe there is no alternative to a quantum theory of gravity. Purported proofs of these hunches are often uncritically welcomed, since they justify doing what many theorists want to do anyway, which is not only to focus on quantum theory and quantum gravity but also to dismiss alternatives and concerns without further discussion. The former is a perfectly defensible theoretical choice, but the latter is not. We should be alert to our biases and learn from the history of misclaims. No-go theorems purporting to exclude large classes of non-quantum or post-quantum theories need careful analysis. This paper looks at interesting recent work <cit.> by Masanes, Galley and Müller (MGM) on the logical relationship between the postulates of quantum mechanics. MGM claim that the measurement postulates of quantum mechanics are “operationally redundant”. This claim, in the title and elsewhere, appears to suggest that the measurement postulates can be derived from the other postulates of quantum mechanics. In fact, though, even MGM recognize this is not true. They appeal to an additional strong assumption, which they call “the possibility of state estimation”. However, this terminology too is misleading, as we will explain, in that it neither characterizes the possibility of state estimation for, nor gives a natural constraint on, extensions of quantum mechanics. The qualification operationally redundant also needs careful discussion. At first reading, one might perhaps take it to mean something like “redundant if we treat preparation and measurement devices as black boxes with inputs and outputs” or “redundant within the Copenhagen framework” or even “redundant within any sensible view of quantum mechanics”. MGM say they “… take an operational approach, with the notions of measurement and outcome probability being primitive elements of the theory, but without imposing any particular structure on them.” In fact, though, their approach involves many assumptions about the form and properties of measurements, including some that are not explicitly stated. Several of these are not natural constraints on extensions of quantum mechanics, from an operational perspective or otherwise. 
Even allowing for all their assumptions, MGM's claim to derive the quantum measurement postulates is incorrect. We refute it by giving alternative measurement postulates that have all the properties MGM require but that do not satisfy the quantum measurement postulate or state-update rule. The key insight underlying our alternative postulates is that quantum mechanics can consistently be extended by hypothetical measurement devices that give complete information about the state of a subsystem <cit.>. It is consistent to postulate that these hypothetical measurement devices leave the state unaltered; it is also consistent to postulate that they alter the state via a map that depends on the measurement outcome, which leads to consistent non-linear versions of quantum theory. These results were first proven in Ref. <cit.>. Further examples and applications of measurement devices that give partial and/or stochastic information were given in Ref. <cit.>. These examples were given in the context of relativistic quantum mechanics. A fortiori, they also define consistent alternatives to, or extensions of, non-relativistic quantum mechanics, which is the focus of MGM's discussion. We give several more examples of theoretically interesting hypothetical devices here. These give partial information about the quantum state of a subsystem. They can thus be constructed from devices that give complete information about the quantum state, together with suitable classical post-processing devices and randomness generators. They are of potentially independent interest, since they suggest other ways in which nature might allow measurements that go beyond quantum measurements, some of which might possibly arise in theories that combine quantum theory with classical and/or other degrees of freedom. We show that some of these devices define alternative measurement postulates that refute MGM's purported derivations of the quantum measurement postulates. We also discuss the implications of others for MGM's approach. These latter devices define measurement postulates do not satisfy all of MGM's assumptions and so do not provide further direct counterexamples to MGM's purported derivation. However, they do define further interesting and consistent extensions of quantum mechanics, which MGM suggest – in the title and elsewhere in Ref. <cit.>– should not exist. They highlight (i) some tacit assumptions about possible measurement rules that MGM make and that seem hard to justify, (ii) that some of their explicit assumptions are significantly and unnecessarily restrictive, and (iii) weaknesses in some of their justifications. In particular, they highlight the problems mentioned above with MGM's (so-called) “possibility of state estimation” assumption: namely, that it is neither necessary nor sufficient to establish the possibility of state estimation. In short, they highlight that MGM's definition of a measurement postulate excludes alternative measurement postulates that seem natural, interesting and even potentially physically relevant. This suggests that it is unlikely to be possible to justify any definition similar to MGM's that would rescue their intended results. The measurement postulates of quantum theory are not redundant, but an essential part of the theory's definition. Whether they describe the only way to obtain information from quantum states is an open empirical question. 
We should keep an open mind and investigate it in untested regimes, for example where delocalized mesoscopic masses have measurable gravitational effects (see e.g. <cit.>). § MGM'S CHARACTERISATION OF QUANTUM MECHANICS §.§ Assumptions MGM do not make We first note assumptions that might be thought necessary (if not necessarily sufficient) to characterise quantum mechanics but that MGM do not make. §.§.§ MGM do not assume no-signalling MGM consider only non-relativistic quantum mechanics. In particular, the relativistic no-signalling principle plays no role in their discussion. Nor does the quantum no-signalling principle, which ensures that standard quantum measurements on separated entangled systems do not give a signalling mechanism. In fact, MGM assume nothing at all about measurements on an entangled subsystem. We discuss these issues further below, following MGM in focussing on non-relativistic quantum mechanics. §.§.§ MGM do not assume measurements are repeatable Some results characterising quantum measurements assume some form of repeatability, for example that successive measurements should have the same outcome when there is no intervening evolution. This holds for projective measurements but not for general POVMs. MGM's measurement postulates cover general quantum measurements, and so they do not assume this version of repeatability. In fact, they make no explicit postulate about successive measurements. In particular, they do not postulate that two or more measurements applied in sequence to a quantum system can be considered as a single measurement on the system. §.§ MGM's postulates MGM characterise non-relativistic quantum mechanics by three structural postulates together with two measurement postulates. Postulate (states). To every physical system there corresponds a complex and separable Hilbert space ℂ^d, and the pure states of the system are the rays of P ℂ^d. The rays are equivalence classes under non-zero complex multiplication. Following MGM we represent these by normalised states ψ∈ℂ^d, and use the notation ℂ^d to represent finite-dimensional Hilbert spaces and countably infinite-dimensional Hilbert spaces (denoted by d= ∞). Postulate (transformations). The reversible transformations (for example, possible time evolutions) of pure states of ℂ^d are the unitary transformations ψ→ Uψ with U ∈ U(d). Postulate (composite systems). The joint pure states of systems ℂ^a and ℂ^b are the rays of the tensor product Hilbert space ℂ^a ⊗ℂ^b. Postulate (measurement). Each measurement outcome of system ℂ^d is represented by a linear operator Q on ℂ^d satisfying 0 ≤ Q ≤ I, where I is the identity. The probability of outcome Q on state ψ∈ℂ^d is given by P(Q|ψ) = ⟨ψ | Q | ψ⟩ . A (full) measurement is represented by the operators corresponding to its outcomes Q_1,…,Q_n, which must satisfy the normalization condition ∑_i=1^n Q_i = I . MGM do not state whether n may be infinite. We will return to this point later. Postulate (post-measurement state-update). Each outcome is represented by a completely-positive linear map Λ related to the operator Q via tr(Λ(|ψ⟩⟨ψ|)) = ⟨ψ | Q | ψ⟩, for all ψ . The post-measurement state after outcome Λ is ρ = Λ(|ψ⟩⟨ψ|)/tr(Λ(|ψ⟩⟨ψ|)) . A (full) measurement is represented by the maps corresponding to its outcomes Λ_1, …, Λ_n whose sum ∑_i=1^n Λ_i is trace-preserving. After presenting these postulates, MGM state they will “prove that the “measurement” and “post-measurement state-update” postulates are a consequence of the first three postulates.” (p. 2 of Ref. 
<cit.>) Note that neither the term “operationally” nor any other qualification is mentioned at this point. This illustrates how hard it can be to avoid unintentionally overstating limited technical no-go results. It is important to be clear that MGM actually argue that the measurement and state-update postulates follow from the three structural postulates together with other strong assumptions, which we review below. In other words, MGM do not even try to prove what the quote above claims. § MGM'S CHARACTERISATION OF A MEASUREMENT POSTULATE MGM give what they describe as a “[f]ormalism for any alternative measurement postulate” based on “a theory-independent characterization of measurements for single and multipartite systems”, on pp. 3-4 of Ref. <cit.>. Earlier, on p.2, they make some comments on the structure of mixed states in quantum theory and hypothetical alternative theories. We address these first, since they are important for our discussion. §.§ MGM and the role of mixed states §.§.§ Proper mixed states MGM use “mixed states” to refer to what are generally called “proper mixed states”, that is, probabilistic mixtures of pure states. More precisely, MGM say a mixed state “is an equivalence class of indistinguishable ensembles, and an ensemble ( ψ_r , p_r ) is a probability distribution over pure states.” Their notation is ambiguous, but we will generally assume it includes both finite and countably infinite ensembles. Where we restrict to finite ensembles, or allow uncountably infinite ensembles that have a well-defined quantum density matrix, we will state this explicitly. It will not matter for our counterexamples to MGM's purported derivation of the quantum measurement postulates whether the ensembles are restricted to be finite or countably infinite. MGM note that mixed states are not mentioned in the standard postulates of quantum mechanics. Indeed, a mixed state is not normally considered to be a fundamental notion in quantum mechanics. We normally consider ensembles in contexts where our knowledge of the true state is imperfect: some other agent, or nature, has prepared a specific pure state ψ_r drawn from the ensemble, but we know only the ensemble ( ψ_r , p_r ). In principle, the postulates of quantum mechanics can be applied directly to ensembles without referring to density matrices: if ( ψ_r , p_r) undergoes a time evolution U, the resulting ensemble is (U ψ_r , p_r ), and so on. In fact, of course, quantum mechanics allows a much simpler treatment, in which the ensemble ( ψ_r , p_r ) is represented by the density matrix ρ = ∑_r p_r |ψ_r ⟩⟨ψ_r | . A time evolution U produces U ρ U^† and a measurement with outcome associated to the operator Q and completely positive linear map Λ has probability tr ( Q ρ ) and post-measurement density matrix ρ' = Λ ( ρ ) / tr ( Λ ( ρ ) ) . Because the density matrix ρ contains all the information about the possible outcomes and probabilities of any sequence of measurements on the ensemble, two ensembles with the same density matrix are indistinguishable, and hence belong to the same equivalence class. Hence in quantum mechanics the equivalence class of the ensemble ( ψ_r , p_r ) is represented by the density matrix (<ref>). As MGM stress, this follows from the measurement postulates of quantum mechanics. As they note, different measurement postulates could imply different equivalence relations and hence different equivalence classes of ensembles and a different set of mixed states.
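As a small numerical illustration of this point, assuming only the standard quantum measurement rules, two different ensembles with the same density matrix assign identical probabilities to every POVM effect. The sketch below is ours, with the particular ensembles and the random effect chosen purely for illustration.

import numpy as np

def density_matrix(ensemble):
    """rho = sum_r p_r |psi_r><psi_r| for an ensemble [(p_r, psi_r), ...]."""
    return sum(p * np.outer(psi, psi.conj()) for p, psi in ensemble)

# Two different qubit ensembles with the same (maximally mixed) density matrix:
e0, e1 = np.array([1, 0], complex), np.array([0, 1], complex)
plus, minus = (e0 + e1) / np.sqrt(2), (e0 - e1) / np.sqrt(2)
ensemble_a = [(0.5, e0), (0.5, e1)]
ensemble_b = [(0.5, plus), (0.5, minus)]

rho_a, rho_b = density_matrix(ensemble_a), density_matrix(ensemble_b)
assert np.allclose(rho_a, rho_b)

# Any POVM effect 0 <= Q <= I then assigns both ensembles the same probability:
rng = np.random.default_rng(0)
m = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
Q = m.conj().T @ m
Q = Q / np.linalg.eigvalsh(Q).max()          # rescale so that Q <= I
p_a = sum(p * np.vdot(psi, Q @ psi).real for p, psi in ensemble_a)
p_b = sum(p * np.vdot(psi, Q @ psi).real for p, psi in ensemble_b)
assert np.isclose(p_a, np.trace(Q @ rho_a).real) and np.isclose(p_a, p_b)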
MGM thus do not assume that proper mixed states must necessarily be represented in the form (<ref>). We would argue that in fact one need not make any assumption about measurements on proper mixed states, since measurement postulates (quantum or otherwise) can be applied directly to ensembles. The action of a measurement on an ensemble is determined by its action on the pure states in the ensemble, and so we only need postulates that determine the outcome and effect of measurements on pure states. For the same reason, an open-minded analysis of possible non-quantum measurement postulates need not – indeed, we would argue, should not – assume there must be any non-trivial equivalence relation between ensembles. An alternative to or extension of quantum theory can still be well-defined and interesting even if it implies no ensemble equivalences. After all, MGM aim (and claim) to show that no alternative measurement postulates exist. Whether some alternative measurement postulates require less convenient calculations than quantum theory is a separate (and secondary) question. We would also argue that, whereas a pure state is a fundamental notion in quantum theory, an ensemble is not. On the standard view of quantum theory underlying any ensemble description of a physical system there always is an underlying pure state of a suitably chosen larger system (for example, the universe, although smaller systems may suffice). [We do not mean to exclude the possibility that there is not (or not always) a pure state of the universe. For example, the most complete cosmological theory possible might use a density matrix to define the initial conditions. But an initial state density matrix is not associated with a specific ensemble, so MGM's discussion of ensembles would not directly apply. In particular their “possibility of state estimation” assumption would have different implications if it applied to density matrices rather than ensembles. In any case, this possibility is not discussed by MGM, so we do not pursue it here. ] MGM take a different view of the status of proper mixed states defined by ensembles. These play a crucial role in their assumption of the “possibility of state estimation”, which is meant to hold for ensembles as well as pure states. This postulate requires that ensembles of states in ℂ^d should be characterized by the outcome probabilities of some finite set of measurements. Since infinitely many parameters are needed to characterise the set of ensembles of states, this implies that there are equivalence classes whose ensembles require infinitely many parameters to characterise. As we discuss below, that excludes otherwise interesting and consistent alternative measurement postulates §.§.§ Improper mixed states MGM's discussion of mixed states does not apply to the so-called “improper mixed states” that arise from entanglement with other systems.[The terminology is confusing. So-called improper mixed states are fundamental in quantum theory, while so-called proper mixed states are either improper mixed states in a suitably enlarged Hilbert space or secondardy quantities defined by combining quantum theory and classical probabilities. However, the terminology has unfortunately become standard, so we retain it here.] This is a major gap in their analysis. Their states and composite systems postulates imply that the states of n systems are represented by normalised states ψ∈ℂ^d_1⊗…⊗ℂ^d_n. 
Their transformations postulate implies that, even if the state is initially a product state, it can (and generically will) evolve into an entangled state. In particular, the reduced density matrix ρ_1 = _d_2 … d_n ( |ψ⟩⟨ψ| ) ∈ O ( ℂ^d_1 ) of system 1 may (and generically will) not represent a pure state in ℂ^d_1. So, pace MGM, a discussion of quantum measurement postulates, or alternative measurement postulates in the quantum framework, necessarily has to consider measurements on improper mixed states. The alternative measurement postulates we set out below are explicitly defined on improper mixed states, mostly via the reduced density matrix (<ref>). §.§.§ Distinguishing proper and improper mixtures At first sight, the distinction between proper and improper mixed states might seem hard to maintain. If is entangled then, from the perspective of an observer who only has access to system 1, measurements on systems 2, … , n generally replace ψ by an ensemble of states corresponding to the possible measurement outcomes, with the corresponding outcome probabilities. If ψ is an entangled state of the first system with the rest, it has Schmidt decomposition ψ = ∑_i=1^k ( p_i )^1/2ϕ_i ⊗χ_i , where k>1, the p_i >0, ∑_i=1^k p_i =1, the states ϕ_i ∈ℂ^d_1 are orthonormal, and the states χ_i ∈ℂ^d_2⊗…⊗ℂ^d_n are orthonormal. A standard quantum projective measurement onto a basis of ℂ^d_2⊗…⊗ℂ^d_n that includes the χ_i then results in outcome i and the post-measurement state ψ'_i = ϕ_i ⊗χ_i with probability p_i. For an observer who has access only to system 1 and is ignorant of the measurement outcome, the state of system 1 before the measurement is described by the reduced density matrix (<ref>) representing the improper mixed state, i.e., by ρ_1 = _d_2 … d_n ( |ψ⟩⟨ψ| ) = ∑_i=1^k p_i |ϕ_i ⟩⟨ϕ_i| . For the same observer, the state of system 1 after the measurement is described by the ensemble ( ϕ_i , p_i ), a proper mixed state also represented by the density matrix on the right hand side of Eqn. (<ref>). This locally imperceptible transition between improper and proper might suggest they should be treated as equivalent when considering possible measurement postulates. In fact, though, the true post-measurement state of the n systems is a pure state of the form ϕ_i ⊗χ_i , where i is the measurement outcome. No physical principle in non-relativistic quantum mechanics precludes an observer accessing system 1 from being immediately aware of the outcome of measurements on the other systems: information can be communicated instantaneously in non-relativistic quantum mechanics. So we can consistently maintain the distinction between the pre-measurement improper mixed state given by the reduced density matrix (<ref>) and the post-measurement state. It is only for observers who lack available information that the latter is described by an ensemble whose proper mixed state has density matrix given by the right hand side of Eqn. (<ref>). Although this goes beyond the scope of MGM's discussion, it is important to emphasize this distinction can also consistently be maintained in relativistic quantum mechanics when the systems are spatially separated, by defining the reduced state so as to allow for quantum measurements within, but not outside, the past light cone <cit.>. In summary, quantum or alternative measurement postulates need to describe measurements on improper mixed states. They do not, we have argued, also need to describe measurements on proper mixed states. 
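The reduced density matrix and Schmidt decomposition used in this discussion can be made concrete with a few lines of linear algebra. The following is a hedged sketch; the bipartition into system 1 and "the rest", the example state, and the function names are ours.

import numpy as np

def reduced_density_matrix(psi, d1, d_rest):
    """rho_1 = Tr_{rest} |psi><psi| for psi on C^{d1} (tensor) C^{d_rest}."""
    m = psi.reshape(d1, d_rest)                  # coefficient matrix c_{i,j}
    return m @ m.conj().T                        # trace over the second factor

def schmidt_decomposition(psi, d1, d_rest):
    """Schmidt coefficients sqrt(p_i) and local Schmidt vectors via the SVD."""
    m = psi.reshape(d1, d_rest)
    u, s, vh = np.linalg.svd(m, full_matrices=False)
    return s, u.T, vh                            # s[i] = sqrt(p_i); rows are phi_i, chi_i

# Example: an entangled two-qubit state, with system 1 the first qubit.
psi = np.array([1, 0, 0, 1], complex) / np.sqrt(2)
rho_1 = reduced_density_matrix(psi, 2, 2)
s, phis, chis = schmidt_decomposition(psi, 2, 2)
# rho_1 equals sum_i p_i |phi_i><phi_i|: an improper mixed state of system 1.
assert np.allclose(rho_1, sum((si ** 2) * np.outer(phi, phi.conj())
                              for si, phi in zip(s, phis)))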
However, MGM's postulates apply to proper mixed states, which makes them ill-motivated. They do not apply to improper mixed states, which makes MGM's postulates under-defined and, as we will see, also allows a simple refutation of MGM's claimed derivation of the quantum measurement postulates. §.§ MGM's notion of outcome probability function §.§.§ OPFs and Contextuality MGM state (p.3 of Ref. <cit.>) that their “theory-independent characterization of measurements … is based on the concept of outcome probability function (OPF)”. According to their definition, each measurement outcome that can be observed on system ℂ^d is represented by the function f: P ℂ^d → [0,1] that is defined as the probability f ( ψ ) = P( f | ψ ) for each pure state ψ∈ P ℂ^d. The complete set of OPFs of system ℂ^d is denoted by ℱ_d. A full measurement with n outcomes is specified by the OPFs f_1 , … , f_n corresponding to each outcome, which must satisfy ∑_i=1^n f_i ( ψ ) = 1 for all pure states ψ . MGM stress (p. 4) that this definition is not intended to imply that, if outcomes are associated to elements of bases or more generally to positive operators, the outcome probabilities are necessarily independent of the chosen basis or decomposition of the identity. That is, they do not assume non-contextuality. A measurement outcome thus needs to be understood as defined relative to a specified full measurement, and its probability function generally depends on that full measurement. So, for example, the basis selection devices we describe below can be included within the OPF framework, although the probability that the outcome is a given state depends on which basis including the state is chosen. §.§.§ Properties of OPFs MGM define a complete set ℱ_d of OPFs of system ℂ^d to be one that is closed under taking mixtures, composition with unitaries, and systems composition. These are defined as follows: Property 1 (ℱ_d is closed under taking mixtures): Suppose that the random variable x with probability p_x determines which 2-outcome measurement {f_1^x, f_2^x }∈ F_d we implement, and later on we forget the value of x. Then the probability of outcome 1 for this “averaged” measurement is ∑_x p_x f_1^x ∈ℱ_d , which must be a valid OPF. Therefore, mixtures of OPFs are OPFs. Property 2 (ℱ_d is closed under composition with unitaries): We can always perform a transformation U ∈ U(d) before a measurement f ∈ℱ_d, effectively implementing the measurement f ∘ U ∈ℱ_d , which then must be a valid OPF. Property 3 ((ℱ_d is closed under systems composition): Since ℱ_d is complete, it also includes the measurements that appear in the description of ℂ_d as part of the larger system ℂ_d ⊗ℂ_b ≃ℂ_db, for any background system ℂ_b. Formally, for each background state ϕ∈ℂ_b and global OPF g ∈ℱ_db there is a local OPF f_ϕ,g∈ℱ_d which represents the same measurement outcome: f_ϕ,g(ψ) = g(ψ⊗ϕ) for all ψ∈ P ℂ_d. MGM motivate Eqn. (<ref>), and hence Property 1, only for 2-outcome measurements. They do not postulate that if f is a measurement outcome then there is necessarily an outcome I-f, represented by the function (1-f) that gives the probability (1-f)(ψ) = 1 - P(f | ψ ) for each ψ∈ P ℂ^d, though this would seem natural. Nor do they postulate that for each n-outcome measurement M= { f_1 , … , f_n }, with n>2, and each nontrivial partition S_1 ∪ S_2 = { 1 , … , n } of the outcomes, there is a 2-outcome measurement M' = { g_1 , g_2 }, where g_i = ∑_j ∈ S_i f_j. 
This would also seem natural, since M' can be implemented by applying M and then forgetting all information about the outcome except the S_i to which it belongs. Without some such assumptions, the motivation given by MGM for Property 1 implies no constraint on n-outcome measurements for n>2. However, we will take Property 1 to require that any mixture (i.e. convex combination) of OPFs is an OPF, as this is stated in the text, albeit not fully motivated there. §.§ MGM's “possibility of state estimation” assumption MGM's arguments rely crucially on a further assumption, which they describe as follows: Assumption (possibility of state estimation). Each finite-dimensional system ℂ^d has a finite list of outcomes f^1 , … , f^k ∈𝔽_d such that knowing their value on any ensemble ( ψ_r , p_r ) allows us to determine the value of any other OPF g ∈𝔽_d on the ensemble ( ψ_r , p_r ). § SETTING FOR OUR POST-QUANTUM MEASUREMENT POSTULATES We define here some specific examples of possible types of measurement that are not allowed within standard quantum mechanics. For the purposes of our discussion, we consider these as defining a “post-quantum” mechanics, in which one or more type of non-quantum measurement is allowed as well as all standard quantum measurements. Quantum measurements follow the measurement and post-measurement state-update rules given above; non-quantum measurements need not necessarily follow either rule. It is also interesting to consider the scope for more general theories in which the structural quantum postulates hold but the quantum measurement postulates do not. Howver, we do not consider such theories here, focusing on how specific alternative measurement postulates challenge MGM's assumptions and arguments. Following MGM, we consider quantum mechanics for a set of n physical systems, where 1 ≤ n ≤∞: here and below we use ∞ to denote countable infinity. These systems are labelled by i, where 1 ≤ i ≤ n, and are represented by rays in complex vector spaces of dimension d_i, where 2 ≤ d_i ≤∞. The composite systems postulate tells us that the states are rays in ℂ^d_1⊗…⊗ℂ^d_n. (We abuse this notation to mean ℂ^d_1⊗ℂ^d_2⊗… in the case n = ∞.) MGM do not discuss the relation of the systems considered to the rest of the universe, but it will be important for our discussion. We take the product space to be large enough that there is no entanglement with any other systems, so that the state of the n systems is indeed a ray in ℂ^d_1⊗…⊗ℂ^d_n. Depending on the context and the version of quantum theory considered, this may mean the state describes the entire universe at a given time [We emphasize again that our discussion follows MGM in considering non-relativistic quantum mechanics, so this description has to be understood within a non-relativistic model. In particular, the state does not involve gravitational degrees of freedom.], with ℂ^d_n (or some ℂ^d_i with i>1 in the case n=∞) describing an “environment” of effectively inaccessible degrees of freedom. Alternatively, ℂ^d_1⊗…⊗ℂ^d_n may be a subsystem of the universe known not to be entangled with the rest – for example a collection of systems prepared in a pure state, processed and distributed among laboratories and kept isolated from the environment. We consider measurements on system 1 unless otherwise specified. §.§ Subjective and objective factorisations The alternative measurement postulates we consider below do not require the factorisation ℂ^d_1⊗…⊗ℂ^d_n to be objective. 
Because there is no finite speed signalling bound in non-relativistic quantum mechanics, in principle measurements can be carried out arbitrarily swiftly on any degrees of freedom, whether or not they are localized. In the non-relativistic context it thus makes sense to consider postulates (quantum or alternative) for measurements that can similarly be applied to any factor of any factorisation, whether or not it describes localized degrees of freedom. There will generally be many isomorphisms of the form ℂ^d_1⊗…⊗ℂ^d_n≃ℂ^d'_1⊗…⊗ℂ^d'_p , where ∏_i=1^n d_i = ∏_i=1^p d'_i and { d_i }≠{ d'_i }. However, it is worth noting that there are nonetheless naturally preferred factorisations in non-relativistic quantum mechanics, since laws defined in terms of one or more natural factorisations are natural candidates for possible extensions of or alternatives to quantum mechanics. Examples of natural factorisations include ℋ_ total = ⊗_i=1^k ℋ_ i , where the index i enumerates different particle types, or particles of different masses m_i >0, and ℋ_ total = ℋ_ boson⊗ℋ_ fermion . It is interesting to consider the possibility that post-quantum measurement postulates we discuss below could apply (only) to these or other specific factorisations or sets of factorisations. Although we will not pursue this further here, it also offers another natural way of defining alternative measurement postulates, in which measurements can only be applied to specific factors (for example bosonic or fermionic degrees of freedom) and the types of allowed measurement depend on the factor to which the measurements apply. §.§.§ Local Factorisations Another example is the factorisation defined by degrees of freedom associated with different local regions. In the idealized limiting case in which systems are effectively pointlike, this gives us ℋ_ total = ⊗_x ∈ℝ^3ℋ_ x . Coarse-graining gives ℋ_ total = ⊗_iℋ_V_i , where the regions V_i define a partition of ℝ^3 into local regions, which could for example be defined by a regular lattice. Alternative measurement postulates based on these factorisations are particularly natural options when we consider extensions of non-relativistic quantum mechanics that could fit well with special and general relativity. In this context, to respect the causal structure, it is natural to take ℋ_ total and its decompositions to define the state on (or more precisely, asymptotically close to) the past light cone.<cit.> We do not discuss this further here, given MGM's focus on non-relativistic quantum mechanics, but note that it is one of the main motivations for considering the possibility of alternative postulates of the type we discuss in extensions of relativistic quantum theory <cit.>. § POST-QUANTUM MEASUREMENT POSTULATES All the measurement postulates considered in this section are defined via hypothetical devices that give information about a pure quantum state ψ∈ℂ^d_1⊗…⊗ℂ^d_n of a set of n systems. To simplify the discussion we assume here the quantum state ψ is not altered by any of our postulated post-quantum measurements. That is, we take the post-measurement update rule to be trivial, although non-trivial alternatives (for example those that define consistent non-linear versions of quantum theory <cit.>) are also interesting. The postulates apply whether ψ is an entangled state or a product state with respect to the given factorisation. 
If ψ is randomly drawn from an ensemble ( ψ_i , p_i ), the postulates apply to the state ψ_i actually chosen, whether or not this choice is known to the device user. This last rule is consistent with the quantum measurement postulate and seems the most natural option for an alternative measurement postulate, since we do not expect measurement probabilities or outcomes to depend on a user's knowledge. It is consistent with MGM's treatment, as in Eqn. (4) of Ref. <cit.>. We discuss the role of proper mixed states in MGM's argument further below. §.§ State readout devices An infinite precision state readout device RD applied to ψ∈ℂ^d_1⊗…⊗ℂ^d_n outputs an infinite precision classical description of ρ_1 = Tr_d_2 … d_n ( |ψ⟩⟨ψ| ) ∈ O ( ℂ^d_1 ) . Here O ( ℂ^d_1 ) denotes the linear operators on ℂ^d_1. The operator ρ_1 defines the reduced density matrix of ψ for system 1 and is normalised, hermitian and positive semi-definite. The description is given as coordinates in some chosen basis, which we assume is either input into the device prior to the readout or defined by the construction of the device. This description is output in some idealized classical form, for example as an infinite printout or through a set of pointer readings. A finite set of pointer readings could suffice, given idealized pointers whose positions are in principle readable with infinite precision. We discuss these idealizations further below. A finite precision state readout device FPRD takes as input a positive integer m. Applied to ψ∈ℂ^d_1⊗…⊗ℂ^d_n it outputs a classical description of the reduced density matrix (<ref>) with respect to the given basis, to m digit binary precision, in the sense that the coefficients of basis elements are given to the nearest multiple of 2^-m. §.§ State function readout devices Let f: O ( ℂ^d_1 ) → O ( ℂ^d_1 ) be a function mapping density matrices to density matrices. An infinite precision state function readout device FRD applied to ψ∈ℂ^d_1⊗…⊗ℂ^d_n outputs an infinite precision classical description of f ( ρ_1 ) . The description is given as above. A finite precision state function readout device FFRD takes as input some positive integer m and outputs the function value, in a prescribed basis, to m digit binary precision. §.§ Expectation value readout devices An infinite precision expectation value readout device ERD takes as input some hermitian observable A defined on system 1, and outputs, to infinite precision, the expectation value Tr( A ρ_1 ). A finite precision expectation value readout device FERD takes as input some positive integer m and outputs Tr( A ρ_1 ) to m digit binary precision. §.§ Stochastic eigenvalue readout devices A stochastic eigenvalue readout device SEVRD takes as input a hermitian observable A, with finitely or countably many eigenvalues, defined on system 1. It produces as output data that identifies an eigenvalue λ_i of A, randomly chosen using the Born probabilities Tr(P_i ρ_1), where P_i are the projections onto the corresponding eigenspaces. The output may identify the relevant eigenvalue, without necessarily directly representing its value. For example, if the eigenvalues are labelled so that λ_1 < λ_2 < …, the possible outputs may be positive integers, with the eigenvalue λ_i identified by output i. Note that these input and output rules are the same as those for a quantum measurement of A. However, an SEVRD does not alter the input quantum state. (Recall that we are postulating a trivial post-measurement state-update.)
It can thus be applied repeatedly to estimate the quantum probability distribution for outcomes of measurements of A. A finite precision stochastic eigenvalue readout device FSEVRD takes as additional input some positive integer m and outputs the relevant eigenvalue to m digit binary precision. The Born probabilities defining the random choice are still calculated to infinite precision. An integer labelled stochastic eigenvalue readout device ISEVRD outputs the integer labelling the relevant eigenvalue, given some ordering …λ_-1 < λ_0 < λ_1 < …, or λ_0 < λ_1 < …, or …λ_-1 < λ_0 of the eigenvalues (if there are infinitely many descending and ascending, not infinitely many descending and not infinitely many ascending, respectively). A finite integer labelled stochastic eigenvalue readout device FISEVRD takes as additional input some positive integer m. It uses some given ordering …λ_-1 < λ_0 < λ_1 < …, which may terminate in either direction or both. For |i| ≤ m it outputs the label i with probability Tr(P_i ρ_1), where P_i is the projection onto the corresponding eigenspace. With probability ( 1 - ∑_i=-m^m Tr(P_i ρ_1) ) it outputs an error code, for example, “overflow”. §.§ Stochastic state projection eigenvalue readout devices A stochastic state projection eigenvalue readout device SPRD is a special case of a stochastic eigenvalue readout device that takes as input a classical description of P_ϕ, a projection onto a specified pure state ϕ∈ℂ^d_1. It outputs 1 with probability Tr(P_ϕρ_1) and 0 with probability Tr((1 - P_ϕ ) ρ_1). §.§ Stochastic uncertainty readout devices A stochastic uncertainty readout device SURD takes as input some hermitian observable A, with finitely or countably many eigenvalues, defined on system 1. It produces as output data that identifies an eigenvalue λ_i of A - ⟨ A ⟩, randomly chosen using the Born probabilities Tr(P_i ρ_1), where P_i are the projections onto the corresponding eigenspaces. The output is the numerical value of the relevant eigenvalue, to infinite precision. An SURD could be constructed by applying an ERD and SEVRD in either order and combining their outputs, given that neither device alters the quantum state. We consider it, though, as a standalone device. A finite precision stochastic uncertainty readout device FSURD takes as additional input a positive integer m and outputs the relevant eigenvalue to binary precision m. The Born probabilities defining the random choice are still calculated to infinite precision. §.§ Stochastic positive operator devices A stochastic positive operator readout device SPOD takes as input a complete finite or countable (n = ∞) set of positive operators { A_i }_i=1^n, with ∑_i A_i = I. It produces as output data that identifies one of the A_i, randomly chosen using the Born probabilities Tr(A_i ρ_1). Note that these input and output rules are the same as those for a quantum measurement of the POVM { A_i }. However, an SPOD does not alter the input quantum state. (Recall again that we are postulating a trivial post-measurement state-update.) It can thus be applied repeatedly to estimate the quantum probability distribution for outcomes of measurements of { A_i }. A finite integer labelled stochastic positive operator readout device FSPOD takes as additional input some positive integer m. For i ≤ m it outputs the label i with probability Tr(A_i ρ_1). With probability ( 1 - ∑_i=1^m Tr(A_i ρ_1) ) it outputs an error code, such as “overflow”.
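To make the repeated-use estimation concrete, the following minimal numerical sketch (ours, purely illustrative, and not part of MGM's framework or of the postulates themselves) simulates repeated SPOD readouts of a two-outcome POVM on a fixed qubit reduced density matrix; the particular matrices are arbitrary illustrative choices. Since each readout leaves ρ_1 unchanged, the outcome frequencies converge to the Born values Tr(A_i ρ_1):

import numpy as np

# Illustrative simulation of repeated SPOD readouts on a fixed reduced
# density matrix rho_1; the POVM and state below are arbitrary choices.
rng = np.random.default_rng(0)

# Reduced density matrix of system 1 (a partially mixed qubit state).
rho_1 = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)

# A two-outcome POVM {A_1, A_2} with A_1 + A_2 = I.
A_1 = np.array([[0.8, 0.1], [0.1, 0.4]], dtype=complex)
A_2 = np.eye(2) - A_1

# Born probabilities Tr(A_i rho_1).
born = np.real([np.trace(A_1 @ rho_1), np.trace(A_2 @ rho_1)])

# Repeated SPOD readouts: each use returns an outcome label without
# disturbing rho_1, so the empirical frequencies converge to the Born values.
N = 100_000
outcomes = rng.choice(2, size=N, p=born)
freqs = np.bincount(outcomes, minlength=2) / N

print("Born probabilities :", born)
print("Estimated from SPOD:", freqs)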
§.§ State overlap devices A state overlap device SOD takes a classical description of a specified pure state ϕ∈ℂ^d_1 and real parameter a ∈ (0,1) as inputs. It outputs 1 if Tr(P_ϕρ_1) > a and 0 otherwise. §.§ Smoothed state overlap devices Various smoothed versions of state overlap devices can be defined. As an illustrative example, we define a smoothed state overlap device SSOD to take a classical description of a specified pure state ϕ∈ℂ^d_1, a real parameter a ∈ (0,1) and a real parameter k>0 as inputs. It outputs 1 with probability 1 / (1 + exp ( -k( Tr(P_ϕρ_1) -a))) and 0 with probability exp ( -k( Tr(P_ϕρ_1) -a)) / (1 + exp ( -k( Tr(P_ϕρ_1) -a))). In the limit k →∞, this reproduces the behaviour of an SOD. §.§ Basis selection devices A basis selection device BSD takes a classical description of an orthonormal basis {ϕ_i }_i=1^d_1 of ℂ^d_1 as input. It outputs the value of i that maximizes Tr(P_ϕ_iρ_1), choosing randomly among maxima if there is more than one. §.§ Smoothed basis selection devices Various smoothed versions of a basis selection device can be defined. As an illustrative example, we define a smoothed basis selection device SBSD to take a classical description of a specified orthonormal basis {ϕ_i }_i=1^d_1 of ℂ^d_1 and a real parameter k>0 as input. It outputs a value of i chosen randomly from a distribution with probabilities N exp ( k Tr(P_ϕ_iρ_1) ), where N is a normalisation constant. In the limit k →∞, this reproduces the behaviour of a BSD. §.§ Entropy meters An infinite precision von Neumann entropy meter VNEM outputs the von Neumann entanglement entropy S( ρ_1) = - Tr( ρ_1 logρ_1 ). An infinite precision Renyi entropy meter REM(α), for real α≥ 0, with α≠ 1, outputs the Renyi entanglement entropy S_α (ρ_1) = 1/(1- α) log Tr( ( ρ_1 )^α ) . Writing the von Neumann entropy S (ρ_1 ) as S_1 ( ρ_1 ), these can be combined to define an infinite precision universal entropy meter UEM, which takes as input a real α≥ 0 and outputs S_α (ρ_1 ). Finite precision versions of these meters take as input a positive integer m and output the relevant quantities to m digit binary precision. §.§ Entropy certifiers A universal entropy certifier UEC takes as input a real α≥ 0 and a real E > 0. If S_α (ρ_1 ) > E it outputs 1, otherwise 0. For d_1 < ∞, we have S_α (ρ_1 ) ≤log (d_1 ), with equality if and only if ρ_1 is the uniformly mixed density matrix (1/d_1) I_d_1<cit.>. We thus take the allowed input range to be 0 < E < log ( d_1 ) when d_1 is finite. Various smoothed versions of entropy certifiers can be defined. As an illustrative example, we define a smoothed UEC as one that takes as input real parameters α≥ 0, E> 0 and k>0. It outputs 1 with probability 1 / (1 + exp ( -k( S_α ( ρ_1 ) - E))) and 0 with probability exp ( -k( S_α ( ρ_1 ) - E)) / (1 + exp ( -k( S_α ( ρ_1 ) - E))). In the limit k →∞, this reproduces the behaviour of a UEC. §.§ Entanglement analysers An entanglement analyser EA takes as input an orthonormal basis {ψ_i : 1 ≤ i ≤ d_1 } of ℂ^d_1. It produces as output a matrix ( M_ij )_i=1,j=1^d_1 , d_1, whose elements are defined by M_ij = ⟨ϕ_i | ϕ_j ⟩ , where ⟨ψ_i | ψ⟩ = | ϕ_i ⟩, the left hand side being a partial inner product, and | ϕ_i ⟩∈ℂ^d_2⊗…⊗ℂ^d_n. A finite precision entanglement analyser FPEA takes as additional input a positive integer m, and outputs the matrix to m digit binary precision, in the sense that the real and imaginary parts of the matrix elements are given to the nearest multiple of 2^-m.
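For concreteness, here is a simple worked illustration of several of the devices just defined (the example is ours and is not taken from MGM). Take n=2, d_1 = d_2 = 2 and ψ = √(p) |00⟩ + √(1-p) |11⟩ with 0 < p < 1, so that ρ_1 = Tr_2 ( |ψ⟩⟨ψ| ) = p |0⟩⟨0| + (1-p) |1⟩⟨1|. An entanglement analyser with input basis { |0⟩ , |1⟩ } outputs the matrix M = diag ( p , 1-p ), since |ϕ_1⟩ = √(p) |0⟩ and |ϕ_2⟩ = √(1-p) |1⟩. A VNEM outputs S( ρ_1 ) = -p log p - (1-p) log (1-p), a REM(2) outputs S_2 ( ρ_1 ) = - log ( p^2 + (1-p)^2 ), and a UEC with inputs α and E outputs 1 precisely when S_α ( ρ_1 ) > E. For p = 1/2 the VNEM output takes its maximal value log 2, while for a product state (p ∈{ 0,1 }) every entropy meter outputs 0.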
§ COMMENTS ON INFINITE PRECISION INPUTS, OUTPUTS AND PROBABILITIES §.§ Inputs Most of the above devices are defined to have classical inputs that are specified with infinite precision. The readout devices require a specification of a reference basis. Expectation and eigenvalue readout devices require a hermitian operator A to be input; state projection and overlap devices require a classical description of a state ϕ; in both cases, these need to be specified with respect to a reference basis. Universal entropy meters and certifiers require a real number α as input. Smoothed devices also require a smoothing parameter, a or E. Infinite precision inputs cannot be typed on a keyboard in finite time. An analogue classical input, like a freely rotatable dial, can only be as precise as the approximation in which its classical description holds, i.e., only finitely precise. So infinite precision inputs are not physical. However, the same point applies to the standard measurement postulate of quantum theory. As given above, it requires each measurement outcome to be represented by a linear operator. If we think of a measurement as a device with inputs and outputs, the inputs are a set of linear operators { Q_i }. Specifying the Q_i in general requires infinite precision numbers with respect to a given reference basis. The input prescriptions for our post-quantum measurement postulates thus should be understood as idealizations similar to those of the standard quantum measurement postulate. A real world quantum measurement deviates from the ideal in various ways, including that it is a process with some non-zero duration rather than an instantaneous act. For quantum measurements, the relevant operators, outcomes and probabilities are determined by the physics of the measurement devices. There may be a sense in which they are defined to infinite precision in nature, but we cannot determine them to infinite precision. If any of our hypothetical post-quantum measurements were realised in nature, via presently unknown physics, we should presumably expect the real world version to similarly deviate from the ideal version. However, as in the quantum case, allowing infinite precision inputs in the measurement postulates is a mathematically convenient idealisation that does not, per se, necessarily represent new or problematic physics. §.§ Outputs Infinite precision state readout devices, expectation value devices and entropy meters are defined to give infinite precision outputs. Clearly, these too are unphysical if taken literally. Printing out real numbers to infinite precision would take infinite time and storage. An analogue classical output can only be finitely precise. Again, analogous issues arise in the idealized version of standard non-relativistic quantum mechanics found in many textbooks. For example, one finds the statement that measuring the position x̂ of a single particle in state ψ produces the outcome x with probability density | ψ ( x ) |^2. The postulated outcomes here are vectors in ℝ^3, specified to infinite precision. To be clear, the example of infinite precision position measurements goes beyond the postulates framed by MGM, which restrict to Hilbert spaces of countable or finite dimension and to measurements with finitely or (perhaps) countably many outcomes.[MGM describe a measurement as represented by operators corresponding to outcomes Q_1 , … , Q_n that satisfy the normalisation condition ∑_i=1^n Q_i = I. 
They do not explicitly say that n = ∞, defining a countable sum, is allowed. However, the measurements may be on spaces ℂ^d, and d = ∞, representing a countably infinite-dimensional space, is explicitly allowed by MGM, so one might assume so.] Infinite precision position measurements also imply an unnormalisable post-measurement state with infinite momentum uncertainty and infinite energy. Nonetheless, defining quantum measurements with infinite precision outputs is a useful idealization. Real world position measurements have finite precision, of course, and are treated via POVMs that involve a stochastic smoothed version of the infinite precision postulate above. The same would presumably be true of our hypothetical post-quantum measurements, if realised in nature. So, we regard the finite precision versions as roughly analogous to an approximate position measurement defined (for example) by Gaussian POVMs, in the sense that they are more realistic and less physically problematic, although still idealized. §.§ Probabilities The probabilistic versions of our post-quantum measurement postulates involve probabilities defined as mathematical expressions, which are typically real numbers defined to infinite precision. This is also true of the probabilities defined by MGM's version (and other versions) of the standard quantum measurement postulate. There is no consensus on whether there is a fundamentally satisfactory interpretation of probabilities. Whatever view one takes on this, one might also ask whether additional problems arise from the fact that the probabilities in (post-)quantum theory are defined to infinite precision. For example, on a frequentist view, probabilities are expressed as the frequencies in an infinite set of trials. If a given probability has any physical representation in the universe, this must thus involve an infinite number of events, which would occur in infinitely many separate space-time regions, spread over infinite space, time or both. This might not be possible in our universe (or even a hypothetical multiverse). Unlike the hypothesis of an infinite precision output from an effectively instantaneous localised measurement, though, it does allow the possibility that infinite resources can be available to express an infinite precision quantity. In any case, we see no difference between the issues raised by infinite precision probabilities in standard quantum measurement postulates and in our hypothetical post-quantum measurement postulates. § PROPERTIES OF OPFS AND OUR ALTERNATIVE POSTULATES Recall that our alternative measurement postulates are defined for a set of n physical systems that have no entanglement with any other systems, so that their state is a ray in ℂ^d_1⊗…⊗ℂ^d_n. MGM's definition of a complete set of OPFs as one satisfying Properties (1-3) above relies on their tacit assumption that OPFs and measurements are defined by their action on pure states. In verifying that our alternative postulates satisfy Properties (1-3), we thus need only consider measurements on a pure state ψ∈ℂ^d of a single system, for Properties 1 and 2, and measurements on a product of pure states, ψ⊗ϕ∈ℂ^d ⊗ℂ^b, for Property 3. Applying Properties (1-3) to our alternative measurement postulates raises some technical issues. For example, consider the infinite precision state readout device. This satisfies Property 2: if we apply a unitary U before using the device, we get a readout of U ψ. To test Property 3, we need to apply it to ψ⊗ϕ∈ℂ^d ⊗ℂ^b.
This gives a readout of ψ⊗ϕ in some chosen basis. If ϕ is known to infinite precision, then of course in principle this defines ψ to infinite precision. However, we also need to suppose that we are able to carry out infinite precision calculations (ideally, in finite time) to obtain ψ. To test Property 1, we need to consider random mixtures. For example, if we apply unitary U_i with probability p_i, then forget which U_i was applied, the readout produces U_i ψ with probability p_i as required. To test Property 3 non-trivially on a random mixture of readout device and quantum measurements, we require a way of equating at least some of the outcomes. For example, consider a mixture of one quantum measurement and one readout device measurement. Take the outcomes of the quantum measurement to be positive integers. We can then suppose some post-processing device applied to the random measurement output that maps the classical descriptions of a countable subset {ψ_i }_i=1^∞ of the pure states to positive integers, taking ψ_i to i. (Again, ideally, this should take finite time.) Forgetting whether we carried out the quantum measurement or the readout device plus post-processing then gives mixtures of the positive-integer outcomes. Finite precision readout devices avoid the idealisation that infinite precision operations can be carried out in finite time. However, a given version of finite precision generally only approximately commutes with the operations involved in Properties (1-3). For example, if ψ_f is the state described by a finite precision readout of the quantum state ψ, and U is a unitary, then U ( ψ_f ) ≃ ( U ψ )_f but in general equality is not precise. Another issue arises when we consider the relationship of our devices to Property 3. Consider, for example, stochastic eigenvalue readout devices. Let A be a non-degenerate hermitian observable on ℂ^d ⊗ℂ^b. Suppose an SEVRD with input A is applied to ψ⊗ϕ∈ℂ^d ⊗ℂ^b. This produces one of the db eigenvalues λ_i of A as output. We can, following the statement of Property 3, view this as a measurement on ℂ^d parametrized by A and ϕ. However, the measurement does not generally correspond to any SEVRD producing eigenvalues of some observable A_ϕ on ℂ^d. The projective measurement on the joint system defines a positive operator valued measurement on the first subsystem. Hence the action of an SEVRD on the joint system corresponds to the action of a stochastic positive operator device on ℂ^d. (An explicit form of the induced positive operators is given at the end of this section.) The moral of this example is that, to be consistent with MGM's postulates, we need to postulate measurements defined by SPODs rather than restricting to SEVRDs. This issue applies to many of our devices: ERDs, SPRDs, SURDs, (S)SODs, (S)BSDs, VNEMs, REMs, UECs, EAs. While these have well defined actions on composite systems, they do not generally reduce to the action of simple devices on an individual subsystem. The simplest and most natural resolution of these various issues is to extend our alternative measurement postulates to include the closure (as defined by Properties (1-3)) of the measurements defined by the relevant devices. Thus, we extend the category of FPRDs, EAs, and so on, to include devices defined by any finite sequence of mixtures (including mixtures with quantum measurements), composition with unitaries, and system composition. The relevant probability distributions and outcome labellings, unitaries, and specified extended systems with specified background pure states are specified as additional inputs, in the order in which they are applied.
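To spell out the reduction noted above (this short derivation is ours, using only the definitions given earlier), suppose the SEVRD input A on ℂ^d ⊗ℂ^b has spectral decomposition A = ∑_i λ_i P_i. For the product input ψ⊗ϕ the outcome probabilities are ⟨ψ⊗ϕ | P_i | ψ⊗ϕ⟩ = ⟨ψ | A^ϕ_i | ψ⟩ , where A^ϕ_i = ( I_d ⊗⟨ϕ | ) P_i ( I_d ⊗ | ϕ⟩ ) . Each A^ϕ_i is a positive operator on ℂ^d and ∑_i A^ϕ_i = I_d, but the A^ϕ_i are generally not projections. The measurement induced on ℂ^d is thus that of an SPOD with input { A^ϕ_i }, and only in special cases that of an SEVRD.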
§ RELATION OF OUR POST-QUANTUM MEASUREMENT POSTULATES TO STANDARD QUANTUM MECHANICS §.§ Logical consistency and compatibility with relativity All of our post-quantum measurement postulates define ways of obtaining information about unknown quantum states, without disturbing them, that are not allowed by standard quantum theory. Recall our convention that our post-quantum measurements are carried out on system 1. If ψ = ψ_1 ⊗…⊗ψ_n ∈ℂ^d_1⊗…⊗ℂ^d_n is a product state, an infinite precision state readout device gives a classical description of ψ_1, without altering ψ. This allows an observer of system 1, who does not know the state prior to using the readout device, to construct a second copy of ψ_1, violating the quantum no-cloning theorem <cit.>. A classical description of ψ_1 to arbitrary precision can also be obtained by repeated uses of an infinite precision expectation device, or a stochastic eigenvalue readout device, or a state overlap device, or a smoothed state overlap device, with suitable choices of operators and states as inputs. A classical description of ψ_1 to high precision can also be obtained by using an FFRD with suitably high precision, or by repeated uses of suitably high precision versions of the other devices just listed. Devices with these powers violate extensions <cit.> of the no-cloning theorem that bound the sum of the fidelities attainable by any quantum operations that produce two imperfect copies of an unknown input state. Recall again that if ψ is an entangled state of the first system with the rest, it has a Schmidt decomposition of the form ψ = ∑_i=1^k ( p_i )^1/2ϕ_i ⊗χ_i , where k>1, the p_i >0, ∑_i=1^k p_i =1, the states ϕ_i ∈ℂ^d_1 are orthonormal, and the states χ_i ∈ℂ^d_2⊗…⊗ℂ^d_n are orthonormal. A standard quantum projective measurement onto a basis of ℂ^d_2⊗…⊗ℂ^d_n that includes the χ_i then results in the post-measurement state ψ'_i = ϕ_i ⊗χ_i with probability p_i. Before the measurement, the output of an infinite precision Renyi entropy meter is S_α (ρ_1 ) = 1/(1- α) log Tr( ρ_1^α ) = 1/(1 - α) log ( ∑_i=1^k (p_i )^α )> 0 , where α≠ 1. The pre-measurement output of an infinite precision von Neumann entropy meter is S (ρ_1 ) = - Tr(ρ_1 log (ρ_1 )) = - ∑_i=1^k p_i log p_i > 0 . After the measurement, the meters output S_α (ρ'_1 ) = 0 and S( ρ'_1 ) = 0 . An observer of system 1 with an entropy meter can thus tell whether or not the measurement has taken place on the remaining systems, without any direct communication from those systems. This violates the quantum no-signaling principle, according to which the probability P( a_1 | ψ ; M_1 , M ) = P( a_1 | ψ ; M_1 ) , where the left hand side is the probability of outcome a_1 from a measurement M_1 on subsystem 1 of a composite system in state ψ, conditioned on measurement M being carried out on the other subsystems and the right hand side is the probability unconditioned on M. RDs, FRDs, ERDs, SEVRDs, SPRDs, SURDs, SPODs, (S)SODs, (S)BSDs, UECs and EAs also violate the quantum no-signaling principle. Some folk intuitions suggest that violating no-cloning or quantum no-signaling necessarily creates a logical inconsistency. This would imply that the relevant postulates cannot consistently be added to standard quantum theory. This is not correct, as we now explain. One intuition is that violating no-cloning introduces nonlinearities into quantum theory that are incompatible with the tensor product structure given by the composite systems postulate, leading to inconsistencies.
A state readout machine does indeed allow an observer to evolve an initially unknown state using unitaries that depend on that state, so that the final state does not depend linearly on the initial state. However, any evolution that can be implemented by observers equipped with readout machines could also, in principle, be implemented by observers who know the initial state of a set of systems and who learn the outcomes of all measurements on each system <cit.>. Hence, there is no inconsistency in this form of nonlinearity <cit.>. A related intuition is that allowing cloning or some form of signaling-via-measurement necessarily breaks the delicate relationship between quantum mechanics and special relativity. Even if this were true, it would not affect the logic of our critique, since MGM consider quantum and alternative measurement postulates for non-relativistic quantum mechanics. But it is not true. It is true that assuming both that the effects of measurements propagate instantaneously in some reference frame, and also assuming that instantaneous cloning is possible, gives a form of signaling-via-measurement that is indeed incompatible with special relativity <cit.>. As noted earlier, though, the hypothetical devices we describe can be kept consistent with special relativity and with the quantum measurement postulates, if we define the devices to act on localized subsystems and define the reduced density matrix of a localized subsystem as the trace of the full quantum state defined on (asymptotically close to) the past light cone <cit.>. So defined, the devices are sensitive to the effects of quantum measurements within the past light cone, but not outside, so that the causal structure of special relativity is respected. In summary, while our various post-quantum measurement postulates certainly have novel features, they can consistently be combined with quantum measurement postulates to form a post-quantum theory. §.§ Signalling without carriers? Our post-quantum measurement postulates use information about systems 2 to n to define the outcomes and/or outcome probabilities of measurements on system 1. They thus imply that global information determines what happens in local measurements. This may seem unnatural. But this much is also true of the quantum measurement postulates. Suppose that A and B are separated, share an entangled state ψ, and have a common rest frame whose coordinates both use. If B carries out a measurement of his subsystem, quantum theory tells us his outcome probabilities depend on the reduced density matrix _A ( |ψ⟩⟨ψ| ), a quantity that depends on the global state ψ. In fact, these outcome probabilities are precisely the same as those for our stochastic eigenvalue and positive operator devices. Moreover, quantum theory tells us that the outcome probabilities of any subsequent (with respect to the shared time coordinate) measurement of A's subsystem depend on B's result, which is not the case for the stochastic eigenvalue and positive operator devices. In this sense quantum theory appears in greater tension (although still not actual conflict <cit.> ) with relativistic locality than a theory in which these devices define the only form of measurement. However, as noted above, when quantum measurements are combined with our devices, they allow signalling between subsystems. This is not possible in standard quantum theory. 
According to our alternative measurement postulates, the fact, or choice, of quantum measurements on one subsystem generally influences the outcomes and/or outcome probabilities of device measurements on other separated subsystems. This need not violate the relativistic no-signalling principle when the devices are considered in a relativistic context <cit.>. However, it violates the quantum no-signalling principle, and also conflicts with the intuition that transmitting information requires a physical carrier. That intuition itself deserves some scrutiny. For example, the ways in which information is transmitted in quantum field theory and general relativity have (at least) stretched our understanding of “physical information carrier”. Still, it remains a common intuition that there is “something” – a perturbation of space-time, or a dynamical quantum field – mediating interactions in these theories. One possible response to this is to go beyond MGM's postulates and add a no-signalling postulate to the basic principles of quantum mechanics. Another is to accept that the intuition that information requires a carrier may simply be wrong and that our alternative measurement postulates are reasonable as they stand. A third option is to recognize that quantum mechanics is only an effective non-relativistic theory, and even relativistic quantum field theory is incomplete, so versions of these theories with different or additional measurement postulates need not necessarily be taken as theories in their final form. If new measurement postulates do apply in nature, they may ultimately be understood as part of a fundamental theory with new mechanisms for carrying information. For example, if our postulates play a role in some nonlinear theory combining quantum theory and gravity, it might be that the gravitational degrees of freedom carry more information than we currently assume. If the aim is simply to understand the logical relationship between various postulates of quantum mechanics, the first option is interesting to pursue. If, though, the aim is to understand whether there remain plausible alternatives to quantum mechanics, the second option seems more reasonable. Given all the initially counter-intuitive features of quantum mechanics and of fundamental physics, it seems hard to mount a case for the no-signalling-without-carriers intuition as dogma. The third option reinforces this stance. § REFUTATIONS OF MGM'S PURPORTED DERIVATIONS §.§ Refutation of MGM's derivation of the measurement postulate Consider, for definiteness, a theory in which the only possible measurements are those defined by a finite precision von Neumann entropy meter, applied to the d-dimensional system 1, and the combinations of these measurements that define closure under Properties (1-3), as described above. Since 0 ≤ S(ρ_1 ) ≤log (d ), the number of possible outputs when precision m is input is bounded by ⌈ 2^m log (d ) ⌉ + 1. The FPVNEM thus defines a full measurement with at most ⌈ 2^m log (d ) ⌉ +1 output probability functions f_0 , f_2^-m , …, where f_a defines the probability of output a. For any pure state ψ∈ℂ^d, f_0 ( ψ ) = 1 and f_a (ψ ) = 0 for a ≠ 0. This also holds if we consider the FPVNEM acting on a pure state ψ⊗ϕ∈ℂ^d ⊗ℂ^b ≃ℂ^db, as required by Property 3. Properties (1-3) are all trivially satisfied. The “possibility of state estimation” is also trivially satisfied, since f_0 has value 1 on any ensemble (ψ_r , p_r ) of states ψ_r ∈ℂ^d.
On a single system the von Neumann entropy meter defines a trivial measurement: its OPFs can be represented in the form of MGM's Eqn. (15), with f_0 (ϕ ) = ⟨ϕ | I | ϕ⟩ and f_a (ϕ ) = ⟨ϕ | O | ϕ⟩ for a ≠ 0 , where I and O are the identity and zero operator. We can write this as f_a (ϕ ) = ⟨ϕ | F_a | ϕ⟩ , where F_a = I if a=0 and F_a = O if a ≠ 0. But on a system comprising two or more subsystems the FPVNEM defines non-trivial measurements, and its OPFs do not satisfy MGM's Eqn. (16): it is not true that ( f_a ⋆ g_b ) (ψ ) = ⟨ψ | F_a ⊗ G_b | ψ⟩ , for any entangled ψ∈ℂ^d ⊗ℂ^b. Our theory thus satisfies all MGM's assumptions, but its measurements are not described by their measurement postulate, which requires all measurements to satisfy their Eqns. (15) and (16). This refutes their purported derivation of the measurement postulate. §.§ Refutation of MGM's derivation of the post-measurement state-update postulate Even in a theory where all OPFs satisfy MGM's Eqns. (15) and (16), so that their measurement theorem characterizes measurements within the theory, the quantum post-measurement state-update rule need not hold. Consider a theory in which only measurements defined by stochastic positive operator readout devices and the combinations of these measurements that define closure under Properties (1-3), as described above, are possible. Stochastic positive operator readout device measurements satisfy MGM's Eqns. (15) and (16) and are characterized by their measurement theorem. A readout of operator A_i has OPF f_A_i which satisfies f_A_i ( ϕ ) = ⟨ϕ | A_i | ϕ⟩ and ( f_A_i⋆ g_B_j ) ( ψ ) = ⟨ψ | A_i ⊗ B_j | ψ⟩ for any ϕ∈ℂ^d and ψ∈ℂ^d ⊗ℂ^b. Stochastic positive operator readout device OPFs also satisfy the “possibility of state estimation” assumption, for the same reason that general quantum measurements do. The probability of outcome A_i given an SPOD measurement on an ensemble ( ψ_r , p_r ) is Tr(A_i ρ ), where the ensemble density matrix ρ is given by Eqn. (<ref>). The equivalence classes of ensembles in this theory thus correspond to density matrices. Knowing the value of a finite set of positive operator OPFs suffices to identify the density matrix and hence the value of all other positive operator OPFs, as in standard quantum theory. However, after an SPOD measurement of ϕ the post-measurement state remains ϕ. The state update after outcome A_i can be represented by the completely-positive map ⟨ϕ | A_i | ϕ⟩ I, but this map is ϕ-dependent. There is no ϕ-independent completely-positive map representing the state-update, for stochastic positive operator readout device measurements defined by general POVMs { A_i }, as required by MGM's Eqns. (2) and (3). This refutes MGM's purported derivation of the post-measurement state-update postulate. § FURTHER PROBLEMS WITH MGM'S ASSUMPTIONS AND ARGUMENTS One might perhaps wonder whether the refutations above rely on technicalities in MGM's assumptions that might reasonably be altered so as to exclude the above counterexamples and others. For example, the first counterexample highlights that MGM try to derive the quantum measurement postulates from assumptions that apply only to measurements on pure states and proper mixtures. Their assumptions could be extended to apply to measurements on improper mixtures. The second counterexample highlights that MGM do not assume that a sequence of measurements can be considered as a single measurement.
The joint probabilities for a sequence of SPOD measurements on an ensemble ( ψ_i , p_i ) depend in general on the individual states and probabilities, not only on the density matrix (<ref>). So (<ref>) does not represent an equivalence class of ensembles for sequential measurements. If these were considered to be single measurements, the argument above for the “possibility of state estimation” would no longer hold. However, there are several problems with this sort of "rescue strategy". First and foremost, from a mathematical perspective, entropy meters, SPODs, and our other hypothetical devices give well-defined and natural measurement rules. A classical physicist who knew nothing about quantum theory except the structural postulates defining the space of states, and was told that information could be obtained about the states without altering them, would not be surprised: essentially the same is true in classical physics.[ They might perhaps suspect that, as in classical physics, the statement describes idealized measurements, and that a real world measurement will always have a nonzero effect. But they might reasonably also assume that, as in classical physics, the effect can be made arbitrarily small, and so the idealization is justifiable.] They would also not be surprised to be told that ideal measurements can be described by pure state readout devices, since again classical physics suggests that states are ontic and directly observable. Several of our other devices, including SPODs, can be built from pure state readout devices with post-processing, so these too would not seem surprising. Nor should they necessarily be surprised to be told that entropy meters represent a possible type of measurement in quantum theory. Having understood the structure of the quantum state space, they would recognise that entanglement is a well-defined and quantifiable feature of quantum states, with some nice properties, including that entanglement measures are invariant under local unitaries. It does not seem a huge leap to imagine it might plausibly represent a measurable physical quantity. Second, it is not obvious that there is a sensible way to extend the postulates to exclude both our counterexamples. For instance, the obvious extension to address the first counterexample is to reframe the assumptions to apply to measurements on a subsystem (say subsystem 1) of pure states in the space ℂ^d_1⊗…⊗ℂ^d_n that represents a composite of n systems. But the finite precision von Neumann entropy meter still satisfies Properties (1-3) in this setting. It has a finite set of OPFs. So, even if we consider measurements on ensembles of the form ( ψ_r , p_r ), where ψ_r ∈ℂ^d_1⊗…⊗ℂ^d_n, it trivially also satisfies the “possibility of state estimation” assumption, since knowing the value of its OPFs on an ensemble determines the value of any OPF (as there are no other OPFs in this counterexample). Third, there is little or no intellectual value in mechanically extending a system of postulates until one excludes all known alternative measurement rules. We would not gain much insight from showing that the quantum measurement postulates are derivable from a set of rules specifically tailored to exclude any others. We need to look at least as critically at any proposed set of assumptions as at any proposed alternative measurement rules. And in fact, even MGM's existing assumptions make problematically arbitrary choices that are hard to motivate. We review these next. 
§.§ Measurements with infinitely many outcomes MGM's Eqn. (<ref>) does not clearly distinguish finite and infinite sums. As written, it appears to restrict full measurements to those with finitely many outcomes (as does the definition of full measurement for quantum mechanics, given after Eqn. (1) of Ref. <cit.>). On the other hand, MGM allow the dimension d of a system's state space to take the value d=∞, representing countable infinity. It would thus be consistent to allow n=∞, again representing countable infinity, in Eqn. (<ref>), since standard textbook quantum mechanics certainly allows this. Even this, though, is restrictive. For example, our state readout devices, expectation value readout devices and entropy meters all have uncountably many possible outcomes, even for states in finite-dimensional spaces. Standard quantum theory also allows POVMs with uncountably many outcomes in finite-dimensional spaces. It might be argued that this feature of the relevant devices is an artefact of the arguably unphysical assumption that measurement outputs can be given with infinite precision. Similarly, it might be argued that position measurements and other quantum measurements with uncountably many outcomes are unphysical idealisations. So, it might be argued that it is reasonable for a formalism for measurement postulates to restrict to countably many outcomes, as MGM appear to. There is even a case for restricting to finitely many outcomes, on the grounds that no realistic measurement can produce more than finitely many different results in a given finite time interval. In summary, MGM are unclear about whether they intend to allow infinite-outcome measurements. Their assumptions could be amended so as to explicitly exclude our infinite precision devices. We have defined finite precision versions of all these devices because there is an arguable physical case that any quantum or alternative measurement postulate should describe finite precision measurements. §.§ Definition of measurements via action on pure states MGM's definition of an OPF has a much more serious problem. It tacitly assumes that an OPF, and hence a full measurement, is determined by its action on pure states. As the post-quantum measurement postulates above illustrate, this is neither necessary nor especially natural. For example, the state function readout devices corresponding to the functions f ( ρ_1 ) = ρ_1^n, for positive integer n, all produce the same output (ρ_1 )^n = ρ_1 = |ψ_1⟩⟨ψ_1| for an input pure state | ψ_1 ⟩, but produce different outputs when ρ_1 is an improper mixed state. (A concrete two-qubit instance is given at the end of this subsection.) Similarly, universal entropy meters all produce output 0 when a pure state is input, but different outputs (depending on the input α) when ρ_1 is an improper mixed state. Our other examples either have similar properties or can be generalized so that they do. MGM do not justify their assumption that any conceivable type of measurement is necessarily determined by its action on pure states. We find it hard to see any good justification. Entanglement is a fundamental feature of the quantum formalism, whose existence follows directly from the structural postulates above. A purportedly theory-independent characterization of measurements ought to allow for measurements that are sensitive to entanglement in ways that standard quantum measurements are not. One might perhaps hope to rule this possibility out from other plausible assumptions, but ruling it out by fiat is surely unsatisfactory.
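As a concrete instance of the point about state function readout devices (the example is ours): take d_1 = 2 and suppose system 1 is prepared, jointly with a second qubit, in ψ = √(p) |00⟩ + √(1-p) |11⟩ with 0 < p < 1, so that ρ_1 = diag ( p , 1-p ) is an improper mixed state. The n=1 and n=2 function readout devices then output ρ_1 = diag ( p , 1-p ) and ρ_1^2 = diag ( p^2 , (1-p)^2 ) respectively, which differ for every 0 < p < 1, even though both devices output |ψ_1⟩⟨ψ_1| on every pure input state. The two devices thus define distinct measurements with identical action on pure states, which is precisely the possibility MGM's definition of an OPF excludes by fiat.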
§.§ Problems with MGM's “possibility of state estimation” assumption §.§.§ The concept of “state estimation” depends on the measurement postulates The logic behind MGM's characterization of their assumption appears to be as follows. Suppose we have a list of outcomes f^1 , … , f^k with the property MGM claim is required for state estimation, i.e., that knowing their value on any ensemble ( ψ_r , p_r ) allows us to determine the value of any other OPF g ∈𝔽_d on the ensemble ( ψ_r , p_r ). We can then generate a finite list of full measurements M^1 , … , M^k, where M^i has outcomes f^i , g^i_1 , … , g^i_k_i and f^i + ∑_j=1^k_i g^i_j = I . (It is possible that g^i_j = f^k for some values of i,j,k; it is also possible that M^i = M^j for some values of i,j. This does not affect our argument, though it may make some of the measurements in the procedure described below redundant.) Carrying out measurement M^i repeatedly (say N^i times) on a given ensemble ( ψ_r , p_r ) allows a statistical estimation of the probability P_i = ∑_r p_r f^i ( ψ_r ) of obtaining outcome f^i on the ensemble. Knowing these probabilities precisely for all the f^i would allow a precise determination of the value of any other OPF g on the ensemble, and hence would determine the equivalence class of the ensemble. Hence one might think that knowing the probabilities to good precision allows the value of any other OPF g on the ensemble to be estimated to good precision, and hence determines the equivalence class of the ensemble to good precision. But does this last point follow? In standard quantum theory, we can indeed sketch an argument for it, as follows. Knowing the P_i precisely determines the value of any OPF on the ensemble ( ψ_r , p_r ), and hence determines the density matrix ρ = ∑_r p_r |ψ_r ⟩⟨ψ_r | that determines the equivalence class of the ensemble. We know that density matrices in d dimensions can be described with k= (d^2 -1 ) parameters. Moreover we know that P_i = Tr( ρ f^i ) depends linearly on f^i. Without loss of generality (removing redundant outcomes f^j from the given list if necessary, i.e. if the list is linearly dependent), we can thus take k= (d^2 -1 ). Standard statistical tests should, with high confidence, give an estimate in the form P^e_i - ϵ_i < P_i < P^e_i + ϵ_i, where P^e_i is the frequency of the outcome corresponding to f^i in the N^i trials, and ϵ_i is a suitable multiple of the standard deviation. The vector function P ( ρ ) = ( Tr( ρ f^1 ) , … , Tr( ρ f^k )) is a continuous, differentiable, linear and invertible function of ρ. Hence, the assumption that all the statistical estimates are valid (which will be true with high confidence given suitable choices of the ϵ_i) restricts ρ to a subset B of the set of density matrices on ℂ^d, where F(ρ ,σ )= ( Tr√(√(ρ)σ√(ρ)) )^2 > 1 - ϵ for all σ∈ B and the parameter ϵ can be made arbitrarily small with suitably large N^i. Thus we can obtain an estimate σ of ρ with fidelity F(ρ, σ ) arbitrarily close to 1, and with arbitrarily high confidence, by choosing the N^i to be sufficiently large. Several of these points need not necessarily hold true for alternative measurement postulates, however. Suppose some alternative measurement postulates hold, and we are given a list of measurement outcomes f^1 , … , f^k with the specified property: i.e. that knowing their value on any ensemble ( ψ_r , p_r ) allows us to determine the value of any other OPF g ∈𝔽_d on the ensemble ( ψ_r , p_r ).
We can again generate a finite list of full measurements M^1 , … , M^k, where M^i has outcomes f^i , g^i_1 , … , g^i_k_i and f^i ( ψ ) + ∑_j=1^k_i g^i_j (ψ ) = 1 for all pure states ψ. For an ensemble ( ψ_r , p_r ), this gives us ∑_r p_r ( f^i ( ψ_r ) + ∑_j=1^k_i g^i_j (ψ_r )) = 1 . We write P_i = ∑_r p_r f^i (ψ_r ) . By assumption, knowing the P_i precisely determines the value of any OPF on the ensemble ( ψ_r , p_r ). Carrying out measurement M^i repeatedly (say N^i times) on a given ensemble ( ψ_r , p_r ) allows us to make an estimate P_i^e of the probability P_i = ∑_r p_r f^i ( ψ_r ) of obtaining outcome f^i on the ensemble. Since knowing the probabilities P_i precisely would determine the value of any other OPF g on the ensemble, it would determine the equivalence class (call it E) of the ensemble. However, it does not necessarily follow that knowing the probabilities to good precision allows the value of any other OPF g on the ensemble to be estimated to good precision, and hence determines the equivalence class of the ensemble to good precision. We cannot (within MGM's framework) assume that the equivalence class of the ensemble is represented by the density matrix ρ = ∑_r p_r |ψ_r ⟩⟨ψ_r |, nor that it can be described by k= (d^2 -1 ) or any other specific number of parameters, nor even that it is a closed or connected subspace in ℝ^n for some n. We cannot assume that P ( (ψ_r , p_r ) ) = ( f^1 ( ( ψ_r , p_r ) ) … f^k ( ( ψ_r , p_r ) ) ) is a linear, differentiable or continuous function on the space of equivalence classes, nor even that the space is such that these terms are well defined. We cannot assume that the P^e_i correspond to any valid equivalence class of ensembles. Even if they do correspond to an equivalence class E', we cannot assume that there is some fidelity surrogate FS defined on the equivalence classes, with the property that FS(E', E) → 1 as the N^i →∞. In short, under alternative measurement postulates, there need not necessarily be any notion of state estimation resembling that implied by quantum measurement postulates. Even if there is such a notion, MGM's “possibility of state estimation” assumption does not necessarily imply that state estimation is possible. §.§.§ State estimation is possible even when MGM's “possibility of state estimation” assumption fails State estimation with infinite precision readout devices Consider a system described by the finite-dimensional space ℂ^d. Suppose that the quantum measurement postulate holds, as does the post-quantum measurement postulate defined by some version of the state readout devices discussed above. Suppose that no types of measurement other than quantum measurements and state readout measurements are possible. Suppose first that we allow infinite precision state readout measurements. These define an uncountably infinite set of OPFs { f_ψ : ψ∈ℂ^d }, defined on pure states ϕ∈ℂ^d by f_ψ ( ϕ ) = 1 if ϕ= ψ , 0 otherwise . For an ensemble E = (ψ_r , p_r ) this gives f_ψ ( E ) = p_r if ψ = ψ_r for some r , 0 otherwise . We also know there is a finite list of quantum measurement outcomes f^1 , … , f^d^2 - 1 whose values on any ensemble E = (ψ_r , p_r ) determine the value of all quantum measurement outcomes on that ensemble. If the ensemble is a single pure state, E= ( ϕ , 1 ), then the values of f^1 , … , f^d^2 - 1 determine ϕ. They thus also determine the values of all the OPFs f_ψ, from (<ref>). 
If MGM's “possibility of state estimation” assumption applied only to pure states, rather than ensembles, it would thus be satisfied. However, for a general ensemble E = (ψ_r , p_r ), the values of f^1 , … , f^d^2 - 1 determine the density matrix ρ = ∑_r p_r |ψ_r ⟩⟨ψ_r | but not the specific ensemble E. Suppose that ρ has maximal rank d. It is the density matrix of (infinitely) many different ensembles, and every pure state ψ belongs to some but not all of these ensembles.[In fact, every ψ does not belong to a generic finite or countable ensemble represented by ρ.] Define p_F (ψ ) = p if the ensemble F includes (ψ, p ) (i.e. includes the state ψ with probability p), and p_F ( ψ) = 0 otherwise. For every pure state ψ, the values p_F ( ψ), for ensembles F whose density matrix is ρ, range over a finite interval [0 , p_F^ max ( ψ ) ]. The values of f^1 , … , f^d^2 - 1 on E thus do not determine the values of any of the f_ψ on E. Now consider any finite list of outcomes that includes the quantum measurement outcomes f^1 , … , f^d^2 - 1 together with some list f_ϕ_1 , … , f_ϕ_l of our post-quantum measurement outcomes. A generic finite ensemble E = (ψ_r , p_r ) will include none of the ϕ_i, so that f_ϕ_i (E) = 0 for all i. There are infinitely many finite ensembles E' with the same density matrix ρ as E that also include none of the ϕ_i. The values of f^1 , … , f^d^2 - 1 and f_ϕ_1 , … , f_ϕ_l do not distinguish among these ensembles. Hence there is no finite list of outcomes whose value on an ensemble E allows us to determine the values of generic f_ψ on E. That is, MGM's “possibility of state estimation” assumption fails for this combination of quantum and post-quantum measurement postulates. There are two significant issues here. First, as noted earlier, while the “possibility of state estimation” fails for ensembles, it holds for pure states. The reason that it fails for ensembles is that, because a state readout device allows ensembles (not just density matrices) to be distinguished, MGM's version of “state estimation” requires the description of an ensemble to be inferrable from finitely many state readout outcomes, which is impossible. It is not clear, though, that the requirement is reasonable. It is reasonable to base a measurement postulate on the measurement of pure states, which play a fundamental role in any theory based on the quantum formalism. It is not obvious why a measurement postulate should be based on measurements of proper mixtures, which need not necessarily have any fundamental status. For the quantum measurement postulate, the distinction does not matter: if a finite set of measurement outcomes determines the value of all quantum measurement outcomes on all pure states, it also does on all mixed state density matrices. MGM's approach either assumes the distinction will not matter for any possible alternative measurement postulate (which we have seen is false) or assumes a fundamental role for proper mixtures without discussion or clear justification. Second, even if we accept, for the sake of discussion, that it is reasonable to base a measurement postulate on the implications for measuring proper mixtures, it seems wrong to suggest that the postulate characterizes the possibility of state estimation. Specifically, it seems wrong to suggest that state estimation is not possible in the example under discussion, which combines quantum measurements with infinite precision state readout devices. 
We know that quantum measurements suffice to estimate the density matrix ρ of a proper mixture, in the sense that, by carrying out sufficiently many quantum measurements, we can obtain an estimate ρ_e such that (with very high probability) F( ρ_e , ρ ) > 1 - ϵ, for any given ϵ > 0. But also, if we apply an infinite precision state readout device repeatedly to a finite ensemble E= (ψ_i , p_i ), we can obtain an estimate E_e = (ψ^e_i, p^e_i ), where {ψ^e_i }⊆{ψ_i }. Write p^e_i = 0 if ψ_i is not included in the states of E_e. If we repeat the operation sufficiently often, we can ensure (with very high probability) that ∑_i | p_i - p^e_i | < ϵ, for any given ϵ > 0, and hence that all outcome probabilities can be estimated to within ϵ. This is a natural criterion for estimating a proper mixed state defined by a finite ensemble, paralleling the procedure and result obtainable for quantum measurements. A similar discussion applies to estimating a countably infinite ensemble E=(ψ_i , p_i )_i=1^∞. For any ϵ >0, we can ensure that (with arbitrarily high probability) all outcome probabilities p_i > ϵ are estimated to within (say) ϵ /2, by repeating the readout device operation sufficiently often. This gives us an estimate for E in the form of a finite ensemble E_e = (ψ^e_i, p^e_i ), where p^e_i > 0 and {ψ^e_i }⊆{ψ_i }, with the properties that (i) ψ_i ∈ E_e if p_i > ϵ and (ii) | p^e_i - p_i | < ϵ/2. This is a natural criterion for estimation of a proper mixed state defined by a countably infinite ensemble. State estimation with finite precision readout devices A similar analysis applies if we consider quantum measurements together with (only) the post-quantum measurement postulate defined by finite precision readout devices. Suppose we apply a finite precision readout device repeatedly to a finite ensemble E=(ψ_i , p_i )_i ∈ I, and set the precision to increase suitably (for example, by one for each measurement). We can then obtain an estimate E_e = (ψ^e_i, p^e_i )_i ∈ I', where I' ⊆ I. Again we write p^e_i = 0 if i ∈ I ∖ I'. Then, if we repeat the operation sufficiently often, we can ensure (with very high probability) that | ψ^e_i - ψ_i | < ϵ for all i ∈ I' and that ∑_i | p_i - p^e_i | < ϵ, for any given ϵ > 0. Hence the states in the ensemble with probability larger than ϵ, and their probabilities, can both be estimated to within ϵ. This is a natural criterion for proper mixed state estimation in a finite precision context, again paralleling the procedure and result for quantum measurements. As above, a similar discussion applies to estimating a countably infinite ensemble. State estimation with state overlap devices For a close to 1, a state overlap device SOD ( ϕ , a ) reports whether an input pure state ψ is close to ϕ in the sense that Tr( P_ϕ P_ψ ) = | ⟨ϕ|ψ⟩ |^2 > a. For any given a < 1, we can find a finite set of ψ_i such that for any pure state ψ at least one ψ_i satisfies | ⟨ψ_i |ψ⟩ |^2 > a. Hence, by repeated use of state overlap devices on a finite ensemble E= (ψ_i , p_i ), with suitable choices of ϕ and with values of a tending to 1, we can obtain an estimate E_e = (ψ^e_i, p^e_i )_i ∈ I', in a similar way to that obtained by finite precision readout devices, and satisfying the same estimation criterion. A similar analysis applies to smoothed state overlap devices. §.§.§ Devices satisfying the “possibility of state estimation” State estimation with entropy meters Consider again a system described by the finite-dimensional space ℂ^d.
Suppose now that the quantum measurement postulate holds, as does the post-quantum measurement postulate defined by one of the entropy meters discussed above. Suppose that no types of measurement other than quantum measurements and entropy meter output measurements are possible. This example illustrates again the unnaturalness of (i) assuming that measurement outcome functions are uniquely defined by their action on pure states, and (ii) considering their action on proper mixed states in defining an assumption intended to characterize the possibility of state estimation. Given an ensemble ( ψ_r , p_r ) of pure states in ℂ^d, an entropy meter outputs the entropy of the state actually presented. Since that state is one of the pure states ψ_r, the meter always outputs 0. The “possibility of state estimation” assumption is thus satisfied, since a list of (d^2 - 1) quantum measurement outcomes suffices to determine the output of all quantum measurement outcomes on a given ensemble, and also (trivially) determines the output of an entropy meter. Similar comments apply to entropy certifiers and smoothed entropy certifiers. A universal entropy certifier applied to an ensemble ( ψ_r , p_r ) of pure states in ℂ^d produces output 0. A smoothed UEC produces output 1 with probability 1 / (1 + exp ( k E)) and output 0 with probability exp (kE) / (1 + exp ( k E)). State estimation with stochastic eigenvalue readout devices Consider again a system described by the finite-dimensional space ℂ^d. Suppose now that the quantum measurement postulate holds, as does the post-quantum measurement postulate defined by the stochastic eigenvalue devices discussed above. Suppose that no types of measurement other than quantum measurements and stochastic eigenvalue readout measurements are possible. The outcomes and outcome probabilities for a stochastic eigenvalue readout device with input A (a hermitian operator) are the same as those for a quantum measurement of the observable A. Hence they are determined by any list of (d^2 -1 ) quantum measurement outcomes that suffices to determine the proper mixed state ρ_1. This example thus satisfies MGM's “possibility of state estimation” assumption. It also satisfies a version of the assumption that applies to improper mixed states. State estimation with expectation value readout devices Consider again a system described by the finite-dimensional space ℂ^d. Suppose now that the quantum measurement postulate holds, as does the post-quantum measurement postulate defined by the expectation value devices discussed above. Suppose that no types of measurement other than quantum measurements and expectation value output measurements are possible. For any observable A, the output of an expectation value device, Tr(A ρ_1 ), is determined by the proper mixed state ρ_1. Hence it is determined by any list of (d^2 -1 ) quantum measurement outcomes that suffices to determine the proper mixed state ρ_1. This example thus satisfies MGM's “possibility of state estimation” assumption. It also satisfies a version of the assumption that applies to improper mixed states. § DISCUSSION We have highlighted several problems with MGM's analysis. The most salient point is that their claimed derivation of the quantum measurement postulates from structural postulates is incorrect. Their approach also has other theoretically significant defects.
They ignore the crucial role of entanglement in quantum theory by assuming that measurement postulates can be framed in terms of the action of measurements on pure states of a subsystem. They assume a fundamental role for proper mixed states, i.e., probabilistic ensembles of pure states, in formulating quantum theory and defining measurement postulates. They assume that, in any alternative to quantum theory, proper mixed states must fall into large equivalence classes (analogous to, although not necessarily taking the form of, equivalence classes defined by mixed state density matrices in quantum theory). They assume that any alternative to quantum theory must satisfy a particular property that allows state estimation in quantum theory, even though this property is not generally aligned with natural notions of state estimation in general theories. Each of these assumptions restricts the allowed types of measurement towards standard quantum measurements. None of them is reasonable in an open-minded analysis of the possibility of alternatives. Of course, standard quantum measurement theory has some elegant features. Quantum measurements on a subsystem can generally be treated, up to a point, as unitary quantum evolution of a larger system. This allows the Heisenberg cut to be shifted at will, at least in a range between mesoscopic subsystems and conscious observers, without significantly affecting the predictions. This has even encouraged some to try eliminating measurement postulates altogether via some Everettian approach <cit.>. Critics have pointed out several problems with Everettian ideas (see e.g. <cit.> for some discussion). However, after nearly a hundred years, we have no empirical evidence for any alternative to standard quantum measurements. Still, standard quantum measurement theory has problems. We seem to see definite measurement results, which suggests that the Heisenberg cut cannot be shifted beyond the point of conscious observation: hence the quantum measurement problem. Another and perhaps sharper way of highlighting this issue is that we frame quantum theory as a mathematically precise dynamical theory, and understand it as a probabilistic theory, but have no mathematically precise definition of its sample space. We do not know if gravity is quantum. If not, quantum theory has a definitely limited domain. A theory that somehow combines a classical description of space-time with quantum matter would necessarily imply, inter alia, that space-time defines some forms of measurement on quantum matter. Such measurements need not necessarily be standard quantum measurements. There are also puzzles elsewhere, most notably in cosmology, which suggest we may not fully understand the quantum evolution of the universe, or possibly that standard quantum theory does not describe its evolution <cit.>. And then there is consciousness. Clearly consciousness involves some form of quantum measurement: we consciously access information about quantum systems. Precisely how this works is mysterious, as is the whole relationship between consciousness and physics (see e.g. <cit.> for some discussions). There are also tensions between some arguably natural-seeming assumptions about conscious perceptions and quantum theory (see e.g. <cit.>). None of these puzzles show that the standard quantum measurement postulates are definitely inadequate. Nor do they point to specific alternative postulates that would definitely resolve them, as far as we can currently see. 
But they do make a clear case for keeping an open mind, and for spending more time looking for interesting empirically testable alternatives to quantum theory and less on efforts to show it is essentially unique. § ACKNOWLEDGEMENTS I gratefully acknowledge financial support from the UK Quantum Communications Hub grant no. EP/T001011/1. This work was also supported by an FQXi grant and by Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Research and Innovation. § REFERENCES
http://arxiv.org/abs/2307.04231v1
20230709171314
Mx2M: Masked Cross-Modality Modeling in Domain Adaptation for 3D Semantic Segmentation
[ "Boxiang Zhang", "Zunran Wang", "Yonggen Ling", "Yuanyuan Guan", "Shenghao Zhang", "Wenhui Li" ]
cs.CV
[ "cs.CV" ]
Existing methods of cross-modal domain adaptation for 3D semantic segmentation predict results only via 2D-3D complementarity that is obtained by cross-modal feature matching. However, because supervision is lacking in the target domain, this complementarity is not always reliable. The results are not ideal when the domain gap is large. To solve the problem of lacking supervision, we introduce masked modeling into this task and propose a method Mx2M, which utilizes masked cross-modality modeling to reduce the large domain gap. Our Mx2M contains two components. One is the core solution, cross-modal removal and prediction (xMRP), which makes the Mx2M adapt to various scenarios and provides cross-modal self-supervision. The other is a new way of cross-modal feature matching, the dynamic cross-modal filter (DxMF), which ensures that the whole method dynamically uses more suitable 2D-3D complementarity. Evaluation of the Mx2M on three DA scenarios, including Day/Night, USA/Singapore, and A2D2/SemanticKITTI, brings large improvements over previous methods on many metrics. § INTRODUCTION 3D semantic segmentation methods <cit.> often encounter the problem of shift or gap between different but related domains (e.g. day and night). The task of cross-modal domain adaptation (DA) for 3D segmentation <cit.> is designed to address this problem and is inspired by the fact that 3D datasets usually contain both 2D and 3D modalities. Like most DA tasks, labels here are only available in the source domain, whereas the target domain has no segmentation labels. Existing methods, i.e. xMUDA <cit.> and its heirs <cit.>, extract 2D and 3D features through two networks and exploit the cross-modal complementarity by feature matching to predict results. However, since supervision is lacking in the target domain, this complementarity is not robust. As shown in the left part of Fig.<ref>, if the domain gap is large and both networks underperform on the target domain, these methods perform poorly. The problem of lacking supervision once constrained visual pre-training and has been solved by methods with masked modeling <cit.>, which has been shown to act as a form of data augmentation <cit.>. Its core solution is simple: removing a portion of the inputs and learning to predict the removed contents. Models are fitted with sufficient data in this way, so that they learn more internal semantic correspondences and realize self-supervision <cit.>. For this DA task, such data augmentation and the resulting self-supervision can improve robustness and reduce the gap. Hence the idea is natural: introducing masked modeling into the task addresses the lack of supervision in the target domain and, in turn, the large gap. Nevertheless, two problems are key to introducing masked modeling. a) The core solution ought to be re-designed to fit this task, which involves two modalities. b) For cross-modal feature matching, we should explore a new way that suits the introduction of masked modeling. Given these observations, we propose a new method Mx2M utilizing masked cross-modality modeling to solve the problem of lacking supervision for the DA of 3D segmentation. Our Mx2M can reduce the large domain gap by adding two new components to the common backbone for this task, which correspond to the above two problems.
For the first one, we design the core solution in the Mx2M, cross-modal removal and prediction (xMRP). As the name implies, we inherit the 'removal-and-prediction' proceeding in the core solution of masked single-modality modeling and improve it with the cross-modal working manner for this task. During removal, the xMRP has two changes. i) Our CNN backbone cannot perform well with highly destroyed object shapes <cit.>, so the masked portion is less. ii) To guarantee the existence of full semantics in this segmentation task, we do not mask all inputs and ensure at least one modality complete in each input. We can obtain the different xMRP by controlling the removal proceeding, which makes the Mx2M adapt to various DA scenarios. During prediction, to learn more 2D-3D correspondences beneficial to networks <cit.>, we mask images/points and predict the full content in points/images by two new branches. In this way, cross-modal self-supervision can be provided for the whole method. As for the second problem, we propose the dynamic cross-modal filter (DxMF) to dynamically construct the cross-modal feature matching by locations, which is inspired by impressive gains when dynamically establishing kernel-feature correspondences in SOLO V2 <cit.>. Similarly, in our DxMF, we structure the 2D-3D kernel-feature correspondences. Kernels for one modality are generated by features from the other, which then act on features for this modality and generate the segmentation results by locations. With the joining of the DxMF, the Mx2M can dynamically exploit the complementarity between modalities. As is shown in the right part of Fig.<ref>, with these two components, our Mx2M gains good results even in the scenario with a large domain gap. To verify the performance of the proposed Mx2M, we test it on three DA scenarios in <cit.>, including USA/Singapore, Day/Night, and A2D2/SemanticKITTI. Our Mx2M attains better results compared with most state-of-the-art methods, which indicates its effectiveness. In summary, our main contributions are as follows: * We innovatively propose a new method Mx2M, which utilizes masked cross-modality modeling to reduce the large domain gap for DA of 3D segmentation. To our knowledge, it is the first time that masked modeling is introduced into a cross-modal DA task. * Two components are specially designed for this task, including xMRP and DxMF, which ensures the Mx2M effectively works and deals with various scenarios. * We achieve high-quality results on three real-to-real DA scenarios, which makes the Mx2M the new state-of-the-art method. The good results demonstrate its practicality. § RELATED WORK Domain Adaptation for 3D Segmentation.Most works pay attention to DA for 2D segmentation <cit.>, which are hard to be applied to unstructured and unordered 3D point clouds. The DA methods for 3D segmentation <cit.> are relatively few, but they also do not fully use the datasets that often contain both images and points. Hence, xMUDA <cit.> and its heirs <cit.> with cross-modal networks are proposed, which achieve better adaptation. Our Mx2M also adopts cross-modal networks, which has the same backbone as xMUDA. Masked Modeling.The masked modeling was first applied as masked language modeling <cit.>, which essentially belongs to data augmentation <cit.>. Nowadays, it has been the core operation in self-supervised learning for many modalities, such as masked image modeling <cit.>, masked point modeling <cit.>, and masked speech modeling <cit.>. 
Their solutions are the same: removing a portion of the data and learning to predict the removed content. The models are fitted with sufficient data in this way so that the lacking of supervision is satisfied. Our Mx2M designs the masked cross-modality modeling for DA in 3D segmentation that uses point and image. Cross-modal Learning. Cross-modal learning aims at taking advantage of data from multiple modalities. For visual tasks, the most common scene using it is learning the 3D task from images and point clouds <cit.>. The detailed learning means are various, including 2D-3D feature matching <cit.>, 2D-3D feature fusion <cit.>, 2D-3D cross-modal supervision <cit.>, etc. Besides, there are also some works conducting cross-modal learning on other modalities, such as video and medical image <cit.>, image and language <cit.>, as well as video and speech <cit.>. Cross-modal learning is also exploited in our M2xM: the core procedure xMRP leverages the cross-modal supervision, while the DxMF works in the way of 2D-3D feature matching. § METHOD Our Mx2M is designed for DA in 3D segmentation assuming the presence of 2D images and 3D point clouds, which is the same as xMUDA <cit.>. For each DA scenario, we define a source dataset 𝒮, each sample of which contains a 2D image X^2D,S, a 3D point cloud X^3D,S, and a corresponding 3D segmentation label Y^3D,S. There also exists a target dataset 𝒯 lacking annotations, where each sample only consists of image X^2D,T and point cloud X^3D,T. The images and point clouds in 𝒮 and 𝒯 are in the same spatial sizes, i.e. X^2D∈ℝ^H×W× 3 and X^3D∈ℝ^N× 3. Based on these definitions, we will showcase our Mx2M. §.§ Network Architecture The architecture of the Mx2M is shown in Fig.<ref>. For a fair comparison with previous methods <cit.>, we also use the same backbone to extract features: a SparseConvNet <cit.> for the 3D network and a modified version of U-Net <cit.> with ResNet-34 <cit.> pre-trained on ImageNet <cit.> for the 2D one. Their output features, H^2D and H^3D, have the same length N equaling the number of 3D points, where H^2D is gained by projecting the points into the image and sampling the 2D features at corresponding pixels. H^2D and H^3D are then sent into two groups of the same three heads, each group of which is for one modality. During these heads, the ones that predict masked 2D/3D contents M^2D → 3D and M^3D → 2D belong to xMRP. We will introduce them and the proceeding of masking inputs in Sec.<ref>. Besides them, the other heads all participate in feature matching. The heads that predict final segmentation results P^2D and P^3D are our DxMFs (detailed in Sec.<ref>). The heads that mimick the outputs from cross-modality are the linear layers inherited from xMUDA <cit.>, where the outputs are P^2D → 3D and P^3D → 2D. As for the information flow, we illustrate it in Fig.<ref>(b). The whole network is alternately trained on the source and the target domain. When the models are trained on the source domain, all six heads work. The heads for xMRP are respectively self-supervised by the origin image/point. The two DxMF heads that predict the segmentation results are both supervised by Y^3D,S. The two mimicking heads are internally supervised by the outputs from the cross-modal DxMF heads (e.g. P^3D → 2D supervised by P^2D). When the models are trained on the target domain, the DxMFs heads cannot be supervised because of lacking annotations. The other heads normally work as above. 
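To make the information flow above concrete, the sketch below shows one alternating source/target iteration in PyTorch-style pseudocode. All module and dictionary names (backbone_2d, seg_head_2d, batch["points_full"], the loss weights lam_xm and lam_mrp, and so on) are placeholders of ours rather than the released implementation, and the point-to-pixel sampling is hidden inside the 2D backbone; the only point is to show which heads receive which supervision on each domain.

```python
import torch
import torch.nn.functional as F

def train_step(batch_src, batch_tgt, nets, optimizer, lam_xm=0.1, lam_mrp=1.0):
    """One alternating source/target iteration (illustrative sketch only).

    `nets` is assumed to bundle the 2D/3D backbones, the two DxMF segmentation
    heads, the two mimicking heads, and the two xMRP prediction heads.
    """
    optimizer.zero_grad()
    loss = 0.0

    for domain, batch in (("source", batch_src), ("target", batch_tgt)):
        x2d, x3d = batch["image"], batch["points"]      # inputs already masked by xMRP
        h2d = nets.backbone_2d(x2d, batch["uv"])        # (N, F2d), sampled at projected points
        h3d = nets.backbone_3d(x3d)                     # (N, F3d)

        p2d = nets.seg_head_2d(h2d, h3d)                # DxMF: kernels come from the other modality
        p3d = nets.seg_head_3d(h3d, h2d)
        p2d_to_3d = nets.mimic_head_2d(h2d)             # mimics the 3D segmentation output
        p3d_to_2d = nets.mimic_head_3d(h3d)             # mimics the 2D segmentation output
        m2d_to_3d = nets.xmrp_head_2d(h2d)              # reconstructs the full 3D content
        m3d_to_2d = nets.xmrp_head_3d(h3d)              # reconstructs the full 2D content

        if domain == "source":                          # segmentation labels exist only here
            y = batch["label"]
            loss = loss + F.cross_entropy(p2d, y) + F.cross_entropy(p3d, y)

        # Cross-modal mimicking (KL divergence), active on both domains.
        loss = loss + lam_xm * (
            F.kl_div(F.log_softmax(p2d_to_3d, dim=1), F.softmax(p3d.detach(), dim=1),
                     reduction="batchmean")
            + F.kl_div(F.log_softmax(p3d_to_2d, dim=1), F.softmax(p2d.detach(), dim=1),
                       reduction="batchmean"))

        # xMRP self-supervision (L2 against the unmasked inputs), active on both domains.
        loss = loss + lam_mrp * (F.mse_loss(m2d_to_3d, batch["points_full"])
                                 + F.mse_loss(m3d_to_2d, batch["pixels_full"]))

    loss.backward()
    optimizer.step()
    return float(loss)
```

On the source domain all six heads contribute; on the target domain only the mimicking and xMRP terms remain, which is exactly the self-supervision that masked cross-modality modeling adds.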
The loss functions of segmentation and mimicking heads are the same as previous methods <cit.> for convenience, where the positions are like in Fig.<ref>(b). The CE(·) and KL(·) are loss functions of cross-entropy and KL divergence, respectively. §.§ xMRP The core solution of the Mx2M, xMRP, removes a portion of the data in one modality and learns to predict the full content in the other one, which is related but different from the core solution in masked single-modality modeling. As the name implies, this procedure is divided into two steps. For the step of removal, we randomly select some patches of the image/points and mask them inspired by the way in MAE <cit.>. Considering that 3D points are hard to mask by patches, we first project them into the image. We use two hyper-parameters to control the masking proceeding: the p indicating the size of each patch, and the mr representing the masking ratio of the whole image/points (i.e. masking mr of all patches). The mr cannot be as high as that in <cit.> because the CNN backbone in our method cannot perform well if the shape of objects is highly destroyed <cit.>. Besides, due to our segmentation task, the inputs cannot always be masked and at least one modality is complete to guarantee the existence of full semantics. Thus we use another two hyper-parameters to define the ratio when masking each modality: m_2D meaning the ratio when masking 2D and m_3D indicating when masking 3D (i.e. masking images at times of m_2D, masking points on times of m_3D, and no masking when (1-m_2D-m_3D)). We can control the inputs by (p, mr, m_2D, m_3D) to make the model adapt to different DA scenarios. As is shown in Fig.<ref>, X^2D and X^3D processed by these hyper-parameters (denoted as the new X^2D and X^3D) are sent into the networks as inputs. The next step is the cross-modal prediction that provides self-supervision. Inspired by the conclusion in <cit.> about the good effect of MLP on unsupervised tasks, we use the same MLP heads with middle channels of 4096 for both 2D and 3D to generate the results M^2D→3D and M^3D→2D for 3D and 2D, respectively. Motivated by <cit.>, the losses are correspondingly calculated as follows: ℒ_2D=L_2(X^3D||M^2D→3D), and ℒ_3D=L_2(X^2D||M^3D→2D) . The X^3D means the original 3D point clouds. The X^2D indicates the sampled pixels when X^3D projects into the original image. L_2(·) signs the mean squared error. It is noteworthy that we predict the full contents rather than the removed ones in masked single-modality modeling. The model can learn more 2D-3D correspondences from non-masked parts because the masked modality is different from the predicted one, which is not available in methods of masked single-modality modeling. Herein we finish the core proceeding of our Mx2M. The (p, mr, m_2D, m_3D) are set as (16, 0.15, 0.2, 0.2), (4, 0.3, 0.1, 0.3), and (4, 0.25, 0.3, 0.1) for scenarios of USA/Singapore, Day/Night, and A2D2/SemanticKITTI, respectively. The experiments for USA/Singapore are reported in Sec.<ref> and the other two are in Appendix.A. Our network can learn sufficient 2D-3D correspondences on different DA scenarios in this way, which fixes the lacking of supervision and then reduces the domain gap. §.§ DxMF The whole network can learn more complementarity between modalities by feature matching, so it is still important for our Mx2M. 
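The DxMF mechanism, detailed in the next paragraph, can be sketched as follows. This is a minimal, hypothetical implementation under assumed tensor shapes (N points, per-modality feature widths, C classes), not the authors' code: a linear layer turns the other modality's features into one kernel per point, which is then applied point-wise to this modality's features to produce the class scores.

```python
import torch
import torch.nn as nn

class DxMFHead(nn.Module):
    """Sketch of a dynamic cross-modal filter: per-point kernels for one modality
    are generated from the features of the other modality.

    Assumed shapes: feats_self is (N, F_self), feats_other is (N, F_other),
    the output is (N, C) class scores for the N points.
    """
    def __init__(self, f_self, f_other, num_classes):
        super().__init__()
        # Linear layer mapping the other modality's features to per-point kernels.
        self.kernel_gen = nn.Linear(f_other, f_self * num_classes)
        self.num_classes = num_classes

    def forward(self, feats_self, feats_other):
        n, f_self = feats_self.shape
        # One (F_self x C) kernel per point, predicted from the other modality.
        kernels = self.kernel_gen(feats_other).view(n, f_self, self.num_classes)
        # The dynamic "convolution" reduces to a per-point matrix product here.
        return torch.einsum("nf,nfc->nc", feats_self, kernels)

# Example: 2D segmentation logits from 2D features filtered by 3D-generated kernels.
head_2d = DxMFHead(f_self=64, f_other=16, num_classes=5)
p2d = head_2d(torch.randn(1024, 64), torch.randn(1024, 16))   # (1024, 5)
```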
Inspired by SOLO V2 <cit.> which gains great progress compared with SOLO <cit.> via kernel-feature correspondences by locations, our DxMF constructs cross-modal kernel-feature correspondences for feature matching. The pipeline is shown in Fig.<ref>(a). Compared with simple final linear layers in xMUDA <cit.>, we use dynamic filters to segment the results. We make the procedure of segmenting the 2D results as an example to illustrate our DxMF and so do on 3D. The kernel weights W^2D∈ℝ^N× F^2D×C of the filter for 2D segmentation are generated from 3D features H^3D by a linear layer (similarly, W^3D∈ℝ^N× F^3D×C from H^2D). As the 2D features H^2D have a spatial size of (N,F^2D), the result of one point is got: P^2D_i=W^2D_i ∗ H^2D_i, where i ∈N . The ∗ indicates the dynamic convolution. We can get the segmentation results P^2D after all the P^2D_i joined together. As we dynamically construct the 2D-3D correspondences for feature matching, by which the model learns more suitable complementarity compared with the ways in previous methods <cit.>. We provide experiments on this comparison and ones on the scheme of the dynamic feature matching about other heads, where the results are shown in Sec.<ref>. § EXPERIMENTS §.§ Implementation Details Datasets.We follow three real-to-real adaptation scenarios in xMUDA <cit.> to implement our method, the settings of which include country-to-country, day-to-night, and dataset-to-dataset. The gaps between them raise. Three autonomous driving datasets are chosen, including nuScenes <cit.>, A2D2 <cit.>, and SemanticKITTI <cit.>, where LiDAR and camera are synchronized and calibrated. In this way, we can compute the projection between a 3D point and the corresponding 2D pixel. We only utilize the 3D annotations for segmentation. In nuScenes, a point falling into a 3D bounding box is assigned the label corresponding to the object, as the dataset only contains labels for the 3D box rather than the segmentation. The nuScenes is leveraged to generate splits Day/Night and USA/Singapore, which correspond to day-to-night and country-to-country adaptation. The other two datasets are used for A2D2/SemanticKITTI ( i.e. dataset-to-dataset adaptation), where the classes are modified as 10 according to the alignments in <cit.>. Metrics.Like other segmentation works, the mean intersection over union (mIoU) is adopted as the metric for evaluating the performance of the models (both 2D and 3D) for all datasets. In addition, we follow the new mIoU calculating way in <cit.>, which jointly considers both modalities and is obtained by taking the mean of the predicted 2D and 3D probabilities after softmax (denoted as 'Avg mIoU'). Inputs & Labels.For easily conducting masked modeling, we resize images into the sizes that could be divisible by p. The images in nuScenes (i.e. Day/Night and USA/Singapore) are resized as 400×224, whereas the ones in A2D2 and SemanticKITTI are reshaped as 480×304. All images are normalized and then become the inputs/labels of the 2D/3D network. As for points, a voxel size of 5cm is adopted for the 3D network, which is small enough and ensures that only one 3D point lies in a voxel. The coordinates of these voxels are adopted as the labels for the 2D network. Training.We use the PyTorch 1.7.1 framework on an NVIDIA Tesla V100 GPU card with 32GB RAM under CUDA 11.0 and cuDNN 8.0.5. For nuScenes, the mini-batch Adam <cit.> is configured as the batch size of 8, β_1 of 0.9, and β_2 of 0.999. 
All models are trained for 100k iterations with the initial learning rate of 1e-3, which is then divided by 10 at the 80k and again at the 90k iteration. For the A2D2/SemanticKITTI, the batch size is set as 4, while related models are trained for 200k and so do on other configurations, which is caused by the limited memory. The models with '+PL' share the above proceeding, where segmentation heads are extra supervised with pseudo labels for the target dataset. As for these pseudo labels, we strictly follow the ways in <cit.> to prevent manual supervision, i.e. using the last checkpoints of models without PL to generate them offline. §.§ Ablation Studies To define the effectiveness of each component, we conduct ablation studies on them, respectively. As xMUDA <cit.> is the first method of cross-modal DA in 3D segmentation and is the baseline of all related methods <cit.>, we continue this habit and choose xMUDA as our baseline. By default, all results are reported based on the USA/Singapore scenario. For a fair comparison, we train models with each setting for 100k iterations with a batch size of 8. We also provide experiments on other scenarios, which are reported in Sec.6. §.§.§ Ablation on xMRP As mentioned in Sec.<ref>, in xMRP, we use four hyper-parameters (p, mr, m_2D, m_3D) to control the proceeding of masking inputs and two heads of MLP to predict the cross-modality. To validate the effectiveness of the masked cross-modality modeling strategy, we insert simple xMRPs into xMUDA. The (4, 0.15, 0.1, 0.1) are selected as the start point because of the low mask ratio and the low masking 2D/3D ratio, which are suitable for the task of segmentation. As for heads, we start from the simplest linear layers. The mIoU for (2D, 3D) in this setting are (60.0, 53.4), which are better than the segmentation results of (59.3, 52.0) in xMUDA. The good results demonstrate the significance of masked cross-modality modeling. We next explore the effectiveness of detailed settings. Ablation on Hyper-parameters. To determine the suitable input settings for the current scenario, we conduct ablation studies on (p, mr, m_2D, m_3D), respectively. We start from (4, 0.15, 0.1, 0.1) and first confirm p with fixed other numbers, where the mIoU of 2D and 3D are shown in Tab.<ref>(a). The networks gain the best metrics at p=16. The next job is to define mr, the results of which are illustrated in Tab.<ref>(b). Both metrics decrease with the raising of mr, but when mr=0.10 so do results. Hence the models have the best results when mr=0.15. Finally, we determine the m_2D and m_3D. As mentioned in Sec.<ref>, (1-m_2D-m_3D)>0 because of keeping the full semantics. We design plenty of combinations for these two hyper-parameters, where the details are shown in Tab.<ref>. The metrics are not good when m_2D and m_3D are too large, which matches the fact that our CNN backbones cannot integrate a high mask ratio like <cit.>. We get results of (61.4, 56.5) with suitable m_2D=0.2 and m_3D=0.2, and then appropriate hyper-parameters (16, 0.15, 0.2, 0.2) for the scenario. Ablation for Removal and Prediction. We obtain the results of (61.4, 56.5) with the simple linear layer. According to the conclusion in <cit.>, the network performs well when having an MLP layer. Therefore we compare the schemes of linear layer, a single MLP with mid channels of 4096, and two same MLPs with the 4096 mid channels. They are used to predict both modalities, where the results are shown in Tab.<ref>(c). A single MLP also does for our DA task. 
Besides, some other removal-prediction strategies are also attempted besides the cross-modal one. We illustrate the segmentation metrics in Tab.<ref>(a). We have tried respectively removing and predicting the content in single-modality (denoted as '2D+3D'), only in 3D point clouds, and only in 2D images. Here only removed portions are set as labels. We can see '2D+3D' has similar results as xMUDA <cit.>, because only rare patches work and bring about seldom information in this scheme. Similarly, the cross-modal scheme performs well thanks to 2D-3D correspondences from all contents, which is beneficial to this task <cit.>. Finally, we gain the (+2.0, +4.2) increase with our xMRP for 2D and 3D performance, respectively. §.§.§ Ablation on DxMF All above experiments are based on the same way of feature matching as xMUDA <cit.>, where the segmentation results are got based on two linear layers. We also conduct experiments on our DxMF, which achieves cross-modal feature matching and then the segmentation by dynamically constructing kernel-feature correspondences. The comparison is shown in the first two rows of Tab.<ref>(b), our DxMF performs the better 2D-3D complementarity and especially increases the 3D performance. We also try to combine the means of sparse-to-dense cross-modal feature matching, DsCML <cit.>, with the masked cross-modality modeling, where the metrics are illustrated in the last three rows in Tab.<ref>(c). The results with '†' or not denote that they are from the implementation of the official source code or from the paper. As our experiments are based on the official source code, we still gain the increase with the join of xMRP. In all, the good metrics prove the effectiveness of our DxMF. We also validate the results for only using DxMF, which is reported in the first two rows of Tab.<ref>(c). Besides, our Mx2M has three output heads for each modality according to Sec.<ref>. We also conduct experiments on DxMF on them besides the above experiments on prediction heads. Like adding DxMF to prediction heads, we add DxMF to other ones on both modalities. The results are reported in Tab.<ref>(c). Both mimicking heads and ones for xMRP do not match the DxMF. We may infer that the former is not involved in segmentation and the latter is like respective single-modal prediction in Tab.<ref>(a). Both situations are not suitable for our DxMF. After all experiments, our Mx2M outperforms the Baseline xMUDA (+4.8, +12.2) for 2D and 3D in total, which shows that the Mx2M does work. §.§ Limitations Considering previous works <cit.> attempt to introduce the adversarial learning (AL) into the DA in 3D semantic segmentation, we also add the extra heads for AL in both 2D and 3D. We use the simple AL in AUDA <cit.> and the CMAL in DsCML <cit.>. The results for 2D and 3D are not ideal, which are correspondingly (56.26, 51.76) and (49.75, 41.94) for AL in AUDA and CMAL. Compared with the metrics of (64.1, 64.2) in the scheme without AL, they decrease so much. We think it is the limitation in our Mx2M that our method does not match AL. §.§ Comparison with The State-of-the-art We evaluate our Mx2M on the above three real-to-real DA scenarios and compare the results with some methods. First, we train the backbones on source only and on target only (except on the Day/Night, where the batches of 50%/50% Day/Night are used to prevent overfitting). The two results can be seen as the upper and the lower limit of the DA effectiveness. Next, some representative uni-modal DA methods are compared. 
These uni-modal methods are correspondingly evaluated on U-Net with ResNet-34 for 2D and SparseConvNet for 3D, which are the same as our backbones. We use the results from <cit.> for convenience. Finally, We also compare our method with some cross-modal methods, including xMUDA <cit.>, AUDA <cit.>, and DsCML <cit.>. These cross-modal methods and our Mx2M are also trained with the data with pseudo labels on the target domain, where the proceeding can be seen in Sec.<ref>. All comparison results for 3D segmentation are reported in Tab.<ref>. We can see that the Mx2M gains the (2D mIoU, 3D mIoU) on average of (+5.4, +7.6) compared with the baseline xMUDA, which proves the DA performance of our method. Specifically, for the USA/Singapore scenario, the bare Mx2M even surpasses xMUDA with PL. In Day/Night, though the metric without PL looks normal, the result with PL shows a surprising increase that is close to the upper limit. As for the A2D2/SemanticKITTI, the Mx2M outperforms all methods on 2D and 3D metrics with a 0.9 less Avg mIoU compared to the DsCML. In total, our Mx2M gains state-of-the-art performance on most metrics. We also provide some visual results, which are shown in Fig.<ref>. More visual results can be seen in Sec.7. § CONCLUSION In this paper, we propose a method named Mx2M for domain adaptation in 3D semantic segmentation, which utilizes masked cross-modality modeling to solve the problem of lacking supervision on the target domain and then reduce the large gap. The Mx2M includes two components. The core solution xMRP makes the Mx2M adapts to various scenarios and provides cross-modal self-supervision. A new way of cross-modal feature matching named DxMF ensures that the whole method exploits more suitable 2D-3D complementarity and then segments results. We achieve state-of-the-art performance on three DA scenarios for 3D segmentation, including USA/Singapore, Day/Night, and A2D2/SemanticKITTI. Specifically, the Mx2M with pseudo labels achieves the (2D mIoU, 3D mIoU, Avg mIoU) of (67.4, 67.5, 67.4), (52.4, 56.3, 54.6), and (48.6, 53.0, 51.3) for the three scenarios. All the above results demonstrate the effectiveness of our method. § ABLATION STUDIES ON SCENARIOS OF DAY/NIGHT AND A2D2/SEMANTICKITTI We also conduct ablation studies on the scenarios of Day/Night and A2D2/SemanticKITTI. Similarly, the xMUDA <cit.> is also selected as the backbone. We train the models with each Day/Night setting for 100k iterations with a batch size of 8 and with each A2D2/SemanticKITTI setting for 200k iterations with a batch size of 4 because of limited resources. The proceedings are basically as those in USA/Singapore and as follows. §.§ Ablations on Day/Night We first validate the effectiveness of the masked cross-modality modeling strategy. The four hyper-parameters (p, mr, m_2D, m_3D) are set as (4, 0.15, 0.1, 0.1), which is the same as what we do for the USA/Singapore and for the same reason in Sec.4.2.1 of the body. As for the heads for predicting segmentation, we start from the simplest linear layers. The mIoU results for (2D, 3D) are (47.3, 45.9), which are better than the segmentation indexes of (46.2, 44.2) in our baseline xMUDA. The good results demonstrate the significance of the strategy for masked cross-modality modeling. We next explore the effectiveness of detailed settings. To determine the suitable input settings for the current scenario, we conduct ablation studies on (p, mr, m_2D, m_3D), respectively. 
We start from (4, 0.15, 0.1, 0.1) and first confirm p with fixed other numbers, where the mIoU of 2D and 3D are shown in Tab.<ref>(a). The start point is the most suitable one for this scenario, i.e. p=4. The next job is to define mr, the results of which are illustrated in Tab.<ref>(b). The models have the best results when mr=0.3. Finally, we determine the m_2D and m_3D. We design plenty of combinations for these two hyper-parameters, where the details are shown in Tab.<ref>. We get results of (48.9, 48.5) with suitable numbers, where m_2D=0.1 and m_3D=0.3, and then appropriate hyper-parameters (4, 0.3, 0.1, 0.3) for the scenario. We obtain the results of (48.9, 48.5) with the simple linear layer. According to the conclusion in <cit.>, the network performs well when having an MLP layer. Therefore we compare the schemes of linear layer, a single MLP with mid channels of 4096, and two same MLPs with the 4096 mid channels. They are used to predict both modalities, where the results are shown in Tab.<ref>(c). A single MLP also does for our DA task. Finally, the DxMF is added to the Mx2M, the results of which are illustrated in Tab.<ref>(d). With the join of the DxMF, we gain the mIoU of (49.7, 49.9) for the scenario of Day/Night. §.§ Ablations on A2D2/SemanticKITTI The proceeding of ablation studies on A2D2/SemanticKITTI is the same as what in Day/Night. First, the effectiveness of the masked cross-modality modeling strategy is validated. Our results are (37.9, 44.3) v.s. (36.8, 43.3) of the ones in xMUDA <cit.>. Next, we determine four hyper-parameters (p, mr, m_2D, m_3D), where the procedure is the same as the above. We report them in Tab.<ref>(a), Tab.<ref>(b), and Tab.<ref>, respectively. They are set as (4, 0.25, 0.3, 0.1) for the A2D2/SemanticKITTI scenario. We then define the prediction heads with results (41.5, 46.2), which is shown in Tab.<ref>(c). The MLP works again. Finally, we validate the effectiveness of the DxMF, the results of which are illustrated in Tab.<ref>(d). We gain the mIoU of (44.6, 48.2) for the A2D2/SemanticKITTI. § MORE VISUAL RESULTS ON THE MX2M As is mentioned in Sec.4.4 of the main body, we offer more visual results in Fig.<ref>. The images/points from the top row to the bottom correspondingly come from the A2D2/SemanticKITTI, USA/Singapore, and Day/Night scenarios, where every three rows belong to the same scenario. Our Mx2M has a balanced performance in all three scenarios.
http://arxiv.org/abs/2307.07447v2
20230714160923
Dynamical simulation of the injection of vortices into a Majorana edge mode
[ "I. M. Flor", "A. Donis Vela", "C. W. J. Beenakker", "G. Lemut" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall", "quant-ph" ]
Instituut-Lorentz, Universiteit Leiden, P.O. Box 9506, 2300 RA Leiden, The Netherlands Instituut-Lorentz, Universiteit Leiden, P.O. Box 9506, 2300 RA Leiden, The Netherlands Instituut-Lorentz, Universiteit Leiden, P.O. Box 9506, 2300 RA Leiden, The Netherlands Instituut-Lorentz, Universiteit Leiden, P.O. Box 9506, 2300 RA Leiden, The Netherlands The chiral edge modes of a topological superconductor can transport fermionic quasiparticles, with Abelian exchange statistics, but they can also transport non-Abelian anyons: Majorana zero-modes bound to a π-phase domain wall that propagates along the boundary. Such an edge vortex is injected by the application of an h/2e flux bias over a Josephson junction. Existing descriptions of the injection process rely on the instantaneous scattering approximation of the adiabatic regime, where the internal dynamics of the Josephson junction is ignored. Here we go beyond that approximation in a time-dependent many-body simulation of the injection process, followed by a braiding of the mobile edge vortex with an immobile Abrikosov vortex in the bulk of the superconductor. Our simulation sheds light on the properties of the Josephson junction needed for a successful implementation of a flying Majorana qubit. Dynamical simulation of the injection of vortices into a Majorana edge mode G. Lemut July 2023 =========================================================================== § INTRODUCTION A remarkable property of topological superconductors is that two vortices winding around each other exchange a quasiparticle <cit.>. This “braiding” operation is a manifestation of the non-Abelian statistics of the Majorana zero-modes bound to the core of an Abrikosov vortex <cit.>. Because Abrikosov vortices are immobile, typically pinned to defects, winding them is a thought experiment that is not easily implemented <cit.>. A proposal to mobilize vortices by injecting them into the chiral edge mode of a topological superconductor was suggested in Ref. Bee08. If this edge vortex overtakes a bulk vortex it will braid “by itself”, without requiring any external manipulation. The fermion parity switch can be detected electrically, as an e/2 charge pulse when a pair of edge vortices is fused in a normal metal contact <cit.>. The key element of the braiding device is the vortex injector (see Fig. <ref>): It consists of flux-biased Josephson junction, connecting co-propagating chiral edge modes. Application of a flux bias of h/2e increments the superconducting phase φ by 2π. For the fermionic edge mode this amounts to a π-phase domain wall <cit.>, which moves away from the junction with the Fermi velocity v. The injection process takes a finite time t_ inj, that translates into a finite width vt_ inj of the domain wall. Given a rate of change dφ/dt, a junction width W, and a superconducting coherence length ξ_ J one has t_ inj=(2πξ_ J/W)(dφ/dt)^-1. A major simplification of the theoretical description of the injection process arises if t_ inj is large compared to the propagation time W/v, so for a sufficiently slow rate of change dφ/dt≪2π vξ_ J/W^2. This is the so-called adiabatic regime, in which one may rely on the instantaneous scattering approximation. Ref. Bee08 applies to that regime. The purpose of the present paper is to relax the adiabatic approximation, to see how large (v/W)t_ inj should be for the braiding operation to succeed. Since an edge vortex is a collective degree of freedom, the dynamics involves the full many-body state. 
We study it numerically, by means of time-dependent Bogoliubov-de Gennes methods. Our main conclusion is that a factor of two between t_ inj and W/v is sufficient to avoid the excitations of internal degrees of freedom in the junction that would spoil the fermion parity switch. The outline of the paper is as follows: the simulated device and the time-dependent model are introduced in Sec. <ref>. In Sec. <ref>, we present the results of the braiding protocol which recover the main predictions from the adiabatic theory, namely the charge signature at the exit of the device and the fermion parity exchange of the edges with the bulk. Sec. <ref> describes the excitation dynamics of the junction in the alternative regime W> vt_inj where the braiding protocol cannot hold. The conclusion is presented in Sec. <ref>. § MODEL AND DEVICE §.§ Setup We consider the device shown in the top panel of Fig. <ref>. A topological superconductor (TSC) with two co-propagating Majorana edge modes (bottom left panel of Fig. <ref>) is divided in three by two Josephson junctions, each of length W and thickness w. The junctions are separated by a distance L. Two vortices of flux Φ_0=h/2e are created in the bulk by an external magnetic field, one of which is in the region between the two junctions. A time-dependent bias is applied such that the phase in the middle superconductor is φ(t) relative to the others. By increasing the phase φ(t) from 0 to 2π, the effective gap inside the Josephson junctions closes and a pair of edge-vortices (or π-phase boundary for Majorana edge modes) is released into the opposite edges <cit.> which propagate at the Fermi velocity v. The injection of edge-vortices inside each junction takes place over a characteristic time t_inj, during which the co-propagating edge-modes couple through the junction. The injection time is given by t_inj=(2πξ_J/W)(dφ(t)/d t)^-1 where ξ_J=ħ v/Δ_J <cit.> is the coherence length of the junction. Here Δ_J denotes the effective gap in the junction <cit.> (shown in the bottom right panel of Fig. <ref>). The edge-vortices of size v t_inj then propagate along the edges, braid (i.e. exchange parity) with the pair of bulk vortices and fuse at the exit of the superconductor where they produce charged quasiparticles. The fusion of edge-vortices produces a current pulse, with a quantized net charge e(N_vortex2) at the exit with N_vortex the number of vortices in between the junctions. §.§ Hamiltonian The device of Fig. <ref> is simulated using a tight-binding model of a quantum anomalous Hall insulator (QAH) proximitized with an s-wave superconductor. Its Hamiltonian is given by <cit.>: Ĥ(t) = 1/2∑_xΨ̂^†(x)H (k,x,t)Ψ̂(x) where Ψ̂(x) = (ψ̂_↑(x),ψ̂_↓(x),ψ̂^†_↓(x),-ψ̂^†_↑(x))^⊺ is the four component Nambu spinor and H is the Bogoliubov-de-Gennes (BdG) Hamiltonian matrix H(k, x, t) = [ H^e(k,x)-μ Δ_0 (x) e^iϑ(x,t); Δ_0(x) e^-iϑ(x,t) μ-𝒯H^e(k,x)𝒯^-1 ] with μ the chemical potential and 𝒯=iσ_y𝒦 the time-reversal operator (𝒦 denotes complex conjugation). The electronic block is given by: H^e(k,x) = ħ v/a(σ_xsin(k_xa)+σ_ysin(k_ya)) +(m_0(x)+M(k))σ_z where M(k)=2m_1/a^2(2-cos(k_xa)-cos(k_ya)). The simulated system is finite in the x-direction and anti-periodic in the y-direction to ensure that there are no zero-modes in the edges initially. The different Chern numbers in the regions of Fig. <ref> are achieved by different values of m_0 and Δ_0: m_0(x)=-0.5, Δ_0(x) = 0 : x∈QAH m_0(x)=-0.5, Δ_0(x) = 1: x∈TSC m_0(x)=+∞, Δ_0(x) = 0: x∈Ins in units of ħ v/a. 
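For a translationally invariant region, the model above can be probed directly in momentum space. The following sketch is our own illustration (units ħ v/a = 1, lattice constant a = 1, and m_1 set to a placeholder value since it is not quoted here); it builds the 4×4 Bloch BdG Hamiltonian from the electronic block and the s-wave pairing and scans its spectrum, and is not the Tkwant setup used for the actual simulations.

```python
import numpy as np

s0 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def h_electron(kx, ky, m0, m1):
    """Electronic block H^e(k) of the QAH model, with hbar*v/a = a = 1."""
    mk = 2 * m1 * (2 - np.cos(kx) - np.cos(ky))
    return np.sin(kx) * sx + np.sin(ky) * sy + (m0 + mk) * sz

def h_bdg(kx, ky, m0, m1, delta0, mu=0.0, phase=0.0):
    """4x4 Bloch BdG Hamiltonian for a translationally invariant region."""
    he = h_electron(kx, ky, m0, m1)
    hh = -(sy @ h_electron(-kx, -ky, m0, m1).conj() @ sy)   # -T H^e T^{-1} at momentum k
    pair = delta0 * np.exp(1j * phase) * s0
    return np.block([[he - mu * s0, pair],
                     [pair.conj().T, mu * s0 + hh]])

# Minimum quasiparticle energy of the proximitized (TSC) region, m0 = -0.5, Delta0 = 1;
# m1 = 1.0 is an assumed placeholder value.
ks = np.linspace(-np.pi, np.pi, 101)
emin = min(np.min(np.abs(np.linalg.eigvalsh(h_bdg(kx, ky, -0.5, 1.0, 1.0))))
           for kx in ks for ky in ks)
print(f"minimum bulk quasiparticle energy ~ {emin:.3f} (units of hbar*v/a)")
```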
The trivial insulating region (Ins) is realized by truncation of the lattice. Furthermore we fix the width of the junction to w=2a and the length to W=42a. This length ensures that the separation between edges and vortices is much larger than their respective localization lengths. The effective gap Δ_J inside the junctions is estimated numerically from the spectrum of an infinitely long junction (see Fig. <ref>), which yields Δ_J≈ 0.12Δ_0. In the TSC, ϑ(x,t)=η(x)+φ(x,t) is the pair potential phase with η describing the vortices by ∇×∇η=∑_x_vortex 2πδ(x-x_vortex); ∇·∇η=0, and φ(x,t) describing the time dependent bias which is only nonzero in the middle superconductor and given by: φ(t) = 2π( θ(τ-t) t/τ + θ(t-τ) ), t≥ 0 over a characteristic time τ. Here θ(t) denotes the Heaviside step function. For this profile, the estimated injection time is simply t_inj=τ/Δ_JW. §.§ Computation of observables in the evolved many-body state Before the injection, the system is assumed to be in the stationary ground state of Ĥ(0) denoted by |Ω⟩. Here, we consider the evaluation of single-particle operators in the evolved many-body state Û(t)|Ω⟩ with the time-evolution operator Û(t) = Texp(-(i/ħ)∫_0^tĤ(t')dt'), T being the time-ordering operator. Relative to the initial ground state, the net change in the expectation value of a single-particle operator Â is denoted: ⟨Â(t)⟩ - ⟨Â(0)⟩:=⟨Ω|Û^†(t)ÂÛ(t)|Ω⟩ - ⟨Ω|Â|Ω⟩ . The effective description of the superconductor can be reduced to a non-interacting model using the BdG formalism. In App. <ref>, we show how we can transform this many-body problem into single-particle problems which can be solved within the first quantization formalism. Eq. (<ref>) can be written as: ⟨Â(t)⟩ - ⟨Â(0)⟩ = 1/2∑_ E_α<0(⟨α(t)|A|α(t)⟩ - ⟨α|A|α⟩). Here A is the single-particle BdG operator associated with Â, |α⟩:=|α (0)⟩ denotes the α-th eigenstate of H(0) and |α (t)⟩ obeys iħ∂_t|α(t)⟩ = H(t)|α(t)⟩. The state |α(t)⟩ can be calculated numerically using Tkwant <cit.>. This approach has numerical complications as it requires evolving all the states in order to achieve convergence (see App. <ref>). We resolve this issue by writing A in terms of the basis of eigenstates of H(0): ⟨Â(t)⟩ - ⟨Â(0)⟩ =∑_ E_α<0∑_ E_μ>0 E_ν≶0⟨α (t)||μ⟩⟨μ|A|ν⟩⟨ν||α(t)⟩. In contrast with Eq. (<ref>) (see App. <ref>), this form only gives non-zero contributions in a finite range around E=0. This allows us to approximate this expression by truncating the sum and discarding all terms above some energy cut-off, i.e. terms with | E_α,μ,ν|>E_max. § RESULTS In this section we present the main results of our simulation. We show the charge signature of the braiding protocol and calculate the corresponding parity switch. We consider a system where W is smaller, but comparable to the injection time vt_inj≈ 2W. While the theoretical description, relying on the adiabatic limit, no longer holds for this system, we show that the main predictions remain unchanged. §.§ Quantized charge measurement We first consider the charge signature that can be measured at the exit of the device, after the fusion of the edge vortices. For this we evaluate the current density operator ĵ_y(x)=(ev/a)Ψ̂^†(x)ν_0σ_yΨ̂(x) in the y-direction using Eq. (<ref>). Defining the current as: I(t)=a∑_x|y=y_exit⟨ĵ_y(x,t)⟩-⟨ĵ_y(x,0)⟩ the net charge creation is given by the time integral: Q(t)=∫_0^tI(t')dt'. With this, we can calculate the charge pumped during the braiding protocol at the exit of the device (y_exit).
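A compact sketch of how the truncated sum above can be evaluated is given below, assuming the initial eigenbasis and the evolved occupied states are available as dense arrays (in the actual calculation they come from Tkwant and sparse operators); applying the same routine to the current operator at the exit row and accumulating the time integral gives Q(t).

```python
import numpy as np

def delta_expectation(A, V, E, Phi_t, e_max):
    """Truncated expectation-value change: sum over occupied alpha, intermediate
    mu (E_mu > 0) and nu (either sign), all restricted to |E| <= e_max.

    A     : (2n, 2n) BdG matrix of the single-particle observable
    V, E  : eigenvectors (columns) and eigenvalues of H(0)
    Phi_t : (2n, n_occ) evolved states |alpha(t)> for the initially occupied levels
    """
    mu = (E > 0) & (np.abs(E) <= e_max)
    nu = np.abs(E) <= e_max
    ov_mu = V[:, mu].conj().T @ Phi_t          # <mu|alpha(t)>
    ov_nu = V[:, nu].conj().T @ Phi_t          # <nu|alpha(t)>
    A_mn = V[:, mu].conj().T @ A @ V[:, nu]    # <mu|A|nu>
    return np.einsum("ma,mn,na->", ov_mu.conj(), A_mn, ov_nu).real

# Usage sketch (j_y is the BdG current matrix summed over the exit row,
# phi_snapshots[k] holds the evolved occupied states at time times[k]):
# I = [a * delta_expectation(j_y, V, E, phi, e_max) for phi in phi_snapshots]
# Q = np.cumsum(np.asarray(I)[:-1] * np.diff(times))   # running integral of I(t)
```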
The spatial separation L between the two Josephson junctions allows to distinguish between two characteristic charge signatures. When L≫ vt_inj, the injection events at each junction are well separated in space. In this case, the two pairs of edge-vortices produce separate signals of ±e/2 charge at the exit. The charge contribution of the second pair of edge vortices experiences a sign flip in the presence of bulk vortices, as a consequence of braiding <cit.>. The theoretical predictions from Refs. Bee08,Ada09 are compared with numerical results in the left panel of Fig. <ref>. On the other hand when L≲ vt_inj, the injection events at both junctions are close, so that the overlapping electrical signals add up, producing a unit charge signature (right panel of Fig. <ref>). The transferred charge is an indirect probe of the braiding event as it is a result of the fusion between the edge vortices. It is therefore only quantized if the path lengths of the two vortices between injection and fusion are the same <cit.>. In contrast, the parity exchange is topologically protected, it does not depend on microscopic details. We will check this numerically. §.§ Parity switch of edge-vortices The phase rotation φ(t):0→2π in the superconductor changes the parity locally carried by the two bulk vortices. Since parity must be globally conserved, then necessarily there must be an odd number of excitations elsewhere in the system, – namely carried by the edges. <cit.> This change of parity is a direct consequence of braiding between the bulk and edge vortices. To characterize this process we first identify the parity subsectors that correspond to the states in the bulk vortices and the edges. The full parity operator can be written –up to the sign of the initial ground state parity– in terms of the Bogoliubov operators as: P̂ = ∏_α = 1^N(1-2d^†_α d_α) . We provide a further explanation for this form in App. <ref>. In our device, P̂ can be split in a product of two terms, the first one corresponding to the vortex excitation (i.e. the fermionic superposition of the two vortex Majorana zero-modes) and the second one containing all other excitations: P̂ = (1-2d^†_α_v d_α_v) ·∏_α≠α_v(1-2d^†_α d_α) :=P̂_vortices·P̂' where α_v is the index of the fermionic state bound to the vortices. This can be done if the vortex state is well isolated from the rest (i.e. there is no hybridization between vortex and edge states). P̂' can be evolved in the Heisenberg picture and expressed in terms of the Bogoliubov operators of the initial Hamiltonian {d_β}_β = 1^2n <cit.>. As we show in App. <ref>, the time evolution of each d_α can be expanded as Û^† d_αÛ = ∑_β = 1^2nχ_αβd_β with χ(t)_αβ =⟨α(0)||β(t)⟩ The time evolution of P̂' can then be expressed as a sum of terms of different orders in d operators Û^†P̂'Û =(1-2∑_ E_α>0∑_μνχ^*_αμχ_ανd^†_μ d_ν +4∑_ E_β> E_α>0∑_μνστχ^*_αμχ_ανχ^*_βσχ_βτd^†_μ d_ν d^†_σ d_τ+⋯). Its expectation value in the ground state |Ω⟩ can then be calculated making use of Wick's theorem up to all orders. The final equation can be found in App. <ref> (Eq. (<ref>)). In our numerical calculation we neglect correlators of order higher than four, and only include states within an energy window E_max. This energy window is chosen to match the maximum excitation energy in order for the parity calculation to converge (see App. <ref>). Since edge and junction states are hybridized, P̂' cannot be decomposed similarly in edge and junction sectors. 
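To illustrate the bookkeeping behind this expansion, the sketch below computes χ and the leading (two-operator) contribution to ⟨P̂'⟩; the quartic Wick contractions kept in the full calculation are omitted, dense arrays are assumed, and the function names are ours.

```python
import numpy as np

def parity_prime_leading_order(V, E, Phi_t, e_max, vortex_levels=()):
    """Leading-order estimate of <P'> from chi_{alpha,beta} = <alpha(0)|beta(t)>.

    V, E          : eigenbasis and spectrum of H(0)
    Phi_t         : all evolved eigenstates |beta(t)>, columns ordered like V
    e_max         : energy window used for the truncation
    vortex_levels : indices of the levels split off into P_vortices
    """
    chi = V.conj().T @ Phi_t                    # chi[alpha, beta] = <alpha(0)|beta(t)>
    pos = (E > 0) & (np.abs(E) <= e_max)        # excitations alpha with E_alpha > 0
    pos[list(vortex_levels)] = False
    occ = E < 0                                 # levels mu occupied in |Omega>
    # <Omega| d^dag_mu d_nu |Omega> = delta_{mu nu} for E_mu < 0, so the quadratic
    # term reduces to the mean number of excitations created by the phase pump:
    n_exc = np.sum(np.abs(chi[np.ix_(pos, occ)]) ** 2)
    return 1.0 - 2.0 * n_exc
```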
However, after the bias pump, the expectation value ⟨P̂'⟩ can be identified with the parity carried by the edges ⟨P̂_edge⟩ as long as the filling of junction states – which only exist for energies E ≥Δ_J – is negligible. The different intensities of red in Fig. <ref> show the value obtained for P̂' as we increase E_max. We see that convergence is achieved before we need to include any states with energies around Δ_J. This identification of ⟨P̂'⟩≈⟨P̂_edge⟩ is further supported in Sec. <ref> and App. <ref>. Fig. <ref> shows that the parity expectation of the edges is unchanged when there are no vortices, but it switches in the presence of bulk vortices. This demonstrates that, for this set of parameters, the braiding of edge-vortices holds dynamically, and that the internal degrees of freedom in the junction do not spoil the exchange of parity. This implies that neither the adiabatic nor the point junction limits need to be satisfied for braiding to be realised. §.§ Topological protection of the edge vortices The phase domain wall created during the quench corresponds to a pair of edge vortices that propagate along the edges. As one of them surrounds the bulk vortex it picks up a phase that realises the parity switch <cit.>. Since a π domain wall cannot be unwound, this mechanism is protected from all local sources of disorder. In this part, we verify that the dynamically injected vortices are topologically protected by introducing irregularities in the spatial profile of Δ_0(x). We show how an additional path-length δ x in the upper edge (see the top panel of Fig. <ref>) influences the charge signature, fully spoiling the quantization discussed in Sec.<ref> in agreement with the predictions in Ref. Ada09. In contrast, our calculation of parity (see the bottom panel of Fig. <ref>) remains unaffected by the local changes in the system, demonstrating the topological protection of the edge-vortex excitations. This confirms that even for a finite junction, edge-vortices can be used to encode protected quantum information. § LONG JUNCTION DYNAMICS Our results so far have considered the particular case vt_inj∼2W where the injection process is not spoiled by the excitation of junction modes. In this section, we consider the more general case where the ratio vt_inj/W is varied. In particular, we investigate how trapped excitations can influence the creation of edge-vortices for sufficiently long junctions. §.§ Quasi-particle excitation spectrum To understand the behaviour in the junction we first study the quasi-particle excitation spectrum E(φ). Within the superconducting gap, this spectrum consists of states localized in the bulk vortices, junction and edges. The injection process is characterized by the gap closing at φ=π with the dispersion E_J=±Δ_Jcos(φ/2) seen before in Fig. <ref>. In our case, the junction states couple with the edge states, forming hybridized bands seen in Fig. <ref> (gray lines). We calculate the occupation number of these energy levels: N̂(φ)=∑_ E_μ(φ)>0 d^†_μ(φ)d_μ(φ) where each term d^†_μ(φ)d_μ(φ) counts the quasi-particle occupation within a single energy level μ. The expectation value in the evolved state Û(t)|Ω⟩ is then given by: ⟨N̂(φ,t)⟩ = ∑_ E_μ>0∑_ E_α<0 E_ν≶0⟨α (t)||μ^φ⟩⟨μ^φ|N|ν^φ⟩⟨ν^φ||α(t)⟩ where |μ^φ⟩ denotes an eigenstate of H(φ) and N = 1. The occupation of each level throughout the quench is shown by thick lines in Fig. <ref>, where the color is used to distinguish between edge (red) and junction (blue) states <cit.>.
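Along the same lines, the level occupations and their assignment to junction or edge states can be sketched as follows (again an illustration with dense arrays; the junction weight of a level is estimated from its support on the junction sites, as in the color coding of the figure).

```python
import numpy as np

def level_occupations(V_phi, E_phi, Phi_t):
    """Occupation <d^dag_mu(phi) d_mu(phi)> of each positive-energy level of H(phi),
    evaluated in the state built from the evolved, initially occupied |alpha(t)>."""
    pos = E_phi > 0
    ov = V_phi[:, pos].conj().T @ Phi_t            # <mu^phi|alpha(t)>
    return np.sum(np.abs(ov) ** 2, axis=1)         # one occupation number per level mu

def junction_weight(V_phi, E_phi, junction_orbitals):
    """Support of each positive-energy level on the junction sites, used to label
    levels as 'junction' or 'edge'; junction_orbitals is a boolean orbital mask."""
    pos = E_phi > 0
    return np.sum(np.abs(V_phi[junction_orbitals][:, pos]) ** 2, axis=0)
```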
The slow injection case (left panel) treated in Sec. <ref> shows that the junction states are only occupied near values of φ=π and fully emptied in the edges at the end of the injection. In the right panel, the injection is short enough to create excitations in the levels E>Δ_J. Note that, in this case, the approximation ⟨P̂'⟩ made in Sec. <ref> fails because of nonzero occupation in the junction. This means that the parity switch is no longer fully carried by the edge modes, which we attribute to trapped excitations in the Josephson junction. §.§ Trapped excitations In the presence of a finite Josephson junction the coupling between the two edges is mediated by the chiral states in the Josephson junction. This chiral propagation is only supported for a duration t_inj around φ=π, when the junction is effectively gapless. We have shown that when vt_inj∼ 2W the travel time W/v is short enough to allow the excitations to escape the junctions before the gap re-opens. Here we show that in the alternative regime vt_inj<W, the excitation is partially trapped in the gapped bound state of the junction. In order to describe the quasi-particles inside the junction, we define an excitation density via a spatial projection of the quasi-particle number N(x)=P(x) N P(x). This is done similarly to our description of charge (i.e. ⟨x'|N(x)|x”⟩= σ_0τ_0δ_x',x”δ_x,x') arriving to the expression: ⟨ρ̂_φ(x,t)⟩ = ∑_ E_α<0∑_ E_μ>0 E_ν≶0⟨α (t)||μ^φ⟩⟨μ^φ|N(x)|ν^φ⟩⟨ν^φ||α(t)⟩. Note that when integrated over the whole system, the Eq. (<ref>) is recovered. Integrating this density locally gives the number of quasi-particle inside junctions ⟨N̂_junc(t)⟩ and edges ⟨N̂_edges(t)⟩. In Fig. <ref> we show how the quasi-particle changes with time for two different systems. When the injection is slow (left panel) the quasi-particle number in the junction is fully transferred to the edges as anticipated. In the alternate case when the injection is very fast (right), the particle number slowly decays towards a constant residual value in the junction corresponding to quasi-particles occupying the bound state in the Josephson junction. As this trapped excitation can carry a part of the parity exchange it can spoil the injection protocol as well as the characteristic charge signature (shown in App. <ref>). For this reason it is important to find a bound when the trapped excitations in the junction can be neglected. §.§ Particle number in the junction In the adiabatic theory Ref. Bee08, the total particle number produced in the edges at final time is equal to 1.037. The non-quantized number is due to particle-hole pairs production during the injection process. At slow injection, we find a comparable value ⟨N̂_junc⟩+⟨N̂_edge⟩=1.049 as indicated in the left of Fig. <ref>, close to the adiabatic theory. For the fast injection in Fig. <ref>, this is ⟨N̂_junc⟩+⟨N̂_edge⟩=2.033 instead. We therefore turn to a quantitative description of the residual particle number in the junction ⟨N̂_junc⟩ for different values of vt_inj/W. We achieve this by simulating different values of τ in Fig. <ref>. In the left panel, the particle number is shown as a function of time for different values of vt_inj/W, where we distinguish between the two regimes vt_inj>W and vt_inj<W by two colors. In the right panel, we show that the residual excitation number in the junction decreases fast as the injection time becomes long. We match this with an exponential shown in Fig. <ref>. After vt_inj>2W, this value has nearly decayed to zero. 
In an experimental setting, this provides us with an upper bound on the flux bias |dΦ/dt|<Φ_0v/2W^2Δ_J when the parity exchange is fully carried by the edges corresponding, ensuring a successful injection of edge vortices. § CONCLUSION In this work we have shown how a braiding protocol introduced in <cit.> can be dynamically simulated as tight-binding many-body system. With this setup we were able to fully probe the braiding process away from the limitations of the effective model. This allowed us to investigate the relevant scales in the system as well as compare the current signature with analytical predictions. We were able to study dynamically the local parity switch present in the edge states and show the topological protection of this exchange. We have shown that the injection and braiding of edge-vortices is uncompromised by a finite junction when vt_inj>2 W, so that all the parity exchange is contained in the edge states. Additionally we studied this system away from this limit and investigated the excitations in the junction. Here, we showed that the lowest bound state of the junction remains excited long after the quench for sufficiently fast injections. While the parity switch ⟨P̂'⟩ is still protected in this limit we can no longer conclude that it is fully carried in the edge states, therefore providing a limitation for the use of such device as a topological qubit. For this reason we show the interplay of scales vt_inj, W to find a parameter regime, where the injection of edge vortices is well defined. We see that the adiabatic condition vt_inj≫ W discussed in previous works can be relaxed into vt_inj≳ W, while keeping the braiding predictions intact. This is helpful for future experimental work as it allows large deviations from the point junction limit. We thank A. R. Akhmerov for helpful discussions. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program. 99 Rea01 N. Read and D. Green, Paired states of fermions in two dimensions with breaking of parity and time-reversal symmetries and the fractional quantum Hall effect, Phys. Rev. B 61, 10267 (2000). Iva02 D. A. Ivanov, Non-Abelian statistics of half-quantum vortices in p-wave superconductors, Phys. Rev. Lett. 86, 268 (2001). Nay03 C. Nayak, S. H. Simon, A. Stern, M. Freedman, and S. Das Sarma, Non-Abelian anyons and topological quantum computation, Rev. Mod. Phys. 80, 1083 (2008). Das04 S. Das Sarma, M. Freedman, and C. Nayak, Majorana zero modes and topological quantum computation, npj Quantum Inf. 1, 15001 (2015). Ma05 X. Ma, C. J. O. Reichhardt, and C. Reichhardt, Braiding Majorana fermions and creating quantum logic gates with vortices on a periodic pinning structure, Phys. Rev. B 101, 024514 (2020). Ma06 Hai-Yang Ma, Dandan Guan, Shiyong Wang, Yaoyi Li, Canhua Liu, Hao Zheng, and Jin-Feng Jia, Braiding Majorana zero mode in an electrically controllable way, J. Phys. D 54, 424003 (2021). Vla07 V. K. Vlasko-Vlasov, A. Rydh, R. Divan, D. Rosenmann, A. Glatz, and W.-K. Kwok, Magnetic circuit for Abrikosov vortices: Vortex motion in a periodic labyrinth of magnetic T and I-shaped elements under a superconducting film, J. Magn. Magn. Mater. 557, 169476 (2022). Bee08 C. W. J. Beenakker, P. Baireuther, Y. Herasymenko, I. Adagideli, Lin Wang, and A. R. Akhmerov, Deterministic creation and braiding of chiral edge vortices, Phys. Rev. Lett. 122, 146803 (2019). Ada09 I. Adagideli, F. Hassler, A. Grabsch, M. Pacholski, and C. W. J. 
Beenakker, Time-resolved electrical detection of chiral edge vortex braiding, SciPost Phys. 8, 013 (2020). Has10 F. Hassler, A. Grabsch, M. J. Pacholski, D. O. Oriekhov, O. Ovdat, I. Adagideli, and C. W. J. Beenakker, Half-integer charge injection by a Josephson junction without excess noise, Phys. Rev. B 102, 045431 (2020). Nayak07 P. Fendley, M. P. A. Fisher, and C. Nayak, Edge states and tunneling of non-Abelian quasiparticles in the ν=5/2 quantum Hall state and p+ip superconductors, Phys. Rev. B 75, 045431 (2007). Fu11 L. Fu and C. L. Kane, Superconducting Proximity Effect and Majorana Fermions at the Surface of a Topological Insulator, Phys. Rev. Lett. 100, 096407 (2008). Qi12 X.-L. Qi, T. L. Hughes, and S.-C. Zhang, Chiral topological superconductor from the quantum Hall state, Phys. Rev. B 82, 184516 (2010). Gro13 C. W. Groth, M. Wimmer, A. R. Akhmerov, and X. Waintal, Kwant: a software package for quantum transport, New J. Phys. 16, 063065 (2014). Klo14 T. Kloss, J. Weston, B. Gaury, B. Rossignol, C. Groth, and X. Waintal, Tkwant: a software package for time-dependent quantum transport, New J. Phys. 23, 023025 (2021). Wes14 J. Weston and X. Waintal, Towards realistic time-resolved simulations of quantum devices, J. Comput. Electron. 15, 1148 (2016). Bau14 T. Bautze, C. Süssmeier, S. Takada, C. Groth, T. Meunier, M. Yamamoto, S. Tarucha, X. Waintal, and C. Bäuerle, Theoretical, numerical, and experimental study of a flying qubit electronic interferometer, Phys. Rev. B 89, 125432 (2014). Ros18 B. Rossignol, T. Kloss, and X. Waintal, Toward flying qubit spectroscopy, arXiv:1802.05924. note1 There are only n electronic modes in our system. Here we are making use of the notation d_β+n = d^†_β. note3 The color at a value φ and band μ is proportional to the value ∑_x∈junctions|⟨μ^φ|x⟩|^2. note2 The index μ+n denotes the particle-hole partner of μ, i.e. |μ+n⟩=|𝒞μ⟩. § TIME-EVOLUTION OF SINGLE-BODY OPERATORS IN BDG §.§ Tight-binding description of BdG The Hamiltonian of Eq. (<ref>) can be explicitly written in tight-binding form as Ĥ= 1/2ψ^† Hψ where H=[ H^e Δ; Δ^† -σ_yH^e*σ_y ] ψ := (ψ̂_1↑ ψ̂_1↓⋯ ψ̂_n/2↑ ψ̂_n/2↓ ψ̂^†_1↓ -ψ̂^†_1↑⋯ ψ̂^†_n/2↓ -ψ̂^†_n/2↑)^⊺ where σ_y only acts on spin (σ_y:=1_x⊗σ_y). Here H^e is the matrix form of the tight-binding Hamiltonian of Eq. (<ref>) and Δ = diag(Δ(x_1),⋯,Δ(x_n)). The matrix H is Hermitian, so there exists a unitary matrix V that diagonalises it, H = VEV^† (with E = diag(E_1,⋯,E_2n)). The matrix V describes the Bogoliubov transformation that defines the Bogoliubov operators {d_α}_α = 1^2n as ψ = V 𝐝 with 𝐝 = (d_1,⋯,d_2n)^⊺. By construction, H has particle-hole symmetry, with 𝒞 = iσ_yν_y𝒦 (where 𝒦 denotes complex conjugation) the charge-conjugation operator. This implies that the {d_α}_α = 1^2n operators can be labeled in pairs (α,α+n) that satisfy d_α^† = d_α+n, so that the first n operators obey the ordinary fermionic algebra. If we insert the Bogoliubov transformation into Eq. (<ref>), it becomes apparent that {d_α}_α = 1^n represent the elementary excitations of our Hamiltonian: Ĥ= 1/2𝐝^† E𝐝 = ∑_α = 1^n E_α d^†_α d_α-1/2∑_α = 1^n E_α The ground state of this Hamiltonian, denoted |Ω⟩, is annihilated by all d_α with E_α >0. §.§ From second to first quantization In a tight-binding system, any single-body operator  can be written as  = ∑_α,β =1^nA^e_αβψ̂^†_αψ̂_β, A^e_αβ = ⟨0|ψ̂_αÂψ̂^†_β|0⟩, where |0⟩ denotes the vacuum of electrons, which can be rewritten into the BdG form as  = 1/2ψ^† A ψ+1/2 Tr A^e with A=[ A^e 0; 0 -σ_yA^e*σ_y ] .
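As a concrete illustration of this construction, a minimal Python sketch is given below. It assumes the electronic tight-binding matrix H^e is ordered as sites ⊗ spin and that the pairing is supplied as an n×n matrix; the function names are ours and purely illustrative.

import numpy as np

def build_bdg_hamiltonian(H_e, Delta):
    """Assemble H = [[H_e, Delta], [Delta^dag, -sigma_y H_e^* sigma_y]]."""
    n = H_e.shape[0]                              # n = number of sites x spin
    sy = np.array([[0.0, -1.0j], [1.0j, 0.0]])
    Sy = np.kron(np.eye(n // 2), sy)              # sigma_y acting on spin only
    return np.block([[H_e, Delta],
                     [Delta.conj().T, -Sy @ H_e.conj() @ Sy]])

def bogoliubov_modes(H_bdg):
    """Diagonalise H = V E V^dag; the columns of V define the operators d_alpha,
    and |Omega> is annihilated by every d_alpha with E_alpha > 0."""
    E, V = np.linalg.eigh(H_bdg)
    return E, V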
We can evolve this operator in the Heisenberg picture to obtain Û^†ÂÛ = 1/2ψ(t)^† A ψ(t)+1/2 Tr A^e where we defined ψ̂_α(t) = Û^†ψ̂_αÛ. Since we intend to evaluate this operator in the ground state |Ω⟩ of the initial Hamiltonian, we need to write it in terms of the Bogoliubov operators {d_β}_β = 1^2n of Ĥ(0). It is possible to prove (see App. <ref>) that the {ψ̂_α(t)}_α = 1^2n operators can be written as linear combinations of these Bogoliubov operators as ψ̂_α(t) = ∑_β = 1^2nΦ_αβ(t)d_β i.e. ψ(t) = Φ(t) 𝐝 where Φ(0) is the matrix that diagonalises the BdG Hamiltonian at t = 0 (i.e. Φ(0) = V(0)) and Φ(t) is the solution of iħ∂_tΦ(t) = H(t)Φ(t). Notice that this means that the columns of Φ are none other than the eigenstates of H(0) evolved according to the Schrödinger equation for H(t). With this, we can express Û^†ÂÛ = 1/2𝐝^†Φ^† A Φ𝐝+1/2 Tr A^e Finally, using the fact that by definition ⟨Ω|d_α^† d_β|Ω⟩ = δ_αβ if E_α < 0 and ⟨Ω|d_α^† d_β|Ω⟩ = 0 otherwise, we obtain ⟨Â(t)⟩ - ⟨Â(0)⟩ = 1/2∑_α | E_α<0(Φ^†(t) AΦ(t) - Φ^†(0) AΦ(0))_αα, which in Dirac notation becomes ⟨Â(t)⟩ - ⟨Â(0)⟩ = 1/2∑_ E_α<0(⟨α(t)|A|α(t)⟩ - ⟨α|A|α⟩). With this, we have mapped our original problem of evolving many-body states in a Hilbert space of dimension 2^n into n first-quantization problems in a Hilbert space of dimension 2n. §.§ Convergence The fact that Eq. (<ref>) involves all n negative-energy eigenstates of H poses two problems. First, we only aim at describing the system accurately at low energies. Any realistic system will not share the specific high-energy behaviour of our tight-binding description far from the Fermi energy. Secondly, we should be able to understand our system by considering only states close to the Fermi energy, so evolving all of them is a waste of computational resources. Unfortunately, we have no reason to believe that the contributions of the two terms in Eq. (<ref>) will cancel out as we go away from the Fermi energy. This was studied numerically, and it was verified that the value of ⟨_y(x,t)⟩-⟨_y(x,0)⟩ as given by Eq. (<ref>) does not converge – instead it oscillates – as we increase the number of states evolved (see Fig. <ref>). This section is devoted to rewriting this equation in a form that solves this issue. To do so, let us explicitly make use of the basis of eigenstates of H(0) and introduce the completeness relation around A in the first term of Eq. (<ref>) to obtain 1/2∑_ E_α<0 ⟨α(t)|A|α(t)⟩ = 1/2∑_ E_α<0∑_μ,ν = 1^2n ⟨α(t)||μ⟩⟨μ|A|ν⟩⟨ν||α(t)⟩ = 1/2∑_ E_α<0∑_ E_μ, E_ν<0 ( ⟨α(t)||μ⟩⟨μ|A|ν⟩⟨ν||α(t)⟩ + ⟨α(t)||𝒞μ⟩⟨𝒞μ|A|ν⟩⟨ν||α(t)⟩ + ⟨α(t)||μ⟩⟨μ|A|𝒞ν⟩⟨𝒞ν||α(t)⟩ + ⟨α(t)||𝒞μ⟩⟨𝒞μ|A|𝒞ν⟩⟨𝒞ν||α(t)⟩). Since  is a single-particle operator, it satisfies 𝒞A𝒞 = -A. Given that {|α(t)⟩: E_α≶0} is a complete basis, we can write the first term of Eq. (<ref>) as 1/2∑_ E_α<0∑_ E_μ, E_ν<0 ⟨α(t)||μ⟩⟨μ|A|ν⟩⟨ν||α(t)⟩= 1/2∑_ E_μ<0⟨μ|A|μ⟩ -1/2∑_ E_α>0∑_ E_μ, E_ν<0 ⟨𝒞α(t)||μ⟩⟨μ|A|ν⟩⟨ν||𝒞α(t)⟩ If we plug this into Eq. (<ref>) and then into Eq. (<ref>), a few simplifications happen. The first term of this equation cancels with the second term of Eq. (<ref>), and the second term of Eq. (<ref>) is real and equal to the last term of Eq. (<ref>) (this follows from the properties of 𝒞). In addition, the second and third terms of Eq. (<ref>) are each other's complex conjugate. Taking all of this into account we can write down Eq.
(<ref>) as ⟨Â(t)⟩ - ⟨Â(0)⟩ = ∑_ E_α<0∑_ E_μ, E_ν>0( ⟨α(t)||μ⟩⟨μ|A|ν⟩⟨ν||α(t)⟩ + ⟨α(t)||𝒞μ⟩⟨𝒞μ|A|ν⟩⟨ν||α(t)⟩) which we write more simply in the main text as ⟨Â(t)⟩ - ⟨Â(0)⟩ =∑_ E_α<0∑_ E_μ>0 E_ν≶0⟨α (t)||μ⟩⟨μ|A|ν⟩⟨ν||α(t)⟩ This formula involves overlaps between positive-energy and evolved negative-energy states, which ensures that non-zero contributions exist only around E=0. In Fig. <ref> we show how the contribution of the terms in the sum vanishes as we go further away from the Fermi energy, which lets us avoid having to evolve all negative-energy states. §.§ Proof of time evolution method In this section we prove the following statement: Let ψ be the Nambu spinor of fermion creation and annihilation operators as defined in Eq. (<ref>) satisfying {ψ̂_α,ψ̂^†_β} = δ_α,β and {ψ̂_α,ψ̂_β} = δ_α,𝒞β where 𝒞α is the index of (ψ̂_α)^† in ψ (i.e. ψ̂_𝒞α = ψ̂^†_α). Let Ĥ(t) = 1/2ψ^† H(t)ψ be the time-dependent Hamiltonian describing a tight-binding superconducting system of fermions as defined in Eq. (<ref>). Let Û(t) be its corresponding evolution operator. Let 𝒞 be the antiunitary charge conjugation operator satisfying 𝒞^2=1 and {𝒞,H}=0. Let V(t) be a matrix that diagonalises H(t) and let 𝐝 be the spinor of Bogoliubov operators as defined in Eq. (<ref>) diagonalising Ĥ(0). Then, the time evolution of ψ can be written as ψ(t):=Û(t)^†ψÛ(t) = Φ(t) 𝐝 where Φ obeys iħ∂_tΦ(t) = H(t)Φ(t), Φ(0) = V(0). According to the Heisenberg-picture evolution equation we have i∂_t ψ̂_α(t) = [ ψ̂_α(t),Û(t)^†Ĥ(t) Û(t) ]. Since Ĥ is quadratic in ψ, we know that ψ̂_α(t) can be expanded in terms of the initial ψ's as ψ̂_α(t) = ∑_βζ_αβ(t) ψ̂_β, or in matrix notation ψ(t)=ζψ. Notice that the unitarity of Û imposes that the operators in ψ(t) satisfy the same commutation algebra as the initial ones. In turn, this imposes unitarity on ζ. We can use Eq. (<ref>) to write the commutator in Eq. (<ref>) as [ ψ̂_κ(t),Û(t)^†Ĥ(t) Û(t) ] = 1/2∑_αβμνλH_αβζ^*_αμζ_βνζ_κλ [ψ̂_λ,ψ̂^†_μψ̂_ν] It is easy to check that [ψ̂_λ,ψ̂^†_μψ̂_ν] = ψ̂_νδ_λ,μ-ψ̂^†_μδ_λ,𝒞ν, so we get [ψ̂_κ(t),Û(t)^†Ĥ(t) Û(t) ] = 1/2∑_αβμνH_αβζ^*_αμζ_βνζ_κμψ̂_ν - 1/2∑_αβμνH_αβζ^*_αμζ_βνζ_κ𝒞νψ̂^†_μ. Using ψ̂^†_μ = ψ̂_𝒞μ and relabeling in the last term we can rewrite [ ψ̂_κ(t),Û(t)^†Ĥ(t) Û(t) ] = 1/2∑_αβμνH_αβζ^*_αμζ_βνζ_κμψ̂_ν- 1/2∑_αβμνH_αβζ^*_αμζ_βνζ_κ,𝒞νψ̂_𝒞μ= 1/2∑_αβμνH_αβζ^*_αμζ_βνζ_κμψ̂_ν- 1/2∑_αβμνH_αβζ^*_α,𝒞νζ_β,𝒞μζ_κμψ̂_ν. Comparing with the left-hand side of Eq. (<ref>) we can deduce that i∂_tζ_κν = 1/2∑_αβμH_αβζ^*_αμζ_βνζ_κμ- 1/2∑_αβμH_αβζ^*_α,𝒞νζ_β,𝒞μζ_κμ From ψ̂^†_α(t)=ψ_𝒞α(t), we have ζ_αβ = ζ^*_𝒞α,𝒞β so the previous equation becomes i∂_tζ_κν = 1/2∑_αβμH_αβζ^*_αμζ_βνζ_κμ- 1/2∑_αβμH_αβζ_𝒞α,νζ^*_𝒞β,μζ_κμ The particle-hole symmetry of H (𝒞H𝒞 = -H) can be expressed element-wise as H_𝒞α,𝒞β=-H_β,α. After some relabeling on the last term, this lets us rewrite the previous equation as i∂_tζ_κν = ∑_αβμH_αβζ^*_αμζ_βνζ_κμ The unitarity of ζ implies ∑_μζ^*_αμζ_κμ=δ_ακ so the previous expression becomes i∂_tζ_αβ = ∑_μH_αμζ_μβ or in matrix notation i∂_tζ = Hζ, ζ(0) = 1. Now notice that we can compose Eq. (<ref>) with ψ = V(0)𝐝 and define Φ(t) = ζ(t) V(0) that satisfies Eq. (<ref>). Since V(0) is time independent, Eq. (<ref>) follows immediately from Eq. (<ref>). § PARITY §.§ Time evolution of the parity operator The parity operator is defined as: P̂ = (-1)^∑_α = 1^nc^†_α c_α = ∏_α=1^n(1-2c^†_α c_α). Since it commutes with Ĥ, the ground state of Ĥ is an eigenstate of parity.
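The evolution scheme proved in the previous appendix, which also underlies the parity calculation that follows, can be summarized in a short numerical sketch: the columns of Φ(t) are propagated with the instantaneous single-particle Hamiltonian, and the convergent double-sum formula is then contracted in the t = 0 eigenbasis. A simple fixed-step exponential integrator is used purely for illustration (ħ = 1); all array and function names are placeholders.

import numpy as np
from scipy.linalg import expm

def evolve_states(psi0, H_of_t, times):
    """Propagate the columns of psi0 (e.g. the negative-energy columns of V(0))
    according to i dPhi/dt = H(t) Phi with a midpoint exponential step."""
    psi = psi0.astype(complex)
    for t0, t1 in zip(times[:-1], times[1:]):
        dt = t1 - t0
        psi = expm(-1j * H_of_t(0.5 * (t0 + t1)) * dt) @ psi
    return psi

def expectation_change(alpha_t, V0, E0, A):
    """<A(t)> - <A(0)> = sum_{E_alpha<0} sum_{E_mu>0, E_nu} <alpha(t)|mu><mu|A|nu><nu|alpha(t)>,
    the rewritten form whose contributions vanish away from the Fermi energy."""
    overlaps = V0.conj().T @ alpha_t        # <mu|alpha(t)>
    A_eig = V0.conj().T @ A @ V0            # <mu|A|nu> in the initial eigenbasis
    pos = E0 > 0
    return np.real(np.einsum('ma,mn,na->', overlaps[pos].conj(),
                             A_eig[pos, :], overlaps))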
This, together with the fact that the BdG operators switch the parity of a state, implies that we can also write down our parity operator in terms of them: P̂ = p_Ω∏_α = 1^n(1-2d^†_α d_α) where p_Ω=±1 stands for the parity of the ground state. In general, we can express the parity of a set of quasi-particle states S as P̂_S = ∏_α∈ S(1-2d^†_α d_α) The time evolution of this operator is given by substituting each d_α for d_α(t) = Û^† d_αÛ. From the results of App. <ref>, it is straightforward to obtain the expression of d_α(t) in terms of {d_α}_α = 1^2n: Û^†𝐝Û = Û^† V(0)^†ψÛ = V(0)^†ψ(t) = V(0)^†Φ(t) 𝐝 Thus, if we define χ(t) = V(0)^†ψ(t) we have Û^† d_αÛ = ∑_βχ(t)_αβd_β χ(t)_αβ =⟨α||β(t)⟩ We can expand the product in Eq. (<ref>) and use Wick's theorem to obtain an expression for the time evolution of ⟨P̂_S⟩ ⟨P̂_S(t)⟩ =∑_m = 1^n_S (-2)^m∑_0< α_1 <…< α_m∑_c∈ C_m(-1)^s(c)∏_k = 1^mΘ^X_k(c)Y_k(c)_α_i_k(c)α_j_k(c). This formula contains several elements. First, we have a sum over all orders 0<m<n_S (the term corresponding to m=0 is equal to 1). For each order m we sum over all unordered choices of m states among n_S. For every such choice, we sum over all possible Wick contractions of that order (C_m denotes the set of all Wick contractions of order m). For some order m, each contraction (c denotes a specific contraction) in this sum results in a specific product of m numbers of the form Θ^XY_αβ <cit.> defined as Θ^00_αβ = ∑_μ=1^nχ^*_α,μ+nχ_β,μ+n Θ^01_αβ = ∑_μ=1^nχ^*_α,μ+nχ^*_β,μ= ∑_μ=1^nχ^*_α,μ+nχ_β+n,μ+n Θ^10_αβ = ∑_μ=1^nχ_α,μχ_β,μ+n=Θ^01*_βα Θ^11_αβ = ∑_μ=1^nχ_α,μχ^*_β,μ = δ_αβ-Θ^00*_αβ Each contraction c of order m corresponds to a permutation of the numbers {1,2,⋯,2m} under the following restriction: when the elements of the permutation are split in pairs {(a_k(c),b_k(c))}_k = 1^m they must satisfy a_k(c)< b_k(c)∈{1,…,2m} and a_1(c)<a_2(c)<⋯<a_m(c). Each pair yields i_k(c) = ⌊ (a_k(c)+1)/2⌋, j_k(c) = ⌊ (b_k(c)+1)/2⌋, X_k(c) = (a_k(c)+1) mod 2 and Y_k(c) = b_k(c) mod 2. The overall sign s(c) is the sign of the permutation. It is possible to write a script that procedurally generates all valid permutations and calculates the indices X_k(c), Y_k(c), α_i_k(c) and α_j_k(c) corresponding to every contraction c. §.§ Convergence of parity The amount of terms in equation Eq. (<ref>) is 1+∑_m=1^n_Sn_S m(2m-1)!!. This number is out of reach in practice, so we are forced to truncate the sums. It was checked that restricting ourselves to order m_max=4 is sufficient to get an accurate result. In addition, the operator ⟨ P'⟩ defined in <ref> in principle contains n_S=n-1 Bogoliubov operators, but in practice we must truncate the product to a maximum number of states n_max, or equivalently, a cut-off energy E_max. In Sec. <ref>, we have argued that it is necessary to keep E_max<Δ_J so that ⟨P̂'⟩ represents the parity of the edges. This is true for the case where vt_inj/W=2.3 studied in Sec. <ref>. We show this explicitly in Fig. <ref>, where convergence is reached approximately at 0.85Δ_J, ensuring that no junction states participate in the calculation of the parity. We also show a few other cases with smaller values of vt_inj/W. For these values, convergence of parity requires including up to 35 states with energies above Δ_J. In this case the calculation includes the hybridized edge and bound states of the junction, which does not allow us to isolate the edge parity sector from the junction. 
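A script of the kind mentioned above, which procedurally generates all contractions of a given order, can be sketched as follows (pure Python). The mapping of each pair (a_k, b_k) to the indices i_k, j_k, X_k, Y_k follows the floor and parity rules stated above and is indicated in the comments.

from itertools import combinations

def pairings(elements):
    """All perfect pairings (Wick contractions) of a list of 2m indices,
    generated in the canonical order a_1 < a_2 < ... < a_m with a_k < b_k."""
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for i, partner in enumerate(rest):
        for tail in pairings(rest[:i] + rest[i + 1:]):
            yield [(first, partner)] + tail

def permutation_sign(pairs):
    """Sign s(c) of the permutation (a_1, b_1, a_2, b_2, ...) of {1, ..., 2m}."""
    flat = [x for pair in pairs for x in pair]
    sign = 1
    for i, j in combinations(range(len(flat)), 2):
        if flat[i] > flat[j]:
            sign = -sign
    return sign

# For each pair (a_k, b_k):  i_k = (a_k + 1) // 2,  j_k = (b_k + 1) // 2,
# X_k = (a_k + 1) % 2,  Y_k = b_k % 2, which selects the factor Theta^{X_k Y_k}.
for c in pairings(list(range(1, 5))):        # the three contractions of order m = 2
    print(c, permutation_sign(c))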
§ SUPPLEMENTAL RESULTS In this section, we present the results of our simulation for variable quenching times, supplementing the results in the main text. §.§ Local representation of observables The calculations of current and quasi-particle number made in the main text have been integrated over specific areas. Here we show a few snapshots of the local current density and the local excitation density for two values of vt_inj/W (left and right panels of Fig. <ref>). We show three different times in which the injection and fusion can be observed. In the left panels, for long injections, the excitation entirely leaves the junction. In the right panel (which corresponds to Fig. <ref>), the excitation density slowly decays from the junction, at times even after t>τ=50a/v when the quench is over. The current density is zero in the superconducting region as the Majorana fermions are chargeless. Only upon fusion do the excitations produce charge. Here, the charge production at short injection times is much smaller, which is shown quantitatively in the next section. It is worth noting that while the excitations can remain trapped in the junction, they do not carry charge. §.§ Current density in the long junction regime For completeness, we include the calculations of charge at the exit for the different quenching times. In Fig. <ref> we show the excitation spectrum, quasi-particle number, current, and charge for different values of vt_inj/W, as discussed in Sec. <ref>. We can see how the occupancy of the junction increases as the injection time becomes shorter. As the contribution of the excitations in the junction becomes significant, the predictions for quantized charge are no longer valid. This can be seen in the bottom part of Fig. <ref>, where the charge is no longer quantized. Additionally, as shown in Fig. <ref>, a fast injection causes a large path-length difference as the junction traps the excitations and leaks them into the top and bottom edges at different rates. This results in further interference effects upon fusion.
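For reference, the transferred charge is related to the exit current by time integration; a minimal sketch of such a post-processing step is given below (our own illustration, not necessarily the procedure used here; trapezoidal rule, with times and current_at_exit as placeholder arrays for the simulated current trace and prefactors omitted).

import numpy as np

def transferred_charge(times, current_at_exit):
    """Cumulative charge Q(t) obtained by integrating the exit current over time."""
    increments = 0.5 * (current_at_exit[1:] + current_at_exit[:-1]) * np.diff(times)
    return np.concatenate(([0.0], np.cumsum(increments)))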
http://arxiv.org/abs/2307.04547v1
20230710132556
Spectral Observables and Gauge Field Couplings in Causal Dynamical Triangulations
[ "Giuseppe Clemente", "Massimo D'Elia" ]
hep-th
[ "hep-th" ]
http://arxiv.org/abs/2307.04789v2
20230710180003
Anomalous Andreev Bound States in Non-Hermitian Josephson Junctions
[ "Chang-An Li", "Hai-Peng Sun", "Björn Trauzettel" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall" ]
[email protected] Institute for Theoretical Physics and Astrophysics, University of Wrzburg, 97074 Wrzburg, Germany Wrzburg-Dresden Cluster of Excellence ct.qmat, Germany Institute for Theoretical Physics and Astrophysics, University of Wrzburg, 97074 Wrzburg, Germany Institute for Theoretical Physics and Astrophysics, University of Wrzburg, 97074 Wrzburg, Germany Wrzburg-Dresden Cluster of Excellence ct.qmat, Germany We propose a non-Hermitian Josephson junction composed of two s-wave superconductors separated by a non-Hermitian barrier. We discover that the Andreev bound states in such a non-Hermitian Josephson junction exhibit several anomalous features: (i) the spectrum of Andreev bound states becomes complex-valued; (ii) the spectrum exhibits a Josephson gap, a finite phase window in which the existence of Andreev bound states is forbidden; and (iii) the Andreev bound states give rise to a complex supercurrent, the imaginary part of which signifies dissipative supercurrent. Moreover, in the case where the two superconductors are of p-wave type, we observe the destruction of Majorana zero modes due to the complex nature of Andreev bound state spectrum. Our predictions should be observable in Josephson junctions coupled to the environment. Anomalous Andreev Bound States in Non-Hermitian Josephson Junctions Bjrn Trauzettel August 12, 2023 =================================================================== Introduction.—Recently, there has been growing interest in non-Hermitian systems <cit.>. Non-Hermitian Hamiltonians emerge in a variety of fields such as disordered or correlated solids <cit.>, open or driven systems <cit.>, and resonance phenomena <cit.>. Distinct from the Hermitian case, the eigenvalues of non-Hermitian Hamiltonians are in general complex. The complex-valued nature of the spectrum gives rise to new features such as point gaps, non-Hermitian skin effects, and enriched topological classifications <cit.>. Many of these non-Hermitian properties have been theoretically studied and experimentally observed <cit.>. Whereas the efforts on characterizing novel non-Hermitian systems have achieved impressive progress, the role of the complex-valued spectrum due to non-Hermiticity in quantum transport is largely unexplored despite some related investigations <cit.>. Josephson junctions (JJs), consisting of two superconductors separated by a weak link, provide salient platforms for studying superconducting transport <cit.>. Hermitian JJs have been extensively studied for decades. Phase-coherent quantum transport in Hermitian JJs gives rise to the dc Josephson effect: a phase-dependent supercurrent without bias. In the short junction regime, the Josephson effect is entirely determined by the phase-dependence of Andreev bound states (ABSs) within the superconducting gap while the contribution from bulk states can be neglected <cit.>. Recently, a JJ formed by a 𝒫𝒯-symmetric non-Hermitian superconductor as a weak link has been considered, revealing discrete ABSs at specific Josephson phases <cit.>. It is fair to say that the influence of non-Hermiticity on fundamental properties of JJs, such as ABSs and supercurrent, is barely understood. However, this analysis is of high experimental relevance because all JJs are somehow coupled to the environment. In this work, we consider a short JJ constituted of two superconductors mediated by a non-Hermitian barrier potential, referred to as a non-Hermitian Josephson junction (NHJJ) [see Fig. <ref>(a)]. 
It is an effective model for the setup shown in Fig. <ref>(b), in which a dissipative lead is coupled to the interface of the junction <cit.>. We find that the ABSs in such an NHJJ show anomalous behavior as compared to their Hermitian counterparts. Their spectrum shows complex values and exhibits Josephson gaps. The Josephson gaps indicate phase windows in which ABSs do not exist [purple regions in Fig. <ref>(c)]. As a result, the Josephson current becomes complex and its imaginary part signifies dissipative supercurrent. In addition, in p-wave JJs with a non-Hermitian barrier potential, we show that Majorana zero modes and the fractional Josephson effect are prohibited due to the complex-valued nature of the ABS spectrum. Model and particle-hole symmetry.—We consider a minimal 1D NHJJ model as sketched in Fig. <ref>(a): Two superconductors are separated by a non-Hermitian barrier. The superconductors can either be of s-wave or p-wave type. The barrier potential is characterized by an imaginary potential U(x)=-iVδ(x), V>0, which physically represents an on-site loss. The strength V measures the magnitude of the loss. We first analyze the particle-hole symmetry (PHS) constraint on the effective Hamiltonian such that the non-Hermitian barrier potential can be properly taken into account. The NHJJ is described by a Bogoliubov-de Gennes (BdG) Hamiltonian H=∫ dx Ψ^†(x)ℋ_BdGΨ(x), where Ψ^†(x) and Ψ(x) are the field operators. Due to the non-Hermiticity of the Hamiltonian considered here (H^*≠ H^T), PHS acting on ℋ_BdG can be considered in two distinct ways <cit.>: Uℋ_BdG^*U^-1 =-ℋ_BdG (type I), Uℋ_BdG^TU^-1 =-ℋ_BdG (type II). Here, T is the transpose operation and U is a unitary matrix. In the presence of PHS, the energy eigenvalues of ℋ_BdG always come in pairs. For the above two types of PHSs, the corresponding energy pairs are E⟷-E^* for type I PHS and E⟷-E for type II PHS, respectively. To determine which type of PHS acting on ℋ_BdG is more relevant, we further consider the PHS constraint on the retarded Green's function. We find that the retarded Green's function transforms as U^†[G^R(E)]^*U=-G^R(-E) under PHS (see <cit.> for more details). This transformation (founded in causality) is consistent with type I PHS stated in Eq. (<ref>). Therefore, we conclude that type I PHS acting on ℋ_BdG is physically more relevant. In the following, we only consider the influence of type I PHS on the spectrum of ABSs. The BdG Hamiltonian to describe the NHJJ takes the form ℋ_BdG=([ [-ħ^2∂_x^2/2m-μ]+U(x) Δ̂(x); Δ̂^†(x) -[-ħ^2∂_x^2/2m-μ]+U(x) ]), where m is the effective mass of the electrons, μ the chemical potential, and Δ̂(x) the pairing potential, which can be either of s-wave or p-wave type in our model. The corresponding Nambu basis is chosen as Ψ^†(x)=(Ψ_↑^†(x),Ψ_↓(x)). Considering the spectrum of the system, electron and hole excitations are described by the BdG equation ℋ_BdGψ(x)=Eψ(x), where the wave function ψ(x)=(u(x),v(x))^T is a mixture of electron and hole components, and E is the excitation energy measured relative to the Fermi energy. Complex ABS spectrum.—We first consider an s-wave NHJJ, where the two superconductors are both of s-wave type. Without loss of generality, we assume the left and right superconductors have a constant pairing potential of the same magnitude but different phases, i.e., Δ̂(x<0)=Δ and Δ̂(x>0)=Δ e^iϕ where ϕ is the phase difference across the junction. This pairing potential together with Eq. (<ref>) constitutes a simple but nontrivial model of a NHJJ.
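To make the model concrete, one possible lattice regularization of the BdG Hamiltonian above is sketched below: a 1D chain with hopping t, an on-site loss -iV on the central site entering both the electron and the hole block, and an s-wave pairing phase step across the barrier. All parameter values and names are illustrative assumptions, not those of the paper.

import numpy as np

def nhjj_bdg_matrix(N=201, t=1.0, mu=0.5, Delta=0.1, phi=np.pi / 2, V=0.2):
    """Discretized non-Hermitian Josephson junction:
    electron block H0 + U, hole block -H0 + U, with U = -iV on the barrier site."""
    barrier = N // 2
    H0 = np.diag((2 * t - mu) * np.ones(N)) \
        - t * (np.eye(N, k=1) + np.eye(N, k=-1))            # kinetic term minus mu
    U = np.zeros((N, N), dtype=complex)
    U[barrier, barrier] = -1j * V                            # on-site loss
    pairing = np.where(np.arange(N) < barrier, Delta, Delta * np.exp(1j * phi))
    D = np.diag(pairing)
    return np.block([[H0 + U, D],
                     [D.conj().T, -H0 + U]])

# The spectrum is complex; sub-gap levels can be compared with the analytic E_B(phi).
energies = np.linalg.eigvals(nhjj_bdg_matrix())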
We are interested in the ABSs existing within the superconducting gap. We may write the wave function of bound states as <cit.> ψ_B(x)=∑_η,αA_ηαe^iασ_ηk_ηx([ e^iθ_ηα/2; e^-iθ_ηα/2 ]) with η=e/h, σ_e≡1, σ_h≡-1, and α=±. The angles are defined as θ_η+≡σ_ηarccos(E_B/Δ)+ϕ and θ_η-≡σ_ηarccos(E_B/Δ) with the energy of bound states E_B. A_ηα is the coefficient of different evanescent modes. The bound states should decay exponentially for |x|→∞ with a finite decay length λ. Therefore, the necessary condition for the existence of ABSs is given by λ=ħ v_F/Δ1/√(1-E_B^2/Δ^2), Re(λ)>0, where v_F is the Fermi velocity. This condition also determines the range of Josephson gaps as we explain below. The boundary conditions of ψ_B(x) at x=0 yield the secular equation to determine the ABSs as <cit.> (Z^2+1)+2iZ√(Δ^2/E_B^2-1)=Δ^2/E_B^2[Z^2+cos^2(ϕ/2)]. The dimensionless non-Hermitian barrier strength Z for the non-Hermitian barrier potential is defined as Z≡mV/ħ^2k_F, where k_F is the Fermi wavevector. Noticing the imaginary term 2iZ√(Δ^2/E_B^2-1) in Eq. (<ref>), the spectrum of ABSs can be complex, fundamentally different from Hermitian JJs <cit.>. At Z=0, the spectrum of ABSs reduces to the Kulik-Omel'yanchuk (KO) limit E_B^±(ϕ)=±Δcos(ϕ/2) for a bare JJ <cit.>. For Z≫1, the spectrum becomes E_B^±(ϕ)=±Δ, merging to the superconducting gap edges (without bound states). Therefore, we mainly focus on the most interesting regime with 0<Z<1 in the following [For the case Z≥1, the condition for a bound state Re(λ)>0 is not satisfied. ]. Let us consider special values of ϕ first and subsequently present the general solution of Eq. (<ref>). At ϕ=2nπ with integer n, a trivial solution E_B^±(ϕ=2nπ)=±Δ is possible, merging into the bulk continuum. At ϕ=(2n+1)π, we find that the spectrum of ABSs is purely imaginary taking E_B^±[(2n+1)π]=-iZΔ/√(1-Z^2), consistent with constraint on the spectrum from type I PHS. For general phase difference ϕ∈[ϕ_b,ϕ_t], solving the secular equation Eq. (<ref>), this leads to E_B^±(ϕ)/Δ=±ζcos(ϕ/2)-iZ√(sin^2(ϕ/2)-Z^2)/1-Z^2, where we define the sign function ζ=sgn[cos(ϕ/2)], bottom phase edge ϕ_b≡2nπ+ϕ_0(Z), and top phase edge ϕ_t≡(2n+1)π-ϕ_0(Z) with ϕ_0(Z)≡2arcsin(√(2)Z/√(1+Z^2)). Note that the ABSs cannot reach zero excitation energy for nonzero Z. We plot the spectrum of ABSs in Fig. <ref>(c). Notably, the spectrum of ABSs becomes complex, indicating the coupling to the environment. Josephson gap.—In general, the spectrum of ABSs is a continuous function with respect to the Josephson phase ϕ. However, we find that within a finite phase window Φ_W≡[2nπ-ϕ_0(Z),2nπ+ϕ_0(Z)], ABSs are not allowed in the NHJJ. We call such phase windows Josephson gaps, where ϕ_0(Z) denotes the Josephson gap edge. At ϕ_0(Z), the ABSs exhibit E_B^±(ϕ_0)≈Δ(-iZ^2) for Z≪1. Analytically, the Josephson gap is a direct consequence of the constraint condition of bound states in Eq. (<ref>). We now explain the appearance of Josephson gaps as a consequence of the competition between finite phase difference ϕ across the junction and phase-breaking scattering at the junction. In Hermitian JJs, in response to a finite phase difference across the junction, a supercurrent carried by ABSs flows from one superconductor to the other one <cit.>. This mechanism applies similarly to the non-Hermitian case. However, the non-Hermitian barrier potential introduces decoherence of quasi-particles. 
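As an aside, the closed-form spectrum and the gap edge derived above are straightforward to evaluate; a minimal numerical sketch is given below (plain NumPy, restricted to the principal phase interval; the function names are ours).

import numpy as np

def abs_spectrum_swave(phi, Z, Delta=1.0):
    """E_B^+(phi)/Delta = [zeta cos(phi/2) - i Z sqrt(sin^2(phi/2) - Z^2)] / (1 - Z^2),
    defined only outside the Josephson gap, i.e. for sin^2(phi/2) >= Z^2.
    E_B^- = -[E_B^+]^* by type I particle-hole symmetry."""
    phi = np.asarray(phi, dtype=float)
    outside = np.sin(phi / 2) ** 2 >= Z ** 2
    zeta = np.sign(np.cos(phi / 2))
    E = Delta * (zeta * np.cos(phi / 2)
                 - 1j * Z * np.sqrt(np.abs(np.sin(phi / 2) ** 2 - Z ** 2))) / (1 - Z ** 2)
    return np.where(outside, E, np.nan * (1 + 1j))           # NaN inside the gap

def josephson_gap_edge(Z):
    """phi_0(Z) = 2 arcsin(sqrt(2) Z / sqrt(1 + Z^2)); the gap is 2 phi_0(Z) wide."""
    return 2 * np.arcsin(np.sqrt(2) * Z / np.sqrt(1 + Z ** 2))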
As a result, supercurrent and thus ABSs do not appear unless the phase difference ϕ is large enough to overcome the non-Hermitian barrier strength Z. To see this point more clearly, let us focus on the case ϕ∼0 and Z≪1, where we obtain the Josephson gap edge increasing with Z as ϕ_0(Z)≃2√(2)Z. Indeed, the Josephson gap increases almost linearly with increasing barrier potential strength Z even for larger Z, as shown in Fig. <ref>(a). The Josephson gap can be described by 2ϕ_0(Z)=4arcsin(√(2)Z/√(1+Z^2)), which is controllable by tuning Z. Note that it approaches 2π in the limit Z→1, indicating the total suppression of ABSs. In the Hermitian case, the spectrum of ABSs can be written as E_B^±(ϕ)=±Δ√(1-Tsin^2(ϕ/2)) with T the transmission probability through the junction in the normal state <cit.>. However, the complex ABS spectrum of NHJJ does not fulfill a similar relation. The reason is that the scattering matrix becomes non-unitary and particle number (modulo 2) is not conserved any more <cit.>. Dissipative supercurrent.—The current-phase relation is an important characterization of JJs. We obtain the supercurrent by employing the formula I_s(ϕ)=2e/ħdℱ(ϕ)/dϕ where ℱ(ϕ) is the free energy of the system <cit.>. At zero temperature, the free energy of the JJ is determined by ABSs. Therefore, the supercurrent flowing across the junction is carried by ABSs. In the non-Hermitian case, the supercurrent can be complex. It is given by I_s(ϕ)=-2e/ħdRe[E_B^+(ϕ)]/dϕ+i2e/ħdIm[E_B^+(ϕ)]/dϕ. Explicitly, within the Josephson phase region ϕ∈[ϕ_b,ϕ_t], it reads I_s(ϕ)=Δ e/ħ[ζsin(ϕ/2)/1-Z^2-iZsin(ϕ)/2(1-Z^2)√(sin^2(ϕ/2)-Z^2)]. The current-phase relation is plotted in Fig. <ref>(b). We note that it is 2π-periodic and the supercurrent does not appear within the Josephson gap Φ_W. The supercurrent exhibits a jump at the Josephson gap edges. In particular, the real part Re[I_s(ϕ)] jumps from 0 to √(2)Δ Ze/ħ for Z≪1, at the Josephson gap edges ϕ_0(Z). The supercurrent is related to phase-coherent multiple Andreev reflections. Hence, ABSs appear and carry the supercurrent across the junction by Cooper pairs transfer <cit.>. In the non-Hermitian case, the spectrum of ABSs stated in Eq. (<ref>) is complex with a negative imaginary part. Indeed, the resonant condition for the Andreev reflection coefficient A_h-(E)=Δ/E[cos^2ϕ/2+√(Δ^2-E^2)(sinϕ+i2Z)/2E]/(Z^2+1)E^2-2iZE√(Δ^2-E^2)-Δ^2[Z^2+cos^2ϕ/2] inherits the same spectrum <cit.>. Therefore, quasi-particles (electrons and holes) obtain a finite life time such that they partially lose their phase memory when traveling to the interfaces, as sketched in Fig. <ref>(c). This introduces dissipation to the multiple Andreev reflections process <cit.>. Hence, the supercurrent becomes complex. In the dissipative multiple Andreev reflections, quasi-particles with finite life time escape into the external bath. Correspondingly, the imaginary part of Eq. (<ref>) may effectively be interpreted as a leakage of supercurrent to the environment. Its magnitude indicates how strong it leaks out of the JJ. We note that imaginary currents in normal states have been employed to describe delocalized behavior or dissipation of eigenstates <cit.>, while to the best of our knowledge, a complex supercurrent has not been addressed before. Hence, the above anomalous features, including the complex ABS spectrum, Josephson gaps, and the dissipative supercurrent, constitute defining properties of NHJJs, with no counterparts in the Hermitian limit. These features may have potential applications. 
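The complex current-phase relation above can be reproduced by differentiating the ABS energy; a minimal sketch is given below (central finite differences, with e = ħ = 1 so the current is in units of eΔ/ħ; the closed-form E_B^+ is re-evaluated inline, and the expression is meaningful only outside the Josephson gap).

import numpy as np

def supercurrent_swave(phi, Z, Delta=1.0, dphi=1e-4):
    """I_s(phi) = -(2e/hbar) dRe[E_B^+]/dphi + i (2e/hbar) dIm[E_B^+]/dphi,
    evaluated outside the Josephson gap, sin^2(phi/2) >= Z^2."""
    def E_B(p):                                   # closed-form E_B^+(p) quoted above
        zeta = np.sign(np.cos(p / 2))
        return Delta * (zeta * np.cos(p / 2)
                        - 1j * Z * np.sqrt(np.sin(p / 2) ** 2 - Z ** 2 + 0j)) / (1 - Z ** 2)
    E_p, E_m = E_B(phi + dphi), E_B(phi - dphi)
    dRe = (E_p.real - E_m.real) / (2 * dphi)
    dIm = (E_p.imag - E_m.imag) / (2 * dphi)
    return 2 * (-dRe + 1j * dIm)                  # with e = hbar = 1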
For example, in the presence of Josephson gaps, the NHJJ can in principle work as a supercurrent switch by tuning the Josephson phase: “on” state with ϕ located outside of Josephson gaps; “off” state with ϕ inside of Josephson gaps. Anomalous ABSs in p-wave NHJJs.—Next, we examine the existence of anomalous ABSs in p-wave NHJJs, where the two superconductors are both of p-wave type. The BdG Hamiltonian is the same as in Eq. (<ref>) but with the pairing potential changed to p-wave pairing as Δ̂(x<0)=-iΔ∂_x/k_F and Δ̂(x>0)=-iΔ e^iϕ∂_x/k_F <cit.>. Following the same procedure as before, we arrive at a secular equation to determine ABSs as (Z^2+1)+2iZ√(Δ^2/E_B^2-1)=Δ^2/E_B^2cos^2(ϕ/2) for p-wave NHJJs <cit.>. At Z=0, the ABSs reduce to E_B^±(ϕ)=±Δcos(ϕ/2). We focus on the regime 0<Z<1 hereafter. In the Josephson phase region ϕ∈[ϕ_b^p,ϕ_t^p], the general solution of the secular equation yields E_B^+(ϕ)/Δ=√(cos^2(ϕ/2)-Z^2)-iZϑsin(ϕ/2)/1-Z^2 with E_B^-(ϕ)=-[E_B^+(ϕ)]^* by type I PHS. The bottom and top phase edges are defined as ϕ_b^p≡2nπ+ϕ_0^p(Z) and ϕ_t^p≡(2n+1)π-ϕ_0^p(Z) with ϕ_0^p(Z)=2arcsin(Z√(1-Z^2/1+Z^2)), and the sign function is given by ϑ=sgn[sin(ϕ/2)]. At the special Josephson phase ϕ=(2n+1)π, the energy modes are located at E_B^±[(2n+1)π]=-2iZΔ/(1-Z^2), deviating from the well-known Majorana zero-energy modes in p-wave Josephson junctions <cit.>. Due to type I PHS in the p-wave case, if (u,v)^T is an eigenstate with eigenenergy E, then (v^*,u^*)^T is also an eigenstate with eigenenergy -E^*. Since there is no zero-energy solution in the non-Hermitian case, the Majorana condition (u,v)^T∝(v^*,u^*)^T cannot be fulfilled, such that the topological protection of Majorana zero modes is destroyed by the non-Hermitian coupling. A similar conclusion has been drawn in normal-superconductor (NS) junctions formed with 1D topological superconductors <cit.>. We plot the spectrum of ABSs in the p-wave NHJJ in Fig. <ref>(a). The spectrum is complex and it exhibits Josephson gaps Φ_W^p=[2nπ-ϕ_0^p(Z),2nπ+ϕ_0^p(Z)] with a gap value of 2ϕ_0^p(Z)=4arcsin(Z√(1-Z^2/1+Z^2)). It shows 2π periodicity in the presence of Josephson gaps, different from Hermitian topological Josephson junctions <cit.>. In the limits ϕ∼0 and Z≪1, the Josephson gap edge is ±ϕ_0^p(Z)=±2Z, proportional to Z. At ϕ=ϕ_0^p(Z), the ABSs become E_B^±(ϕ_0^p)≈Δ(-iZ^2), the same as in the s-wave case. However, the Josephson gap does not always increase with increasing Z, as shown in Fig. <ref>(b). It reaches a maximum value 4arcsin(√(2)-1) at Z_m=√(√(2)-1) and then decreases to zero gradually. Similar to the s-wave case, the complex nature of ABSs can also give rise to a complex supercurrent. Discussion and conclusion.—The non-Hermitian barrier potential mimics the coupling of the JJ to the environment through a dissipative lead <cit.>. Consequently, quasi-particles obtain a finite lifetime and partially leak from the junction to the environment. This loss process is reflected by the loss term in Eq. (<ref>) <cit.>. It gives rise to dissipation responsible for anomalous non-Hermitian physics in the junction. Alternatively, the relevant dissipation may also be induced by shedding light at the interface <cit.>. Our results are obtained for 1D models, for simplicity, but they can be generalized to higher spatial dimensions in a straightforward way. In conclusion, we have analyzed the physical properties of NHJJs with anomalous features.
In particular, Josephson gaps emerge in the ABSs due to the competition of the coherent superconducting phase with the non-Hermitian decoherence effect; due to the complex-valued nature of the ABSs, the supercurrent carried by the ABSs becomes dissipative. These characteristic properties of NHJJs have no Hermitian counterparts. We expect that such anomalous features can be observed in JJs coupled to the environment. C.A.L. and H.P.S. contributed equally to this work. We thank Jan Budich, Jian Li, and Chunxu Zhang for helpful discussions. This work was supported by the Würzburg-Dresden Cluster of Excellence ct.qmat, EXC2147, project-id 390858490, the DFG (SFB 1170), and the Bavarian Ministry of Economic Affairs, Regional Development and Energy within the High-Tech Agenda Project “Bausteine für das Quanten Computing auf Basis topologischer Materialen”. [Bender(2007)]Bender07RPP author author C. M. Bender, title title Making sense of non-hermitian hamiltonians, 10.1088/0034-4885/70/6/r03 journal journal Rep. Prog. Phys. volume 70, pages 947 (year 2007)NoStop [El-Ganainy et al.(2018)El-Ganainy, Makris, Khajavikhan, Musslimani, Rotter, and Christodoulides]El-Ganainy18nphys author author R. El-Ganainy, author K. G. Makris, author M. Khajavikhan, author Z. H. Musslimani, author S. Rotter, and author D. N. Christodoulides, title title Non-Hermitian physics and pt symmetry, https://doi.org/10.1038/nphys4323 journal journal Nat. Phys. volume 14, pages 11 (year 2018)NoStop [Ashida et al.(2020)Ashida, Gong, and Ueda]Ashida20AP author author Y. Ashida, author Z. Gong, and author M. Ueda, title title Non-Hermitian physics, 10.1080/00018732.2021.1876991 journal journal Adv. Phys. volume 69, pages 249 (year 2020)NoStop [Bergholtz et al.(2021)Bergholtz, Budich, and Kunst]Bergholtz21rmp author author E. J. Bergholtz, author J. C. Budich, and author F. K. Kunst, title title Exceptional topology of non-Hermitian systems, 10.1103/RevModPhys.93.015005 journal journal Rev. Mod. Phys. volume 93, pages 015005 (year 2021)NoStop [Okuma and Sato(2023)]Okuma23arcmp author author N. Okuma and author M. Sato, title title Non-Hermitian topological phenomena: A review, 10.1146/annurev-conmatphys-040521-033133 journal journal Ann. Rev. Condens. Matter Phys. volume 14, pages 83 (year 2023)NoStop [Zyuzin and Zyuzin(2018)]Zyuzin18prb author author A. A. Zyuzin and author A. Y. Zyuzin, title title Flat band in disorder-driven non-Hermitian Weyl semimetals, 10.1103/PhysRevB.97.041203 journal journal Phys. Rev. B volume 97, pages 041203(R) (year 2018)NoStop [Papaj et al.(2019)Papaj, Isobe, and Fu]Papaj19prb author author M. Papaj, author H. Isobe, and author L. Fu, title title Nodal arc of disordered Dirac fermions and non-Hermitian band theory, 10.1103/PhysRevB.99.201107 journal journal Phys. Rev. B volume 99, pages 201107(R) (year 2019)NoStop [Michen and Budich(2022)]Michen22prr author author B. Michen and author J. C. Budich, title title Mesoscopic transport signatures of disorder-induced non-Hermitian phases, 10.1103/PhysRevResearch.4.023248 journal journal Phys. Rev. Res. volume 4, pages 023248 (year 2022)NoStop [Yoshida et al.(2019)Yoshida, Peters, Kawakami, and Hatsugai]Yoshida19prb author author T. Yoshida, author R. Peters, author N. Kawakami, and author Y.
Hatsugai, title title Symmetry-protected exceptional rings in two-dimensional correlated systems with chiral symmetry, 10.1103/PhysRevB.99.121101 journal journal Phys. Rev. B volume 99, pages 121101(R) (year 2019)NoStop [Nagai et al.(2020)Nagai, Qi, Isobe, Kozii, and Fu]Nagai20prl author author Y. Nagai, author Y. Qi, author H. Isobe, author V. Kozii, and author L. Fu, title title DMFT reveals the non-Hermitian topology and Fermi arcs in heavy-fermion systems, 10.1103/PhysRevLett.125.227204 journal journal Phys. Rev. Lett. volume 125, pages 227204 (year 2020)NoStop [Zhang and Gong(2020)]Zhangx20prb author author X. Zhang and author J. Gong, title title Non-Hermitian floquet topological phases: Exceptional points, coalescent edge modes, and the skin effect, 10.1103/PhysRevB.101.045415 journal journal Phys. Rev. B volume 101, pages 045415 (year 2020)NoStop [Makris et al.(2008)Makris, El-Ganainy, Christodoulides, and Musslimani]Makris08prl author author K. G. Makris, author R. El-Ganainy, author D. N. Christodoulides, and author Z. H. Musslimani, title title Beam dynamics in 𝒫𝒯 symmetric optical lattices, 10.1103/PhysRevLett.100.103904 journal journal Phys. Rev. Lett. volume 100, pages 103904 (year 2008)NoStop [Guo et al.(2009)Guo, Salamo, Duchesne, Morandotti, Volatier-Ravat, Aimez, Siviloglou, and Christodoulides]GuoA09prl author author A. Guo, author G. J. Salamo, author D. Duchesne, author R. Morandotti, author M. Volatier-Ravat, author V. Aimez, author G. A. Siviloglou, and author D. N. Christodoulides, title title Observation of 𝒫𝒯-symmetry breaking in complex optical potentials, 10.1103/PhysRevLett.103.093902 journal journal Phys. Rev. Lett. volume 103, pages 093902 (year 2009)NoStop [Nakagawa et al.(2018)Nakagawa, Kawakami, and Ueda]Nakagawa18prl author author M. Nakagawa, author N. Kawakami, and author M. Ueda, title title Non-Hermitian Kondo effect in ultracold alkaline-earth atoms, 10.1103/PhysRevLett.121.203001 journal journal Phys. Rev. Lett. volume 121, pages 203001 (year 2018)NoStop [Rotter(2009)]Rotter09JA author author I. Rotter, title title A non-Hermitian Hamilton operator and the physics of open quantum systems, 10.1088/1751-8113/42/15/153001 journal journal J. Phys. A: Math. Theor. volume 42, pages 153001 (year 2009)NoStop [Li et al.(2019a)Li, Harter, Liu, de Melo, Joglekar, and Luo]Lij19nc author author J. Li, author A. K. Harter, author J. Liu, author L. de Melo, author Y. N. Joglekar, and author L. Luo, title title Observation of parity-time symmetry breaking transitions in a dissipative floquet system of ultracold atoms, 10.1038/s41467-019-08596-1 journal journal Nat. Commun. volume 10, pages 855 (year 2019a)NoStop [Xiao et al.(2019)Xiao, Wang, Zhan, Bian, Kawabata, Ueda, Yi, and Xue]Xiaol19prl author author L. Xiao, author K. Wang, author X. Zhan, author Z. Bian, author K. Kawabata, author M. Ueda, author W. Yi, and author P. Xue, title title Observation of critical phenomena in parity-time-symmetric quantum dynamics, 10.1103/PhysRevLett.123.230401 journal journal Phys. Rev. Lett. volume 123, pages 230401 (year 2019)NoStop [Moiseyev(2011)]Moiseyev11 author author N. Moiseyev, https://doi.org/10.1017/CBO9780511976186 title Non-Hermitian Quantum Mechanics (publisher Cambridge University Press, Cambridge, UK, year 2011)NoStop [Lee(2016)]Lee16prl author author T. E. Lee, title title Anomalous edge state in a non-Hermitian lattice, 10.1103/PhysRevLett.116.133903 journal journal Phys. Rev. Lett. 
volume 116, pages 133903 (year 2016)NoStop [Yao and Wang(2018)]Yao18prl author author S. Yao and author Z. Wang, title title Edge states and topological invariants of non-Hermitian systems, 10.1103/PhysRevLett.121.086803 journal journal Phys. Rev. Lett. volume 121, pages 086803 (year 2018)NoStop [Kunst et al.(2018)Kunst, Edvardsson, Budich, and Bergholtz]Kunst18prl author author F. K. Kunst, author E. Edvardsson, author J. C. Budich, and author E. J. Bergholtz, title title Biorthogonal bulk-boundary correspondence in non-Hermitian systems, 10.1103/PhysRevLett.121.026808 journal journal Phys. Rev. Lett. volume 121, pages 026808 (year 2018)NoStop [Gong et al.(2018)Gong, Ashida, Kawabata, Takasan, Higashikawa, and Ueda]GonZP18prx author author Z. Gong, author Y. Ashida, author K. Kawabata, author K. Takasan, author S. Higashikawa, and author M. Ueda, title title Topological phases of non-Hermitian systems, 10.1103/PhysRevX.8.031079 journal journal Phys. Rev. X volume 8, pages 031079 (year 2018)NoStop [Zhou and Lee(2019)]Zhou19prb author author H. Zhou and author J. Y. Lee, title title Periodic table for topological bands with non-Hermitian symmetries, 10.1103/PhysRevB.99.235112 journal journal Phys. Rev. B volume 99, pages 235112 (year 2019)NoStop [Kawabata et al.(2019a)Kawabata, Shiozaki, Ueda, and Sato]Kawabata19prx author author K. Kawabata, author K. Shiozaki, author M. Ueda, and author M. Sato, title title Symmetry and topology in non-Hermitian physics, 10.1103/PhysRevX.9.041015 journal journal Phys. Rev. X volume 9, pages 041015 (year 2019a)NoStop [Kawabata et al.(2019b)Kawabata, Bessho, and Sato]Kawabata19prl author author K. Kawabata, author T. Bessho, and author M. Sato, title title Classification of exceptional points and non-Hermitian topological semimetals, 10.1103/PhysRevLett.123.066405 journal journal Phys. Rev. Lett. volume 123, pages 066405 (year 2019b)NoStop [Zhang et al.(2020)Zhang, Yang, and Fang]ZhangK20prl author author K. Zhang, author Z. Yang, and author C. Fang, title title Correspondence between winding numbers and skin modes in non-Hermitian systems, 10.1103/PhysRevLett.125.126402 journal journal Phys. Rev. Lett. volume 125, pages 126402 (year 2020)NoStop [Okuma et al.(2020)Okuma, Kawabata, Shiozaki, and Sato]Okuma20prl author author N. Okuma, author K. Kawabata, author K. Shiozaki, and author M. Sato, title title Topological origin of non-Hermitian skin effects, 10.1103/PhysRevLett.124.086801 journal journal Phys. Rev. Lett. volume 124, pages 086801 (year 2020)NoStop [Yao et al.(2018)Yao, Song, and Wang]Yao18prl2 author author S. Yao, author F. Song, and author Z. Wang, title title Non-Hermitian Chern bands, 10.1103/PhysRevLett.121.136802 journal journal Phys. Rev. Lett. volume 121, pages 136802 (year 2018)NoStop [Leykam et al.(2017)Leykam, Bliokh, Huang, Chong, and Nori]Leykam17prl author author D. Leykam, author K. Y. Bliokh, author C. Huang, author Y. D. Chong, and author F. Nori, title title Edge modes, degeneracies, and topological numbers in non-Hermitian systems, 10.1103/PhysRevLett.118.040401 journal journal Phys. Rev. Lett. volume 118, pages 040401 (year 2017)NoStop [Shen et al.(2018)Shen, Zhen, and Fu]ShenH18prl author author H. Shen, author B. Zhen, and author L. Fu, title title Topological band theory for non-Hermitian Hamiltonians, 10.1103/PhysRevLett.120.146402 journal journal Phys. Rev. Lett. volume 120, pages 146402 (year 2018)NoStop [Yin et al.(2018)Yin, Jiang, Li, Lü, and Chen]Yin18prb author author C. Yin, author H. Jiang, author L. Li, author R. 
Lü, and author S. Chen, title title Geometrical meaning of winding number and its characterization of topological phases in one-dimensional chiral non-Hermitian systems, 10.1103/PhysRevA.97.052115 journal journal Phys. Rev. A volume 97, pages 052115 (year 2018)NoStop [Yokomizo and Murakami(2019)]Yokomizo19prl author author K. Yokomizo and author S. Murakami, title title Non-bloch band theory of non-Hermitian systems, 10.1103/PhysRevLett.123.066404 journal journal Phys. Rev. Lett. volume 123, pages 066404 (year 2019)NoStop [Lee and Thomale(2019)]LeeCH19prb author author C. H. Lee and author R. Thomale, title title Anatomy of skin modes and topology in non-Hermitian systems, 10.1103/PhysRevB.99.201103 journal journal Phys. Rev. B volume 99, pages 201103(R) (year 2019)NoStop [Longhi(2019)]Longhi19prl author author S. Longhi, title title Topological phase transition in non-Hermitian quasicrystals, 10.1103/PhysRevLett.122.237601 journal journal Phys. Rev. Lett. volume 122, pages 237601 (year 2019)NoStop [Lee et al.(2019)Lee, Ahn, Zhou, and Vishwanath]LeeJY19prl author author J. Y. Lee, author J. Ahn, author H. Zhou, and author A. Vishwanath, title title Topological correspondence between Hermitian and non-Hermitian systems: Anomalous dynamics, 10.1103/PhysRevLett.123.206404 journal journal Phys. Rev. Lett. volume 123, pages 206404 (year 2019)NoStop [Borgnia et al.(2020)Borgnia, Kruchkov, and Slager]Borgnia20prl author author D. S. Borgnia, author A. J. Kruchkov, and author R.-J. Slager, title title Non-Hermitian boundary modes and topology, 10.1103/PhysRevLett.124.056802 journal journal Phys. Rev. Lett. volume 124, pages 056802 (year 2020)NoStop [Budich and Bergholtz(2020)]Budich20prl author author J. C. Budich and author E. J. Bergholtz, title title Non-Hermitian topological sensors, 10.1103/PhysRevLett.125.180403 journal journal Phys. Rev. Lett. volume 125, pages 180403 (year 2020)NoStop [Franca et al.(2022)Franca, Könye, Hassler, van den Brink, and Fulga]Franca22prl author author S. Franca, author V. Könye, author F. Hassler, author J. van den Brink, and author C. Fulga, title title Non-Hermitian physics without gain or loss: The skin effect of reflected waves, 10.1103/PhysRevLett.129.086601 journal journal Phys. Rev. Lett. volume 129, pages 086601 (year 2022)NoStop [Jezequel and Delplace(2023)]Jezequel23prl author author L. Jezequel and author P. Delplace, title title Non-Hermitian spectral flows and Berry-Chern monopoles, 10.1103/PhysRevLett.130.066601 journal journal Phys. Rev. Lett. volume 130, pages 066601 (year 2023)NoStop [Qin et al.(2023)Qin, Shen, and Lee]QinF23pra author author F. Qin, author R. Shen, and author C. H. Lee, title title Non-Hermitian squeezed polarons, 10.1103/PhysRevA.107.L010202 journal journal Phys. Rev. A volume 107, pages L010202 (year 2023)NoStop [Guo et al.(2023)Guo, Chen, Ding, and Hu]GuoC23prl author author C.-X. Guo, author S. Chen, author K. Ding, and author H. Hu, title title Exceptional non-Abelian topology in multiband non-Hermitian systems, 10.1103/PhysRevLett.130.157201 journal journal Phys. Rev. Lett. volume 130, pages 157201 (year 2023)NoStop [Li et al.(2022)Li, Trauzettel, Neupert, and Zhang]LiCA22arxiv author author C.-A. Li, author B. Trauzettel, author T. Neupert, and author S.-B. Zhang, title title Enhancement of second-order non-Hermitian skin effect by magnetic fields, @noop (year 2022), http://arxiv.org/abs/2212.14691 arXiv:2212.14691 [cond-mat.mes-hall] NoStop [Sun et al.(2023)Sun, Li, Feng, and Guo]SunJ23arxiv author author J. 
Sun, author C.-A. Li, author S. Feng, and author H. Guo, title title Hybrid higher-order skin-topological effect in hyperbolic lattices, @noop (year 2023), http://arxiv.org/abs/2305.19810 arXiv:2305.19810 [cond-mat.mes-hall] NoStop [Zeuner et al.(2015)Zeuner, Rechtsman, Plotnik, Lumer, Nolte, Rudner, Segev, and Szameit]Zeuner15prl author author J. M. Zeuner, author M. C. Rechtsman, author Y. Plotnik, author Y. Lumer, author S. Nolte, author M. S. Rudner, author M. Segev, and author A. Szameit, title title Observation of a topological transition in the bulk of a non-Hermitian system, 10.1103/PhysRevLett.115.040402 journal journal Phys. Rev. Lett. volume 115, pages 040402 (year 2015)NoStop [Ding et al.(2016)Ding, Ma, Xiao, Zhang, and Chan]DingK16prx author author K. Ding, author G. Ma, author M. Xiao, author Z. Q. Zhang, and author C. T. Chan, title title Emergence, coalescence, and topological properties of multiple exceptional points and their experimental realization, 10.1103/PhysRevX.6.021007 journal journal Phys. Rev. X volume 6, pages 021007 (year 2016)NoStop [Xiao et al.(2020)Xiao, Deng, Wang, Zhu, Wang, Yi, and Xue]Xiao20np author author L. Xiao, author T. Deng, author K. Wang, author G. Zhu, author Z. Wang, author W. Yi, and author P. Xue, title title Non-Hermitian bulk–boundary correspondence in quantum dynamics, 10.1038/s41567-020-0836-6 journal journal Nat. Phys. volume 16, pages 761 (year 2020)NoStop [Ozturk et al.(2021)Ozturk, Lappe, Hellmann, Schmitt, Klaers, Vewinger, Kroha, and Weitz]Fahri21science author author F. E. Ozturk, author T. Lappe, author G. Hellmann, author J. Schmitt, author J. Klaers, author F. Vewinger, author J. Kroha, and author M. Weitz, title title Observation of a non-Hermitian phase transition in an optical quantum gas, 10.1126/science.abe9869 journal journal Science volume 372, pages 88 (year 2021)NoStop [Liang et al.(2022)Liang, Xie, Dong, Li, Li, Gadway, Yi, and Yan]LiangQ22prl author author Q. Liang, author D. Xie, author Z. Dong, author H. Li, author H. Li, author B. Gadway, author W. Yi, and author B. Yan, title title Dynamic signatures of non-Hermitian skin effect and topology in ultracold atoms, 10.1103/PhysRevLett.129.070401 journal journal Phys. Rev. Lett. volume 129, pages 070401 (year 2022)NoStop [Zhang et al.(2023)Zhang, Li, Sun, Liu, Zhao, Feng, Fan, and Qiu]ZhangQ23prl author author Q. Zhang, author Y. Li, author H. Sun, author X. Liu, author L. Zhao, author X. Feng, author X. Fan, and author C. Qiu, title title Observation of acoustic non-Hermitian bloch braids and associated topological phase transitions, 10.1103/PhysRevLett.130.017201 journal journal Phys. Rev. Lett. volume 130, pages 017201 (year 2023)NoStop [Liang et al.(2023)Liang, Tang, Xu, and Liu]LiangC23prl author author C. Liang, author Y. Tang, author A.-N. Xu, and author Y.-C. Liu, title title Observation of exceptional points in thermal atomic ensembles, 10.1103/PhysRevLett.130.263601 journal journal Phys. Rev. Lett. volume 130, pages 263601 (year 2023)NoStop [Xu et al.(2023)Xu, Zhou, Li, Cao, Chen, Xiao, Yang, and Qiu]XuG23prl author author G. Xu, author X. Zhou, author Y. Li, author Q. Cao, author W. Chen, author Y. Xiao, author L. Yang, and author C.-W. Qiu, title title Non-Hermitian chiral heat transport, 10.1103/PhysRevLett.130.266303 journal journal Phys. Rev. Lett. volume 130, pages 266303 (year 2023)NoStop [San-Jose et al.(2016)San-Jose, Cayao, Prada, and Aguado]Sanjose16sr author author P. San-Jose, author J. Cayao, author E. Prada, and author R. 
Aguado, Majorana bound states from exceptional points in non-topological superconductors, Sci. Rep. 6, 21427 (2016).
[Zhu et al. (2016)] B. Zhu, R. Lü, and S. Chen, 𝒫𝒯-symmetry breaking for the scattering problem in a one-dimensional non-Hermitian lattice model, Phys. Rev. A 93, 032129 (2016).
[Longhi (2017)] S. Longhi, Non-Hermitian bidirectional robust transport, Phys. Rev. B 95, 014201 (2017).
[Chen and Zhai (2018)] Y. Chen and H. Zhai, Hall conductance of a non-Hermitian Chern insulator, Phys. Rev. B 98, 245130 (2018).
[Bergholtz and Budich (2019)] E. J. Bergholtz and J. C. Budich, Non-Hermitian Weyl physics in topological insulator ferromagnet junctions, Phys. Rev. Res. 1, 012003(R) (2019).
[Avila et al. (2019)] J. Avila, E. Prada, P. San-Jose, and R. Aguado, Non-Hermitian topology as a unifying framework for the Andreev versus Majorana states controversy, Commun. Phys. 2, 133 (2019).
[Shobe et al. (2021)] K. Shobe, K. Kuramoto, K.-I. Imura, and N. Hatano, Non-Hermitian Fabry-Pérot resonances in a 𝒫𝒯-symmetric system, Phys. Rev. Res. 3, 013223 (2021).
[Kornich and Trauzettel (2022)] V. Kornich and B. Trauzettel, Andreev bound states in junctions formed by conventional and 𝒫𝒯-symmetric non-Hermitian superconductors, Phys. Rev. Res. 4, 033201 (2022).
[Sticlet et al. (2022)] D. Sticlet, B. Dóra, and C. P. Moca, Kubo formula for non-Hermitian systems and tachyon optical conductivity, Phys. Rev. Lett. 128, 016802 (2022).
[Geng et al. (2023)] H. Geng, J. Y. Wei, M. H. Zou, L. Sheng, W. Chen, and D. Y. Xing, Nonreciprocal charge and spin transport induced by non-Hermitian skin effect in mesoscopic heterojunctions, Phys. Rev. B 107, 035306 (2023).
[Isobe and Nagaosa (2023)] H. Isobe and N. Nagaosa, Anomalous Hall effect from a non-Hermitian viewpoint, Phys. Rev. B 107, L201116 (2023).
[Kornich (2023)] V. Kornich, Current-voltage characteristics of the N-I-PT-symmetric non-Hermitian superconductor junction as a probe of non-Hermitian formalisms, arXiv:2302.14802 [cond-mat.mes-hall] (2023).
[Likharev (1979)] K. K. Likharev, Superconducting weak links, Rev. Mod. Phys. 51, 101 (1979).
[Beenakker (1992)] C. W. J. Beenakker, Three "universal" mesoscopic Josephson effects, in Transport Phenomena in Mesoscopic Systems, edited by H. Fukuyama and T. Ando (Springer, Berlin, Heidelberg, 1992), pp. 235–253.
[Golubov et al. (2004)] A. A. Golubov, M. Y. Kupriyanov, and E. Il'ichev, The current-phase relation in Josephson junctions, Rev. Mod. Phys. 76, 411 (2004).
[Tinkham (1996)] M. Tinkham, Introduction to Superconductivity (Dover Publications, Garden City, New York, 1996).
[Furusaki (1999)] A. Furusaki, Josephson current carried by Andreev levels in superconducting quantum point contacts, Superlattices Microst. 25, 809 (1999).
[Beenakker and van Houten (1991)] C. W. J. Beenakker and H. van Houten, Josephson current through a superconducting quantum point contact shorter than the coherence length, Phys. Rev. Lett. 66, 3056 (1991).
[Kwon et al. (2004)] H.-J. Kwon, K. Sengupta, and V. M. Yakovenko, Fractional ac Josephson effect in p- and d-wave superconductors, Eur. Phys. J. B 37, 349 (2004).
[Fu and Kane (2009)] L. Fu and C. L. Kane, Josephson current and noise at a superconductor/quantum-spin-Hall-insulator/superconductor junction, Phys. Rev. B 79, 161408(R) (2009).
[Dolcini et al. (2015)] F. Dolcini, M. Houzet, and J. S. Meyer, Topological Josephson φ_0 junctions, Phys. Rev. B 92, 035428 (2015).
[Beenakker et al. (2013)] C. W. J. Beenakker, D. I. Pikulin, T. Hyart, H. Schomerus, and J. P. Dahlhaus, Fermion-parity anomaly of the critical supercurrent in the quantum spin-Hall effect, Phys. Rev. Lett. 110, 017003 (2013).
[Supplemental Material] See Supplemental Material for details of (Sec. S1) particle-hole symmetry of the model, (Sec. S2) feasibility of the non-Hermitian Josephson junction model, (Sec. S3) complex Andreev bound states in the s-wave case, (Sec. S4) the normal-state transport, (Sec. S5) free energy and complex supercurrent, (Sec. S6) Andreev reflection coefficient, and (Sec. S7) complex Andreev bound states in the p-wave case, which includes Refs. <cit.>.
[Furusaki and Tsukada (1991)] A. Furusaki and M. Tsukada, DC Josephson effect and Andreev reflection, Solid State Commun. 78, 299 (1991).
[Beenakker (1991)] C. W. J. Beenakker, Universal limit of critical-current fluctuations in mesoscopic Josephson junctions, Phys. Rev. Lett. 67, 3836 (1991).
[Kulik and Omel'yanchuk (1975)] I. O. Kulik and A. N. Omel'yanchuk, Contribution to the microscopic theory of the Josephson effect in superconducting bridges, JETP Lett. 21, 96 (1975).
[Note 1] For the case Z ≥ 1, the condition for a bound state Re(λ) > 0 is not satisfied.
[Mortensen et al. (2000)] N. A. Mortensen, A.-P. Jauho, and K. Flensberg, Dephasing in semiconductor-superconductor structures by coupling to a voltage probe, Superlattices Microst. 28, 67 (2000).
[Jiang et al. (2009)] H. Jiang, S. Cheng, Q.-F. Sun, and X. C. Xie, Topological insulator: A new quantized spin Hall resistance robust to dephasing, Phys. Rev. Lett. 103, 036803 (2009).
[Li et al. (2019b)] C.-A. Li, J. Li, and S.-Q. Shen, Majorana-Josephson interferometer, Phys. Rev. B 99, 100504(R) (2019).
[Bouganne et al. (2020)] R. Bouganne, M. Bosch Aguilera, A. Ghermaoui, J. Beugnon, and F. Gerbier, Anomalous decay of coherence in a dissipative many-body system, Nat. Phys. 16, 21 (2020).
[Hatano and Nelson (1996)] N. Hatano and D. R. Nelson, Localization transitions in non-Hermitian quantum mechanics, Phys. Rev. Lett. 77, 570 (1996).
[Zhang et al. (2022a)] S.-B. Zhang, M. M. Denner, T. Bzdušek, M. A. Sentef, and T. Neupert, Symmetry breaking and spectral structure of the interacting Hatano-Nelson model, Phys. Rev. B 106, L121102 (2022).
[Li et al. (2021)] Q. Li, J.-J. Liu, and Y.-T. Zhang, Non-Hermitian Aharonov-Bohm effect in the quantum ring, Phys. Rev. B 103, 035415 (2021).
[Lutchyn et al. (2010)] R. M. Lutchyn, J. D. Sau, and S. Das Sarma, Majorana fermions and a topological phase transition in semiconductor-superconductor heterostructures, Phys. Rev. Lett. 105, 077001 (2010).
[Alicea (2012)] J. Alicea, New directions in the pursuit of Majorana fermions in solid state systems, Rep. Prog. Phys. 75, 076501 (2012).
[Zhang et al. (2022b)] S. Zhang, Z. Wang, D. Pan, H. Li, S. Lu, Z. Li, et al., Suppressing Andreev bound state zero bias peaks using a strongly dissipative lead, Phys. Rev. Lett. 128, 076803 (2022).
[Liu et al. (2022)] D. Liu, G. Zhang, Z. Cao, H. Zhang, and D. E. Liu, Universal conductance scaling of Andreev reflections using a dissipative probe, Phys. Rev. Lett. 128, 076802 (2022).
[Yan et al. (2023)] Q. Yan, B. Zhao, R. Zhou, R. Ma, Q. Lyu, S. Chu, X. Hu, and Q. Gong, Advances and applications on non-Hermitian topological photonics, Nanophotonics 12, 2247 (2023).
[Nazarov and Blanter (2006)] Y. V. Nazarov and Y. M. Blanter, Quantum Transport: Introduction to Nanoscience (Cambridge University Press, 2006).
[Coleman (2015)] P. Coleman, Introduction to Many-Body Physics (Cambridge University Press, Cambridge, UK, 2015).
[Roccati et al. (2022)] F. Roccati, G. M. Palma, F. Ciccarello, and F. Bagarello, Non-Hermitian physics and master equations, Open Syst. Inf. Dyn. 29, 2250004 (2022).
[Brasil et al. (2013)] C. Brasil, F. Fanchini, and R. Napolitano, A simple derivation of the Lindblad equation, Rev. Bras. Ensino Fis. 35, 1303 (2013).
[Breuer and Petruccione (2002)] H. P. Breuer and F. Petruccione, The Theory of Open Quantum Systems (Oxford University Press, 2002).

Supplemental material for “Anomalous Andreev Bound States in Non-Hermitian Josephson Junctions”

§ PARTICLE-HOLE SYMMETRY OF THE MODEL In this section, we analyze the particle-hole symmetry that is physically relevant for a non-Hermitian Josephson junction. From the field-operator perspective, particle-hole symmetry (PHS) mixes the creation operator Ψ^† and the annihilation operator Ψ as CΨ_αC^-1=U_αβΨ_β^†, CΨ_α^†C^-1=Ψ_αU_αβ^T, where U is a unitary matrix and C denotes the PHS operator.
A Hamiltonian H=Ψ_α^†ℋ_αβΨ_β is particle-hole symmetric if CHC^-1=H. This implies that the first-quantized Hamiltonian transforms under PHS as Cℋ^TC^-1=-ℋ, where T is the transpose operation. In the Hermitian case, ℋ^T=ℋ^*, whereas in the non-Hermitian case ℋ^T≠ℋ^* in general. Thus, the PHS comes in two kinds as <cit.>: Uℋ^*U^-1 =-ℋ, I; Uℋ^TU^-1 =-ℋ, II. In the presence of PHS, the excitation energies of ℋ always come in pairs. For the two types of PHS above, the corresponding energy pairs are E⟷-E^* for type I PHS and E⟷-E for type II PHS, respectively. To determine which of the two types of PHS on ℋ is physically more relevant, we further consider the constraint of PHS on the Green's function. On the one hand, the effective Hamiltonian ℋ is directly related to the retarded Green's function. Note that the poles of the retarded Green's function yield the eigenvalues of the Hamiltonian, located in the lower half of the complex plane. On the other hand, the retarded Green's function, being a physical propagator, is consistent with causality. The retarded Green's function is defined in terms of field operators as G_αβ^R(t)=-iθ(t)⟨[Ψ_α(t),Ψ_β^†(0)]⟩, where t is the time variable and θ(t) is the Heaviside step function. Applying PHS to the field operators in the Green's function leads to G_αβ^R(t) =-iθ(t)⟨[Ψ_α(t),Ψ_β^†(0)]⟩ =-iθ(t)⟨C^-1C[Ψ_α(t),Ψ_β^†(0)]⟩ =-iθ(t)⟨C[Ψ_α(t),Ψ_β^†(0)]C^-1⟩ =-U_αγ^*⟨iθ(t)[Ψ_γ^†(0),Ψ_δ(-t)]⟩U_δβ^T =-U_αγ^*G_γδ^A(-t)U_δβ^T. By Fourier transformation and relating the retarded and advanced Green's functions by [G_αβ^R(E)]^*=G_αβ^A(E), the retarded Green's function fulfills U^†[G^R(E)]^*U=-G^R(-E). This PHS constraint on the retarded Green's function is consistent with type I PHS acting on ℋ: the eigenvalues of the system (poles of the Green's function) reside on the same side of the complex plane, Im[E]≤0. In contrast, under the constraint of type II PHS acting on ℋ, the eigenvalues distribute symmetrically in the upper and lower halves of the complex plane. This symmetric eigenenergy distribution is not consistent with the eigenvalues obtained from the retarded Green's function. Thus, it is not compatible with the requirement of causality. Therefore, we argue that type I PHS acting on ℋ is physically more relevant for describing non-Hermitian systems.

§ FEASIBILITY OF THE NON-HERMITIAN JOSEPHSON JUNCTION MODEL In this section, we discuss the feasibility of the non-Hermitian Josephson junction model. The crucial part is the non-Hermitian barrier potential. It can be induced by coupling the system to the environment through a normal dissipative lead, as shown in Fig. 1(b) of the main text. We provide a general analysis for this argument below. Assume there are three components: the system of interest, the environment, and the dissipative lead. For the interesting physics at low temperature, only several energy levels in a finite energy window matter. The Hamiltonian of the system can be written in its eigenbasis as H_s=∑_n_s(E_n_s-μ_s)|n_s⟩⟨ n_s| with eigenenergies E_n_s and eigenstates |n_s⟩. Similarly, the environment is described by H_e=∑_n_e(E_n_e-μ_e)|n_e⟩⟨ n_e|, assuming a lower chemical potential μ_e<μ_s. Moreover, the dissipative current I_r=∑_E_n_s>E_n_eV_es|n_e⟩⟨ n_s| characterizes non-reciprocal quasiparticle transitions from the system to the environment, accompanied by an energy relaxation process in the environment <cit.>.
Such a nonzero transition amplitude indicates an effective loss term -iΓ (Γ>0) for the system, such that the effective Hamiltonian of the system may be written as H_s^eff=∑_n_s(E_n_s-iΓ-μ_s)|n_s⟩⟨ n_s|. To capture the essential physics in a concise way, we employ a simplified non-Hermitian barrier potential for the Josephson junction, U(x)=-iVδ(x), V>0, to mimic the -iΓ contributions to Eq. (<ref>). Alternatively, we also provide an argument based on the Lindblad master equation, which describes the dynamics of an open quantum system interacting with the environment. Explicitly, the Lindblad master equation can be written as <cit.> dρ_s/dt =-i[H_s,ρ_s]+γ∑_ℓ(L̂_ℓρ_sL̂_ℓ^†-1/2{L̂_ℓ^†L̂_ℓ,ρ_s}), where ρ_s is the density matrix of the system, L̂_ℓ the jump operator acting on the Hilbert space of H_s, and γ the coupling between the system and the environment. To derive an effective non-Hermitian Hamiltonian for the system, we can rewrite the Lindblad master equation as dρ_s/dt =-i[H_s,ρ_s]+γ∑_ℓ(L̂_ℓρ_sL̂_ℓ^†-1/2{L̂_ℓ^†L̂_ℓ,ρ_s}) =-i(H_effρ_s-ρ_sH_eff^†)+γ∑_ℓL̂_ℓρ_sL̂_ℓ^†, where H_eff=H_s-i/2γ∑_ℓL̂_ℓ^†L̂_ℓ. By dropping the quantum jump terms γ∑_ℓL̂_ℓρ_sL̂_ℓ^†, one obtains the effective non-Hermitian Hamiltonian H_eff for describing the system of interest. This assumption may result in a non-Hermitian Hamiltonian as stated in Eq. (<ref>).

§ SOLUTION OF COMPLEX ANDREEV BOUND STATES IN S-WAVE CASE In this section, we provide details for the solution of Andreev bound states (ABSs) in s-wave non-Hermitian Josephson junctions. The electron and hole excitations are described by the BdG equation ℋ_BdG(x)([ u(x); v(x) ]) =E([ u(x); v(x) ]), where ℋ_BdG(x) =([ [-ħ^2∂_x^2/2m-μ]+U(x) Δ̂(x); Δ̂^†(x) -[-ħ^2∂_x^2/2m-μ]+U(x) ]), U(x) =-iVδ(x), V>0. In the eigenstate (u(x),v(x))^T, the upper component u(x) represents electron-like and the lower component v(x) represents hole-like excitations, respectively. The pairing potential is introduced as Δ(x) = Δ, x<0, Δe^iϕ, x>0. Specifically, type I PHS acts as Uℋ_BdG^*U^-1=-ℋ_BdG, where the matrix U takes the form U=([ 0 1; -1 0 ]). In the following, we focus on the ABSs within the superconducting gap. The wave function of a bound state should decay exponentially for |x|→∞. Then, the trial wavefunction can be taken as ψ_B(x)=([ u_B(x); v_B(x) ])= A_h-e^ik_hx([ v_0; u_0 ])+A_e-e^-ik_ex([ u_0; v_0 ]), x<0; A_e+e^ik_ex([ u_0e^iϕ/2; v_0e^-iϕ/2 ])+A_h+e^-ik_hx([ v_0e^iϕ/2; u_0e^-iϕ/2 ]), x>0, where the parameters are defined as ħk_e =√(2m(μ+i√(Δ^2-E^2))), k_h=k_e^*, and u_0^2 =1/2(1+i√(Δ^2-E^2)/E), v_0^2=1/2(1-i√(Δ^2-E^2)/E). Note that u_0/v_0=E+i√(Δ^2-E^2)/Δ=e^iθ with θ≡arccos(E/Δ). The bound state has a decay length λ given by λ=ħ v_F/Δ1/√(1-E^2/Δ^2). Therefore, the necessary condition for the existence of a bound state is Re(λ)>0. By enforcing continuity of the wave function ψ_B(x=0) and the jump condition for ψ'_B(x=0), we obtain ψ_B(0+) =ψ_B(0-), ψ'_B(0+)-ψ'_B(0-) =-τ_z2mV/ħ^2ψ_B(0). Here, τ_z=diag(1,-1) is a Pauli matrix. The secular equation is written as ([ v_0 u_0 -u_0e^iϕ/2 -v_0e^iϕ/2; u_0 v_0 -v_0e^-iϕ/2 -u_0e^-iϕ/2; -z_1v_0 z_2u_0 u_0e^iϕ/2 -v_0e^iϕ/2; -z_2u_0 z_1v_0 v_0e^-iϕ/2 -u_0e^-iϕ/2 ])([ A_h-; A_e-; A_e+; A_h+ ])=0, where we have defined Z ≡mV/ħ^2k_F, z_1≡1-2Z, z_2≡1+2Z. Note that we have used the approximation k_e≈ k_h≈ k_F. The determinant of this coefficient matrix needs to be zero to have nontrivial solutions for the coefficients A_e/h±. After a cumbersome calculation, we arrive at 4[u_0^4(Z+1)^2+v_0^4(Z-1)^2-2u_0^2v_0^2(Z^2+cosϕ)] =0. Using Eq.
(<ref>), this yields the basic equation to determine ABSs as (Z^2+1)+2Zi√(Δ^2/E^2-1)=Δ^2/E^2(Z^2+cos^2ϕ/2). Let us first discuss some special limits and then present the general solutions. We mainly focus on the regime 0<Z<1. Note that there is no bound state when Z>1.
* For ϕ=2nπ, Eq. (<ref>) becomes (Z^2+1)+2Zi√(Δ^2/E^2-1) =Δ^2/E^2(Z^2+1). Assuming E≠0, we obtain +2Zi√(Δ^2/E^2-1) =(Δ^2/E^2-1)(Z^2+1). Defining x=Δ^2/E^2-1, we get +2Zi√(x)=x(Z^2+1). It gives the trivial solution E^2=Δ^2, corresponding to x=0. If x≠0, we analyze -4Z^2=x(1+Z^2)^2 with the solution E=±Δ1+Z^2/1-Z^2. This solution does not fulfill the condition of a bound state with Re(λ)>0.
* When ϕ=(2n+1)π, we have (Z^2+1)+2Zi√(Δ^2/E^2-1) =Δ^2/E^2Z^2. It yields E^2=Δ^2Z^2/Z^2-1. For 0<Z<1, the energy is purely imaginary E=ZΔ/± i√(1-Z^2). If we substitute this result back into the original equation, we obtain E=ZΔ/+i√(1-Z^2) by fixing the sign convention √(-1)=i. Therefore, only one branch of the imaginary energy is left.
* We now analyze the general solution for arbitrary ϕ. From the above discussion, we find that the energy E cannot vanish. Thus we define Δ^2/E^2=1/y. Then, we obtain (Z^2+1)y+2Zi√(y(1-y))=Z^2+cos^2(ϕ/2). Defining A=(Z^2+1), B=Z^2+cos^2(ϕ/2), the equation can be simplified as (A^2-4Z^2)y^2+(4Z^2-2AB)y+B^2 =0. Solving this equation leads to y =Z^2[Z^2-sin^2(ϕ/2)]+cos^2(ϕ/2)±2Zsgn(cos(ϕ/2))cos(ϕ/2)√(Z^2-sin^2(ϕ/2))/(1-Z^2)^2 =(sgn(cos(ϕ/2))cos[ϕ/2]±Z√(Z^2-sin^2(ϕ/2))/1-Z^2)^2. This expression can be rewritten as E/Δ=±sgn(cos(ϕ/2))cos[ϕ/2]± Z√(Z^2-sin^2(ϕ/2))/1-Z^2. At Z=0, it reduces to the Kulik-Omel'yanchuk (KO) limit E_B^±(ϕ)=±Δcos(ϕ/2) <cit.>.
Considering further the necessary condition for a bound state Re(λ)>0, this indicates sin^2(ϕ/2) >2Z^2/(1+Z^2). From this condition, we determine the Josephson gap edge at ϕ_0(Z)=2nπ±2arcsin(√(2)Z/√(1+Z^2)), n∈ℤ. Therefore, the spectrum of Andreev bound states is given by E_B^±(ϕ)/Δ=±ζcos[ϕ/2]-iZ√(sin^2(ϕ/2)-Z^2)/1-Z^2, ϕ∈[ϕ_b,ϕ_t], where the bottom phase edge is ϕ_b≡2nπ+ϕ_0(Z), the top phase edge is ϕ_t≡(2n+1)π-ϕ_0(Z), and ζ=sgn(cos(ϕ/2)). If we look at ϕ∼0 with small Z, ϕ_0(Z)=2√(2)Z. Then, the energy can be approximated as E_B^±(ϕ_0)≈Δ(-iZ^2).

§ NORMAL STATES TRANSPORT In this section, we calculate the normal-state transport in the presence of a non-Hermitian barrier potential. The normal states can be described by the Hamiltonian H=ħ^2k^2/2m-iVδ(x), V>0. Then the scattering state can be expressed as ψ(x<0) =e^ikx+re^-ikx, ψ(x>0) =te^ikx, where r and t are reflection and transmission amplitudes. Considering the continuity of the wave functions and their derivatives, we obtain ψ(x=0^-) =ψ(x=0^+), ψ'(x=0^+) -ψ'(x=0^-)=-i2mV/ħ^2ψ(0). This leads to 1+r =t, t-1+r=-2Zt, where we have defined Z=mV/ħ^2k_F. Thus the transmission and reflection amplitudes are t =1/1+Z, r=-Z/1+Z. The corresponding transmission and reflection “probabilities” are T =1/(1+Z)^2, R=Z^2/(1+Z)^2. Note that T+R=1+Z^2/(1+Z)^2<1 when Z>0, which corresponds to the loss of quasiparticles to the environment due to the non-Hermitian barrier. In contrast, in the gain case, T+R≥1 in general.

§ FREE ENERGY AND COMPLEX SUPERCURRENT In this section, we discuss the supercurrent carried by complex ABSs. We obtain the supercurrent directly by I_s(ϕ)=2e/ħdℱ(ϕ)/dϕ, where ℱ(ϕ) is the free energy of the system <cit.>. Consider a general system of independent particles with many energy levels; each energy level can be regarded as a microcanonical ensemble <cit.>.
In our case the relevant energy levels are the ABSs in the gap. The effective Hamiltonian can be regarded as a summation of independent Hamiltonians, H-μ N=∑_j(E_j-μ)n̂_j, where n̂_j is the occupation number at level E_j. The partition function is then a product of the individual partition functions, 𝒵=Tr[Π_j⊗e^-β(E_j-μ)n̂_j]. The trace of a tensor product of matrices is equal to the product of their individual traces, thus the partition function is 𝒵 =Π_jTr[e^-β(E_j-μ)n̂_j]=Π_j𝒵_j, with 𝒵_j=1+e^-β(E_j-μ) for fermions. Then the corresponding free energy is given by ℱ =-ln𝒵/β=-∑_jln(1+e^-β(E_j-μ))/β. Following the above equation, the supercurrent can be written as I_s(ϕ) =2e/ħdℱ/dϕ=2e/ħ∑_j=±dE_j/dϕf(E_j), where f(E_j) is the Fermi-Dirac function f(E_j)=1/1+e^β(E_j-μ). Note that we have two relevant energy levels E_B^±(ϕ). We parameterize E_B^±(ϕ)=± a(ϕ)+ib(ϕ) with a(ϕ)>0. Then, in the zero-temperature limit, we obtain I_s(ϕ) =-2e/ħda(ϕ)/dϕ+i2e/ħdb(ϕ)/dϕ =-2e/ħdRe[E_B^+(ϕ)]/dϕ+i2e/ħdIm[E_B^+(ϕ)]/dϕ. Substituting the spectrum of ABSs into this formula, we derive I_s(ϕ)=Δ e/ħ[ζsin(ϕ/2)/1-Z^2-iZsin(ϕ)/2(1-Z^2)√(sin^2(ϕ/2)-Z^2)], ϕ∈[ϕ_b,ϕ_t]. There is no supercurrent in the Josephson gap.

§ ANDREEV REFLECTION COEFFICIENT In this section, we obtain the specific form of the Andreev reflection coefficient. To this end, we take the trial wave function as ψ(x)=([ u(x); v(x) ])= e^ik_ex([ u_0; v_0 ])+A_h-e^ik_hx([ v_0; u_0 ])+A_e-e^-ik_ex([ u_0; v_0 ]), x<0; A_e+e^ik_ex([ u_0e^iϕ/2; v_0e^-iϕ/2 ])+A_h+e^-ik_hx([ v_0e^iϕ/2; u_0e^-iϕ/2 ]), x>0. The coefficients are determined by the boundary conditions: ψ(0+) =ψ(0-), ψ'(0+)-ψ'(0-)=-2imV/ħ^2τ_zψ(0). These boundary conditions can be rewritten as ([ v_0 u_0 -u_0e^iϕ/2 -v_0e^iϕ/2; u_0 v_0 -v_0e^-iϕ/2 -u_0e^-iϕ/2; -z_1v_0 z_2u_0 u_0e^iϕ/2 -v_0e^iϕ/2; -z_2u_0 z_1v_0 v_0e^-iϕ/2 -u_0e^-iϕ/2 ])([ A_h-; A_e-; A_e+; A_h+ ]) =([ -u_0; -v_0; -2Zu_0; 2Zv_0 ]). The Andreev reflection amplitude is then obtained as A_h-= u_0v_0[u_0^2e^-iϕ-v_0^2e^iϕ+(2Z(u_0^2-v_0^2)-(u_0^2+v_0^2))]/2[u_0^4(Z+1)^2+v_0^4(Z-1)^2-2u_0^2v_0^2(Z^2+cosϕ)]. After a simplification, we arrive at A_h- =Δ/E[cos^2ϕ/2+√(Δ^2-E^2)/E(sinϕ/2cosϕ/2-iZ)]/(Z^2+1)E^2+2iZE√(Δ^2-E^2)-Δ^2[Z^2+cos^2ϕ/2]. The poles of A_h- yield the spectrum of ABSs, consistent with Eq. (<ref>).

§ SOLUTION OF COMPLEX ANDREEV BOUND STATES IN P-WAVE CASE In this section, we present the solution of ABSs in the p-wave case. The BdG equation reads ([ [-ħ^2∂_x^2/2m-μ]+U(x) Δk̂_x/k_F; [Δk̂_x/k_F]^† -[-ħ^2∂_x^2/2m-μ]+U(x) ])([ u(x); v(x) ])=E([ u(x); v(x) ]). Following a similar procedure as in the s-wave case, we find the secular equation (Z^2+1)+2iZ√(Δ^2/E^2-1) =Δ^2/E^2cos^2(ϕ/2). Let us first present results for special limits and afterwards the general solutions.
* Josephson phase ϕ=2nπ. The above equation becomes (Z^2+1)+2Zi√(Δ^2/E^2-1) =Δ^2/E^2. Assume E≠0 to get +2Zi√(Δ^2/E^2-1) =[Δ^2/E^2-(Z^2+1)]. Defining x=Δ^2/E^2-1 yields -2Zi√(x)=(x-Z^2). The final result is E=±Δ/√(1-Z^2), which does not fulfill the bound state condition Re(λ)>0.
* At Josephson phase ϕ=(2n+1)π, the above equation leads to (Z^2+1)+2Zi√(Δ^2/E^2-1) =0. When 0<Z<1, the energy is purely imaginary with E=Δ±2iZ/1-Z^2. Then, if we substitute this solution back into the original equation, we find E=Δ-2iZ/1-Z^2 by fixing the sign convention √(-1)=i.
* General Josephson phase ϕ. We notice that the energy E cannot reach zero. Thus we define Δ^2/E^2=1/y, and the equation is simplified to (Z^2+1)y+2Zi√(y(1-y))=cos^2(ϕ/2).
Define A=(Z^2+1), B=cos^2(ϕ/2) to obtain (A^2-4Z^2)y^2+(4Z^2-2AB)y+B^2 =0. Thus, the general solution reads y =-2[Z^2sin^2(ϕ/2)+(Z^2-cos^2(ϕ/2))]±√(16Z^2sin^2(ϕ/2)[Z^2-cos^2(ϕ/2)])/2(A^2-4Z^2) =-(Zsgn(sin(ϕ/2))sin(ϕ/2)±√(Z^2-cos^2(ϕ/2))/1-Z^2)^2. Finally, we arrive at the result E/Δ=± iZsgn(sin(ϕ/2))sin(ϕ/2)±√(Z^2-cos^2(ϕ/2))/1-Z^2. At Z=0, we recover the well-known p-wave junction limit with E=±Δcos(ϕ/2). To determine the ABSs, we consider the necessary condition for a bound state Re(λ)>0, which indicates sin^2(ϕ/2) >Z^2(1-Z^2)/(1+Z^2). From this condition, we find the Josephson gap edges at ϕ_0^p(Z)=2nπ±2arcsin(Z√((1-Z^2)/(1+Z^2))), n∈ℤ. Therefore, the spectrum of ABSs is given by E_B^+(ϕ)/Δ=√(cos^2(ϕ/2)-Z^2)-iZsgn(sin(ϕ/2))sin(ϕ/2)/1-Z^2, ϕ∈[ϕ_b^p,ϕ_t^p], and E_B^-(ϕ)=-[E_B^+(ϕ)]^* from type I PHS. The bottom phase edge is ϕ_b^p≡2nπ+ϕ_0^p(Z) and the top phase edge is ϕ_t^p≡(2n+1)π-ϕ_0^p(Z). If we focus around ϕ∼0 with small Z, we obtain the simple relation ϕ_0^p(Z)=2Z. In this regime, the energy can be approximated as E_B^±(ϕ_0^p)≈Δ(-iZ^2). Interestingly, this value is exactly the same as in the s-wave case. The Josephson edge ϕ_0^p(Z) has a maximum at Z^2 =(1-Z^2)/(1+Z^2), i.e., Z^2=√(2)-1. Thus the maximum Josephson gap value is 4arcsin(√(2)-1).
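For a quick numerical check of the closed-form results above, the following short Python sketch evaluates the s-wave complex ABS spectrum and the Josephson gap edges for both pairing symmetries; the function names, the choice Δ=1, and the demonstration values of Z and ϕ are ours and purely illustrative.

import numpy as np

def swave_abs(phi, Z, Delta=1.0):
    # E_B^+/-(phi)/Delta = [+/- zeta*cos(phi/2) - i*Z*sqrt(sin^2(phi/2) - Z^2)] / (1 - Z^2),
    # valid only inside the window sin^2(phi/2) > 2*Z^2/(1 + Z^2); elsewhere (the Josephson
    # gap) no bound state exists and NaN is returned.
    phi = np.atleast_1d(np.asarray(phi, dtype=float))
    s2, c = np.sin(phi / 2) ** 2, np.cos(phi / 2)
    inside = s2 > 2 * Z**2 / (1 + Z**2)
    root = np.sqrt(np.where(inside, s2 - Z**2, np.nan))
    e_plus = Delta * (np.sign(c) * c - 1j * Z * root) / (1 - Z**2)
    return e_plus, -np.conj(e_plus)   # partner branch from type I PHS, E -> -E*

def gap_edges(Z):
    # phi_0(Z) for the s-wave junction and phi_0^p(Z) for the p-wave junction
    return (2 * np.arcsin(np.sqrt(2) * Z / np.sqrt(1 + Z**2)),
            2 * np.arcsin(Z * np.sqrt((1 - Z**2) / (1 + Z**2))))

Z = 0.3
phi = np.linspace(0.0, 2 * np.pi, 9)
print("gap edges (s-wave, p-wave):", gap_edges(Z))
print("E_B^+(phi)/Delta:", np.round(swave_abs(phi, Z)[0], 3))

In particular, at ϕ=π this reproduces the purely imaginary value E=-iZΔ/√(1-Z^2) quoted above.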
http://arxiv.org/abs/2307.05009v1
20230711043639
Enhancement of Superconductivity in the Fibonacci Chain
[ "Meng Sun", "Tilen Čadež", "Igor Yurkevich", "Alexei Andreanov" ]
cond-mat.supr-con
[ "cond-mat.supr-con" ]
Faculty of Science, Beijing University of Technology, Beijing, China, 100124 [email protected] School of Computer Science and Digital Technologies, Aston University, B4 7ET Birmingham, United Kingdom [email protected] We study the interplay between quasi-periodic disorder and superconductivity in a 1D tight-binding model with a quasi-periodic modulation of the on-site energies that follows the Fibonacci rule and for which all the eigenstates are multifractal. As a signature of multifractality, we observe the power-law dependence of the correlation between different single-particle eigenstates as a function of their energy difference. By computing numerically the superconducting transition temperature, we find the distribution of critical temperatures, analyze their statistics, and estimate the mean value and variance of critical temperatures for various regimes of the attractive coupling strength and quasi-periodic disorder. We find an enhancement of the critical temperature as compared to the analytical results that are based on strong assumptions of absence of correlations and self-averaging of multiple characteristics of the system, which are not justified for the Fibonacci chain. For the very weak coupling regime, we observe a crossover where the self-averaging of the critical temperature breaks down completely and strong sample-to-sample fluctuations emerge. Enhancement of Superconductivity in the Fibonacci Chain Alexei Andreanov August 12, 2023 ======================================================= § INTRODUCTION Fractals are intricate geometric objects that are self-similar across different scales <cit.>. The concept of fractality has revolutionized the development of novel materials and devices, offering unique properties and applications. Materials featuring fractal structures showcase exceptional characteristics typically not observed in non-fractal counterparts. One remarkable example is the recent advancement in fractal graphene-based materials <cit.>. These materials display remarkable mechanical strength, electrical conductivity, and thermal stability, making them highly suitable for a diverse range of applications, including energy storage and sensing. Fractal structures have also proven to be efficient in photovoltaic devices <cit.>, since they enhance light absorption and significantly improve the efficiency of solar cells. For instance, the utilization of fractal-shaped nanowires in solar cells has led to heightened light trapping and absorption compared to conventional designs <cit.>. From a theoretical perspective, there have been notable efforts to explore the conditions under which fractal geometry can enhance a property critical for applications, such as superconductivity. Following the development of the microscopic theory of superconductivity by Bardeen, Cooper, and Schrieffer (BCS) <cit.>, the influence of disorder on superconductivity garnered considerable attention <cit.>. Early studies <cit.> suggested that a superconducting phase could emerge when the Fermi energy (E_F) resides in the region of the Anderson mobility edge due to strong correlations between fractal wavefunctions. Subsequent research predicted an increase in critical temperature even in quasi-one-dimensional (1D) wires <cit.>, quasi-2D materials <cit.>, and weakly disordered two-dimensional (2D) systems <cit.>. While many studies (see Ref.
<cit.> and references therein) focused on the situation in which the transition of a clean system, described by the standard BCS-type mean-field theory, is modified by disorder inducing significant overlap between multifractal wavefunctions with different eigenenergies, there are also quasiperiodic materials that possess these features intrinsically without extrinsic disorder. Quasiperiodic systems, readily realized experimentally in various structures like artificial atomic chains and quasi-2D semiconducting heterostructures <cit.>, serve as examples. The Fibonacci chain <cit.>, a one-dimensional quasiperiodic structure closely related to three-dimensional icosahedral quasicrystals <cit.>, offers an intriguing realm for superconductivity studies. The energy spectrum in this system exhibits a Cantor set-type fractal structure <cit.>, and the multifractal eigenfunctions demonstrate long-range power-law spatial and temporal correlations. It is therefore a natural testbed for the effect of fractality on the superconducting properties. Phenomenological arguments were put forward <cit.> suggesting multifractal correlations of wavefunctions enhance superconductivity. This was later tested close to the Anderson transition within a mean-field approximation <cit.>. The difficulty is that now one has to solve a disordered gap equation, without the simplifications brought in by translation invariance. A common approach is to average the gap equation and ignore the correlations <cit.>. As we demonstrate in this work, neglecting the correlations removes an important enhancement of the critical temperature. The outline of the paper is as follows. We define the model and study its spectral correlation function in Sec. <ref>. Then the mean-field approximation to superconductivity in the model and the behavior of the average critical temperature are studied in Sec. <ref>. The breakdown of self-averaging of the critical temperature and the crossover in the coupling strength are discussed in Sec. <ref>. This is followed by Conclusions.

§ MODEL & SPECTRAL CORRELATION FUNCTION We consider the 1D Fibonacci chain, which serves as a fundamental model representing quasicrystals. This chain exhibits several noteworthy properties, as outlined in a recent study by Jagannathan et al. <cit.>: (i) deterministic construction - the Fibonacci chain is constructed following a well-defined deterministic algorithm; (ii) finite number of possible configurations - despite its complexity, the Fibonacci chain possesses a finite number of possible configurations, allowing detailed analysis; (iii) multifractal eigenstates - one of the remarkable features of the Fibonacci chain is that its eigenstates exhibit multifractal behavior for all values of the on-site potential h, leading to intricate patterns with varying degrees of complexity and self-similarity. Here we focus on a chain model with on-site energies arranged according to the Fibonacci rule. The tight-binding Hamiltonian, Ĥ_F = -∑_i ( ĉ_i^†ĉ_i+1 + ĉ^†_i+1ĉ_i + h_i ĉ_i^†ĉ_i) , describes particles hopping between lattice sites with a dimensionless (measured in units of the hopping amplitude) on-site potential h_i. The potential takes two values ± h which are arranged according to the Fibonacci sequence rule σ: A → AB , B → A. The n-th Fibonacci word W_n is the concatenation of the two previous ones, W_n = [ W_n-1, W_n-2].
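As a concrete illustration of this rule, a minimal Python sketch that generates the Fibonacci words (the function name and the convention W_0 = B, W_1 = A are our own choices; other conventions differ only by an offset) is:

def fibonacci_word(n):
    # n-th Fibonacci word from W_n = [W_{n-1}, W_{n-2}], equivalent to iterating
    # the substitution rule sigma: A -> AB, B -> A; convention: W_0 = "B", W_1 = "A"
    w_prev, w = "B", "A"
    for _ in range(n - 1):
        w, w_prev = w + w_prev, w
    return w

# e.g. fibonacci_word(5) == "ABAABABA"; the word lengths are Fibonacci numbers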
To construct the Fibonacci-type potential for a system of size L, we first write down a long enough Fibonacci sequence, then cut a segment containing L consecutive letters, and make the substitution A → h and B → -h. The number of different segments is N=L/2 for even L and N=(L-1)/2 for odd L <cit.>. In this way, we generate an ensemble of N different realizations of on-site energy arrangements, each being a subset of the Fibonacci sequence. Some properties of the eigenstates of the Fibonacci chain have been studied recently <cit.>. For example, a perturbative renormalization group analysis was used to analytically determine fractal dimensions for the off-diagonal Fibonacci chain <cit.> in the weak potential strength limit (h ≪ 1). For the superconducting transition, the most important property of multifractal systems <cit.> is the overlap of different eigenstates, described by the correlation of two single-particle wavefunctions <cit.>, C( ω) = L^d ∑_𝐫,n,m⟨ψ_n(𝐫)^2 ψ_m(𝐫)^2 δ( ϵ_m - ϵ_n -ω)⟩ , where L^d is the system volume, and ψ_n (𝐫) and ϵ_n are the eigenstate and eigenenergy of the Hamiltonian (<ref>), respectively. This function demonstrates power-law decay at the Anderson transition <cit.>, C(ω) = ( E_0/ω)^γ , in some frequency domain δ_L < ω <E_0, where δ_L is the mean level spacing and E_0 is the energy scale related to the fractal length. The power-law exponent γ is connected to the multifractal dimension by a simple relation <cit.>: γ = 1 - d_2/d. We confirm the power-law decay of the correlation in the Fibonacci chain, see Fig. <ref>, for different disorder strengths, by numerical diagonalization of the Hamiltonian (<ref>) and averaging over different realizations, i.e. different slices of length L cut from the n-th Fibonacci word. Using the numerically computed correlator (<ref>), we further estimate the upper energy scale E_0 and the exponent γ from the power-law fits shown in Fig. <ref>. To the best of our knowledge, this correlation function has not been studied yet for the Fibonacci chain.

§ MEAN-FIELD SUPERCONDUCTIVITY The spinful fermions on a tight-binding chain with local attraction are described by the negative-U Hubbard Hamiltonian, Ĥ = ∑_σĤ_F,σ +U ∑_i=1^L n̂_i↑n̂_i↓ where the single-particle part Ĥ_F,σ is given by Eq. (<ref>) for each of the spin components σ = ↑, ↓. The second term, with n̂_iσ=ĉ_iσ^†ĉ_iσ being the occupation number operator of electrons with spin σ on the i-th site, is the attractive Hubbard interaction with dimensional coupling constant U. To investigate the superconducting properties we write the Hamiltonian in the single-particle eigenbasis of Ĥ_F,σ, ĉ_iσ=∑_n ψ_n(i) ĉ_nσ , following <cit.>, and keep only the terms most relevant for the superconductivity Ĥ = ∑_nσϵ_n ĉ^†_jσĉ_nσ + U ∑_nm M_nmĉ^†_n↑ĉ_n↓^†ĉ_m↑ĉ_m↓ M_nm = ∑_iψ_n(i)^2 ψ_m(i)^2, where ϵ_n is the single-particle energy of state n, and σ = {↓, ↑} is the spin label. The mean-field approach <cit.> leads to the gap equation Δ_n = |U|/2∑_m M_nmΔ_m/ε_mtanh( ϵ_m/2T), where ε_m=√(ϵ_m^2 + Δ_m^2), and the gap function is defined as an anomalous Green's function, Δ_n=⟨ĉ_n↑ĉ_n↓⟩. The transition is signalled by the appearance of a non-zero Δ_n.
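To make the single-particle side of this construction concrete, a minimal Python sketch (assuming the fibonacci_word helper above; the parent-word length, the open boundary conditions, and the parameter values are illustrative choices) that builds one realization of the chain, diagonalizes it, and forms the overlap kernel M_nm entering the gap equation is:

import numpy as np

def fibonacci_chain(L, h, offset=0):
    # open chain with hopping -1 and on-site energies -h_i, where h_i = +h (A) or -h (B)
    # follows a length-L slice of the Fibonacci word (sign convention of H_F above)
    seg = fibonacci_word(20)[offset:offset + L]
    onsite = np.array([h if s == "A" else -h for s in seg])
    H = -np.diag(onsite)
    H -= np.diag(np.ones(L - 1), 1) + np.diag(np.ones(L - 1), -1)
    return H

def overlap_kernel(H):
    # eigenvalues eps_n and M_nm = sum_i psi_n(i)^2 psi_m(i)^2
    eps, psi = np.linalg.eigh(H)          # columns of psi are the eigenvectors
    return eps, (psi**2).T @ (psi**2)

eps, M = overlap_kernel(fibonacci_chain(L=400, h=0.3))
# histogramming psi_n^2 psi_m^2 over pairs (n, m) at fixed energy difference gives C(omega)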
The routine approach to analysing the transition is based on a few assumptions <cit.>:
* the density of states and the wavefunctions are uncorrelated;
* the density of states ν_0 is self-averaging and energy-independent in the window of the Debye frequency ϵ_D around the Fermi energy;
* all gap functions Δ_n are self-averaging, and, finally,
* there is no correlation between the wavefunction overlap integral M_nm and the gaps Δ_n.
Only under all of the above conditions do the gap equations acquire the following form in the continuous limit after averaging over the realisations: Δ(ϵ) = λ/2∫^ϵ_D_-ϵ_Dd ϵ'/ε(ϵ') C(ϵ-ϵ') tanh( ε (ϵ')/2T) Δ(ϵ') . Here another dimensionless coupling constant is introduced, λ=ν_0 |U|. Further assuming that all the gaps Δ_n vanish at the transition, i.e. in the continuous limit Δ(ϵ)=0, and that the Debye frequency is much larger than the fractal scale E_0, leads to the following equation for the critical temperature, 1 = λ∫_0^ϵ_DC(ϵ)/ϵtanh( ϵ/2T_c^A) dϵ , which admits the solution, T_c^A = ϵ_D 𝒟(γ) [ 1+γ/λ( ϵ_D/E_0)^γ]^-1/γ , with 𝒟(γ) = [ 2γ(2^γ+1-1) Γ(-γ) ζ(-γ)]^1/γ , and ζ(x) is the Riemann ζ function <cit.>. We extract the values of E_0 and γ from the correlation function (<ref>), which takes the power-law scaling form (<ref>), as we have verified in Sec. <ref>. With these averaged parameters, we can evaluate the critical temperature by Eq. (<ref>). We show in Fig. <ref> by the solid line the critical temperature computed via Eq. (<ref>) as a function of the coupling strength for the Fibonacci chain with different disorder strengths. However, this approach is based on at least four assumptions outlined above which are hard to justify. Instead, we compute the critical temperature numerically without a priori assumptions on the statistics and correlations between the various entries present in Eq. (<ref>). We solve the gap equation (<ref>) in the limit of vanishing gaps Δ_n, Δ_n = λ/2ν_0∑_m^ϵ_m < ϵ_DM_nm/ϵ_mtanh( ϵ_m/2T_c) Δ_m, to find the critical temperature T_c numerically for every realization of the Fibonacci potential, and then analyze the statistics of the ensemble of critical temperatures: their distribution function, mean value and variance. The results for the average critical temperature of the Fibonacci chain, computed along the above lines, are presented in Fig. <ref> for several system sizes L, disorder strengths h and couplings λ. For convenience of presentation, the points are manually shifted horizontally for fixed couplings λ. The vertical bars show the standard deviation of the critical temperature. For convenience we only show the error bars for the case h=0.30 – the error bars for other disorder strengths show similar behavior. Finally, we estimate the critical temperature in the thermodynamic limit by finite-size extrapolation. The results are labelled with black markers. We observe that over a wide range of couplings the average critical temperature is self-averaging with small variance. The variance increases significantly as the coupling strength is decreased, as seen in the bottom plot of Fig. <ref>. This suggests the existence of a crossover coupling strength below which the critical temperature starts to lose its self-averaging property and sample-to-sample fluctuations become important. Detailed discussion of this crossover and its properties is provided in the next section. The main result shown in Fig.
<ref>, is the clear discrepancy between the two procedures: assuming self-averaging properties and absence of correlations followed by an analytic solution of Eq. (<ref>), versus a straightforward numerical analysis of the random critical temperatures found from the exact Eq. (<ref>) with no assumptions at all. That is, although the critical temperature self-averages, this self-averaging value is different from the solution of Eq. (<ref>). In most regions of the coupling strength, we find an enhancement of the critical temperature compared to the analytical formula, Eq. (<ref>). By denoting the average critical temperature following from Eq. (<ref>) as T_c^N, we calculate the enhancement ratio R = T_c^N /T_c^A as shown in Fig. <ref>. For convenience we connected by lines the ratios for the coupling strengths above the self-averaging crossover, λ≥λ̃. As one can see, the enhancement ratio is suppressed by increasing the coupling strength, and both results, Eq. (<ref>) and Eq. (<ref>), converge to the mean-field theory. This behavior can be explained by the competition between the coupling λ and the disorder h. When λ > h, the coupling strength is dominant and the formation of the Cooper pairs is local, not affected by the realisation of the disorder potential. On the other hand, when λ < h, the potential takes the main role in defining which state and its time-reversal partner are to be coupled. This results in further enhancement of the critical temperature due to the multifractality of the wavefunction and larger variance due to the sensitivity to the disorder realization.

§ BREAKDOWN OF SELF-AVERAGING AND CROSSOVER IN THE COUPLING STRENGTH We have seen in Fig. <ref> that the variance of the average critical temperature increases significantly for small enough couplings λ. In this section, we discuss the breakdown of the self-averaging of the critical temperature and quantify the crossover coupling strength λ̃. In order to define the crossover coupling strength λ̃, we use the equation (<ref>) from which one extracts the critical temperature, λ W(T) Δ = Δ . It is an eigenproblem equation for the matrix W with the following matrix elements W_nk(T) = M_nk/2ν_0ϵ_k tanh( ϵ_k/2T) . Note that W depends explicitly on the disorder realization through the eigenvalues ϵ_k and eigenstates of the Fibonacci chain appearing in M (<ref>). It directly follows from the above equations that for a given realization of the Fibonacci potential, the superconducting instability at some finite T exists only if the largest eigenvalue Λ(T=0) of W(T=0) is greater than 1/λ, or equivalently λ≥ 1/Λ(T=0). Based on this and the finite number of realizations of the Fibonacci potential for a given system size L, we define λ̃^-1 = min_{h_i}Λ(T=0) , where the min is taken over the realizations of the Fibonacci potential. The coupling λ̃ corresponds to the appearance of the first disorder realization without a superconducting phase. We now demonstrate that λ̃ provides a proper definition of the crossover coupling, below which the self-averaging property of T_c is lost. The naive argument is as follows: for λ < λ̃ more and more disorder realizations stop having a superconducting phase, therefore increasing the sample-to-sample fluctuations, and making the average less well defined. In Fig. <ref>, we show the probability density distributions (PDF) of critical temperatures for several values of λ with λ̃≈ 0.16. We observe that as the coupling strength is decreased the average T_c becomes less representative.
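For concreteness, the per-realization critical temperature and the crossover coupling λ̃ can be extracted numerically along the following lines. This Python sketch builds on the eps and M arrays from the previous snippet; the density of states ν_0, the Debye window ϵ_D, the placement of the Fermi level at zero energy, and the bisection bracket are illustrative assumptions rather than the actual choices of the computation.

import numpy as np

def largest_eigenvalue_W(eps, M, T, nu0, eps_D):
    # largest eigenvalue Lambda(T) of W_nk(T) = M_nk tanh(eps_k/2T) / (2 nu0 eps_k),
    # restricted to levels inside the Debye window |eps_k| < eps_D
    keep = np.abs(eps) < eps_D
    e, m = eps[keep], M[np.ix_(keep, keep)]
    with np.errstate(divide="ignore", invalid="ignore"):
        d = np.tanh(e / (2 * T)) / (2 * nu0 * e)
    d[e == 0.0] = 1.0 / (4 * nu0 * T)        # tanh(x)/x limit for a level exactly at zero
    w_sym = np.sqrt(d)[:, None] * m * np.sqrt(d)[None, :]   # similar to W, same spectrum
    return np.linalg.eigvalsh(w_sym).max()

def critical_temperature(eps, M, lam, nu0, eps_D, T_lo=1e-6, T_hi=1.0, tol=1e-8):
    # solve lam * Lambda(T_c) = 1 by bisection; Lambda(T) decreases monotonically with T
    if lam * largest_eigenvalue_W(eps, M, T_lo, nu0, eps_D) < 1.0:
        return 0.0                           # no superconducting instability
    while T_hi - T_lo > tol:
        T_mid = 0.5 * (T_lo + T_hi)
        if lam * largest_eigenvalue_W(eps, M, T_mid, nu0, eps_D) > 1.0:
            T_lo = T_mid
        else:
            T_hi = T_mid
    return 0.5 * (T_lo + T_hi)

# crossover: 1/lambda_tilde = min over realizations of Lambda(T -> 0), e.g.
# lam_tilde = 1.0 / min(largest_eigenvalue_W(e, m, 1e-6, nu0, eps_D) for e, m in realizations)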
For λ > λ̃, the PDF has a bell shape and can be reasonably well approximated by a Gaussian, for instance for λ = 0.25. Closer to the crossover value λ̃, the distributions (green and yellow) spread out, and several close peaks appear in the PDF. For λ < λ̃, the distribution continues to spread, acquires a visible tail for smaller T_c, and the trivial case T_c = 0 starts to accumulate (blue). As a consequence, the standard deviation of the critical temperature increases significantly. To further investigate the crossover coupling strength λ̃ and the breakdown of self-averaging of the critical temperatures, we study the following metric of self-averaging: α = ⟨ T_c^2 ⟩/⟨ T_c ⟩^2-1, which quantifies the fluctuations around the average compared to the average itself: it is zero for a perfectly self-averaging quantity (with a δ-function distribution). Values of order 1 indicate that fluctuations around the average become comparable to the average itself, and the self-averaging property is lost. In Fig. <ref>, we show the values of α computed for the Fibonacci chain for different disorder strengths h at system size L=4000. The vertical dashed lines indicate the position of the crossover λ̃ for several disorder strengths. The solid lines connect the points for couplings above the crossover λ̃. We observe from Fig. <ref> that the two definitions of the crossover from self-averaging to non-self-averaging behavior, based on α and λ̃, agree well. The jump of the self-averaging parameter α between the left and right sides of λ̃, from values of order 10^-2 to values of order 1, occurs precisely around the coupling λ̃. Lastly we extract the thermodynamic limit of λ̃ by extrapolation as shown in Fig. <ref>: the extrapolation suggests finite values of λ̃ in the thermodynamic limit. The crossover λ̃ also decreases with decreasing disorder strength. This behavior can be anticipated as follows: for h=0, the system is described by the BCS theory and T_c ∼exp(-1/λ). For a finite disorder, the non-zero values of λ̃ indicate a transition from the superconducting phase to the insulator phase. The observed dependence of λ̃ on the disorder strength naturally connects the two limits.

§ CONCLUSIONS In this work, we considered an open 1D chain with the Fibonacci potential h and calculated the correlation of two single-particle wavefunctions for different disorder strengths h. We found a power-law behavior of the correlation function, which reflects the multifractal character of the eigenstates of the Fibonacci chain. Using the single-particle eigenstates, we applied mean-field theory to compute the critical temperature of the superconducting transition following two different procedures:
* averaging Eq. (<ref>) first, then solving it for T_c, e.g. by averaging the spatial correlation function C(ω) and estimating the multifractal-related parameters γ and E_0, we analytically calculated the critical temperature via Eq. (<ref>) assuming self-averaging of all characteristic variables (as explained in the above text);
* first solving Eq. (<ref>), then averaging, e.g. by solving Eq. (<ref>) numerically for the critical temperature for a fixed realization of the Fibonacci potential, and analyzing the statistics – PDF, mean value and variance – of the ensemble of critical temperatures.
We found a clear discrepancy between the results obtained with these two methods, which we attribute to neglecting correlations present in Eq. (<ref>) between T_c and the single-particle eigenfunctions in the kernel M, and eigenvalues ϵ_m.
Our exact numerical approach clearly demonstrates the enhancement of the critical temperature in comparison to other approaches that rely on neglecting the correlations in the equation (<ref>) for the critical temperature. We observe that for strong enough couplings the critical temperature is self-averaging; however, this breaks down for weaker couplings. We introduced the quantity λ̃ to quantify the breakdown of the self-averaging property of the critical temperature. When λ > λ̃, the self-averaging is well preserved and the distribution of the critical temperature can be approximated by a Gaussian. On the other hand, when λ ≤ λ̃, the standard deviation grows significantly, indicating that the solution for the critical temperature becomes extremely sensitive to the disorder realization. TČ, AA acknowledge the financial support from the Institute for Basic Science (IBS) in the Republic of Korea through the project IBS-R024-D1. IVY gratefully acknowledges support from the Leverhulme Trust under the grant RPG-2019-317. While preparing this work we became aware of a closely related work, Ref. <cit.>.
http://arxiv.org/abs/2307.04308v1
20230710022738
CT-BERT: Learning Better Tabular Representations Through Cross-Table Pre-training
[ "Chao Ye", "Guoshan Lu", "Haobo Wang", "Liyao Li", "Sai Wu", "Gang Chen", "Junbo Zhao" ]
cs.LG
[ "cs.LG" ]
[1] Zhejiang University Hangzhou China [email protected] Chao Ye and Guoshan Lu are co-first authors of the article. Zhejiang University Hangzhou China [email protected] Zhejiang University Hangzhou China [email protected] Zhejiang University Hangzhou China [email protected] Zhejiang University Hangzhou China [email protected] Zhejiang University Hangzhou China [email protected] Junbo Zhao is the corresponding author. Zhejiang University Hangzhou China [email protected] Tabular data — also known as structured data — is one of the most common data forms in existence, thanks to the stable development and scaled deployment of database systems in the last few decades. At present, however, despite the surge brought by large pre-trained models in other domains such as ChatGPT <cit.> or SAM <cit.>, how to extract common knowledge across tables at a scale that may eventually lead to generalizable representations for tabular data remains largely unexplored. Indeed, there have been a few works around this topic. Most (if not all) of them are limited to the scope of a single table or a fixed form of schema. In this work, we first identify the crucial research challenges behind tabular data pre-training, particularly towards the cross-table scenario. We position the contribution of this work as two-fold: (i)-we collect and curate nearly 2k high-quality tabular datasets, each of which is guaranteed to possess clear semantics, clean labels, and other necessary meta information. (ii)-we propose a novel framework that allows cross-table pre-training, dubbed CT-BERT. Noticeably, in light of pioneering the scaled cross-table training, CT-BERT is fully compatible with both supervised and self-supervised schemes, where the specific instantiation of CT-BERT depends on the downstream tasks. We further propose and implement contrastive-learning-based and masked table modeling (MTM) objectives in CT-BERT, inspired by the computer vision and natural language processing communities but carefully tailored to tables. The extensive empirical results on 15 datasets demonstrate CT-BERT's state-of-the-art performance, where both its supervised and self-supervised setups significantly outperform the prior approaches. CT-BERT: Learning Better Tabular Representations Through Cross-Table Pre-training Junbo Zhao ================================================================================= § INTRODUCTION With the extensive application of database management systems and the vigorous development of the internet industry, tabular data — also known as structured data — truly abounds. Indeed, the accumulation of scaled tables stored in databases has brought significant value to industry and individuals, through tech stacks like data mining or the development of OLAP databases. Notably, over the past decade, various large-scale collections of tabular datasets have been proposed <cit.>, and they were used for tasks like tableQA <cit.>, table interpretation <cit.>, table expansion <cit.>, etc. Despite that, how to enable large-scale, distributed, and cross-table pre-training very much remains untapped. This, unfortunately, is in stark contrast to other communities such as computer vision and natural language processing. In both of these domains, techniques like pre-training followed by fine-tuning have long established a dominant methodological status, such as BERT <cit.>, CLIP <cit.>, ChatGPT <cit.>, GPT4 <cit.>, SAM <cit.>, etc.
In hindsight, the successes of these large-scale models lie in their ability to extract common semantic structure from the seen/unseen input and condense this knowledge/common sense into a vectorial representation. The emergence of this capacity stems from a scaled pre-training process on a gigantic amount of text or vision data across the domains. Recently, a few works have attempted to learn contextualized representations from tabular data through neural networks, or more specifically the transformer model <cit.>, such as TabTransformer <cit.>, VIME <cit.>, TabNet <cit.>, SAINT <cit.>, etc. While the concept is truly promising, these approaches are limited to single-table training with a fixed form of schema. Most closely related to our work are TransTab <cit.> and PTab <cit.>. Both approaches note the importance of cross-table learning. However, they process the table into a proximal form of text data, for instance by converting a sample row in the table into a sentence, without doing much adaptation specifically for the structured data. This weakened coupling of the data values in the tables with the schema/meta/column names has arguably prevented these approaches from scaling and absorbing common knowledge.

§.§ Challenges In what follows, we identify the core challenges that remain in scaled and cross-table pre-training. C1. How can pre-training models accept inputs from heterogeneous tables, given the significant differences between tables? For instance, the feature value "apple" appears under the column names "fruit" and "My_Laptop" in two different tables, conveying completely different meanings. C2. Unlike image or text data where the pixels and word/character tokens are ordered, arbitrarily permuting any table's rows or columns does not change its semantic meaning. We dub this property permutation invariance, unique to tabular data. Thus, how can the pre-training mechanism be compatible with this nature of tabular data? C3. Still driven by the difference against common vision or text data, how do we design a suitable cross-table pre-training objective, given that there is no obvious context or spatial structure in tabular data?

§.§ Key Idea behind CT-BERT Ideally, in order for the pre-trained model to properly acquire the common knowledge from multiple heterogeneous tables, the model should be encouraged to learn the innate similarities or dissimilarities among the tabular data distribution. However, as we posited in the challenges, directly utilizing the original form of the data (or its corresponding embedding) may cause unentangleable confusion. Let us give a concrete example; given two tables with similar schemas, the two cell values "10 meters" and "10 kg" look nearly identical on the surface. Despite that, directly converting them to embeddings may inherently confuse the model and adversely impact convergence or training difficulty. Abstracting away from this example, to cope with this challenge, the pre-training methodology must be able to reconcile different metric systems or different notations. It is true that we could write heuristic rules to tackle this problem, but the number of such rules would surely be insurmountable. In that regard, we outline the core idea behind CT-BERT. In a nutshell, any provided table can always be decomposed into features, i.e., the data curated column-wise, together with tokens drawn from the schema information such as the column name or other textual meta-information. Instead of following a normal embedding-based encoding approach, we proactively combine the feature with the token information by casting them into a form of textual representation. For example, we convert the feature value "apple" combined with the schema information to "fruit is apple", which we dub a phrase, as the atomic representation of the cell value in tabular data. This allows us to distinguish the same feature value "apple" in the columns "fruit" and "My_Laptop", respectively. We postulate that this manifests several merits. In particular, challenge C1 can be both theoretically and empirically solved, and this formulation is free of heuristic rules, except for the template for sticking the feature and token together.
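To illustrate this textualization, a minimal Python sketch follows; the exact template and tokenization used by CT-BERT may differ, and the function name is our own.

def row_to_phrases(row):
    # tie each cell value to its column name, e.g. {"fruit": "apple"} -> ["fruit is apple"],
    # while {"My_Laptop": "apple"} -> ["My_Laptop is apple"]; permuting the table's columns
    # merely permutes the resulting phrases, so no spurious order is introduced
    return [f"{col} is {val}" for col, val in row.items()]

print(row_to_phrases({"fruit": "apple", "price": 3.5}))
# ['fruit is apple', 'price is 3.5']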
Instead of following a normal embedding-based encoding approach, we proactively combine the feature with the token information, by casting them into a form of textual representation. For example, we convert the feature value "apple" combined with the schema information to "fruit is apple", which we dub as a phrase, as the atomic representation of the cell value in tabular data. This allows to distinguish the same feature value "apple" in column "fruit" and "My_Laptop" respectively. We postulate that this manifests several merits. In particular, the challenge C1 can be both theoretically and empirically solved, and this formation is rid of many heuristic rules, except the template for sticking the feature and token together. §.§ Our Methodology:  Essentially, CT-BERT bases itself upon the phrase as the atomic representation of each unit in any provided table, in combination of the feature (column name/meta) with the feature value. We then process each atomic element similarly to word embedding in NLP. Towards the challenge C2 of the permutation invariance property, we propose a novel transformer <cit.> encoding architecture that is adapted to cater to this nature of tabular data. As a pioneer work to enable cross-table pre-training, we devise CT-BERT to be compatible with both supervised and self-supervised scenarios. In that regard, we profoundly categorized the available tables drawn from databases by a standard whether there exists a clear label column or not, that we direct it to supervised and self-supervised learning paradigms respectively. On one hand, for supervised learning, we propose a supervised contrastive learning-based objective to better cluster samples with the same label while allowing different labels to be uniformly distributed over the hypersphere of tabular representations. On the other hand, in order to take advantage of large-scale unsupervised data, we propose another pre-training method of masked table modeling (which we call MTM) — adapted from the MLM objective in the NLP community <cit.> — which facilitates to mask some features in the atomic then let the model predict the recovery (for challenge  C3). We believe that if the model can predict the masked features from the retained features, then the model can learn the underlying relationship between the features. Similar to CV or NLP, this relationship serves as the foundation to manifest the shareable knowledge that is migrated across tables. §.§ Contributions To wrap up, the contribution of this article is deemed two-fold. For one thing, we collect and curate nearly 2,000 tabular datasets, each of which is guaranteed to possess clear semantics, clean labels, and other necessary meta information. We treat these high-quality and labeled datasets as the foundation to launch large-scale pre-training. For another, we propose a generic and efficient cross-table pre-training solution, dubbed as  Cross-Table pre-Training framework (). CT-BERT promotes several novel development bullets including but not limited to: (i)-a novel paradigm compatible with both supervised and self-supervised objectives, (ii)-a contrastive learning and masked table modeling (MTM) objectives for pre-training tables, and a novel transformer architecture tailored to the permutation invariance nature of tabular data. Our pre-trained tabular model can support fine-tuning or few-shot learning for prediction on tables of any shape. The remainder of the paper is organized as follows. 
In Section <ref>, we detail the table pre-training database we contributed. In Section <ref>, we present the proposed CT-BERT cross-table pre-training framework. In Section <ref>, we conduct extensive experiments to evaluate the effectiveness and superiority of CT-BERT.

§ RELATED WORKS We provide a brief background on representation learning, models for tabular data, and self-supervised pre-training methods. §.§ Representation Learning In recent years, with the development of pre-trained large language models ("LLMs") like GPT-3 <cit.>, the pre-training then fine-tuning and prompting paradigms have attracted attention. These methods typically train models with self-supervised representation learning methods on large-scale unstructured text and structured knowledge bases, and then fine-tune them or use them for various downstream tasks. In early work in natural language, including Word2Vec <cit.> and GloVe <cit.>, pre-training distributed representations of words provided significant improvements over randomly initialized parameters. However, these methods cannot capture the use of words in different linguistic contexts. This dilemma prompted the development of word representations that can learn context and contextual relationships <cit.>, and these pre-trained language models have achieved tremendous success and produced state-of-the-art results in various NLP tasks <cit.>. Similarly, self-supervised representation learning can also be used for tabular data, such as knowledge bases (KB) and databases, where entities and relationships in the KB can be embedded into continuous vector spaces and then utilized for various downstream tasks, such as KB completion <cit.>, relation extraction <cit.>, entity resolution <cit.>, etc. Although representation learning on text and KBs has been successful, few works have explored directly learning self-supervised representations on large-scale tabular data for tabular modeling. In this work, we introduce CT-BERT, which is the first method for self-supervised pre-training on large-scale tabular data, and the pre-trained model can be fine-tuned for various downstream tabular prediction tasks. §.§ Models for Tabular Data For a long time, traditional machine learning (ML) methods such as tree-based methods <cit.> have dominated this field and have been the preferred choice for most practitioners and data mining competitions (e.g., Kaggle) <cit.>. Recently, many researchers have proposed new neural network-based architectures <cit.> to model tabular data, attempting to challenge the dominance of tree-based models in this field. For example, TabNet <cit.> uses sequential attention to simulate the process of tree decision-making, TabTransformer <cit.> leverages transformers <cit.> to learn categorical features in tables, and AutoInt <cit.> utilizes an attention mechanism <cit.> to model the relationship between user and item features in click-through rate prediction tasks. However, only very few of these neural-network-based works <cit.> attempt to investigate how to handle heterogeneous tabular inputs. As a result, the key advantage of deep learning methods, namely that they can be pre-trained on large-scale datasets, cannot be fully exploited. As described in Section <ref> and <ref>, our proposed CT-BERT can not only accept inputs from heterogeneous tables but also achieves permutation invariance of feature columns and leverages semantic knowledge from table headers and textual features.
These advancements pave the way for CT-BERT to be pre-trained on large-scale datasets for cross-table prediction. §.§ Self-supervised pre-training One of the key reasons for the great success of deep learning in computer vision and natural language processing is that knowledge is learned from a large amount of unlabeled data through a self-supervised pre-training task and then generalized to downstream tasks through fine-tuning. For instance, the masked language modeling (MLM) self-supervised pretext task <cit.> is employed to learn contextual relationships in natural language processing. In computer vision, masked image modeling (MIM) <cit.> and contrastive learning <cit.> have been used to train powerful image representations. Some studies have attempted to extend the success of self-supervised learning to tabular data. These approaches can be roughly categorized into three types: 1) reconstruction of masked inputs <cit.>; 2) contrastive learning similar to that in SimCLR <cit.>; 3) a combination of the first two. For example, VIME <cit.> utilizes autoencoders to reconstruct corrupted table inputs. SCARF <cit.> randomly selects and replaces certain features with draws from the corresponding empirical marginal distributions to construct different views of the same sample. We argue that contrastive learning methods similar to that in SCARF <cit.> are not applicable to large-scale unlabeled cross-table pre-training. Assuming that true labels exist a priori for these unlabeled samples, such contrastive learning methods are likely to push apart samples that actually share the same label, especially for tables with only a few distinct label classes. We are more inclined to believe that methods like masked language modeling (MLM) and masked image modeling (MIM) have greater potential. Therefore, in this work, we formalize this family of approaches for the first time as the masked table modeling (MTM) task. Additionally, we propose a novel masked table modeling method that incorporates semantic cues from table headers, which is better suited to learning cross-table knowledge. § PRELIMINARY §.§ Problem Formulation Consider a tabular dataset D={(𝐱_i,y_i)}_i=1^n, where n refers to the number of samples. 𝐱_i={𝐱_i^cat, 𝐱_i^num}, where 𝐱_i^cat={x_i^1, x_i^2, …, x_i^a} denotes all a categorical features, and 𝐱_i^num∈ℝ^b denotes all b numerical features. y_i∈{1, 2, …, T}, where T refers to the total number of label classes. All samples share the same table header descriptions (column names) 𝐂={c^1, c^2, …, c^a+b}. Our goal is to find the best possible prediction function f_θ to model the mapping between features and labels: f_θ(𝐱_i; 𝐂) = y_i, where θ refers to all trainable parameters of the function f. §.§ Pre-training then Fine-tuning Paradigm in Tabular Domain Given a generic architecture, often called a backbone (such as a Transformer), and a projection head that maps to specific tasks, the model is first pre-trained on a large dataset with self-supervised or unsupervised tasks (e.g., contrastive learning or MLM). The individual feature columns of the dataset {𝐱^cat, 𝐱^num} are converted into the input format 𝐱_i={𝐞^CLS, 𝐞^1, 𝐞^2, ..., 𝐞^a+b}, which is fed into the Transformer model, and the model is optimized using the self-supervised or unsupervised objectives.
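To make this paradigm concrete, the following PyTorch-style sketch outlines the two stages. It is a minimal illustration under our own naming conventions — the backbone class, the projection head, and the generic ssl_loss callable are placeholders for exposition, not the released implementation:

```python
# Minimal sketch of the pre-training-then-fine-tuning paradigm (illustrative only;
# class/function names are our own, not the released implementation).
import torch
import torch.nn as nn

class TabularBackbone(nn.Module):
    """Transformer encoder over [e_CLS, e_1, ..., e_{a+b}]; no positional encoding."""
    def __init__(self, dim=128, layers=4, heads=8):
        super().__init__()
        block = nn.TransformerEncoderLayer(dim, heads, dim_feedforward=256,
                                           dropout=0.3, batch_first=True)
        self.encoder = nn.TransformerEncoder(block, layers)
        self.cls = nn.Parameter(torch.randn(1, 1, dim))   # shared [CLS] embedding

    def forward(self, feat_emb):                          # feat_emb: (B, a+b, dim)
        cls = self.cls.expand(feat_emb.size(0), -1, -1)
        h = self.encoder(torch.cat([cls, feat_emb], dim=1))
        return h[:, 0], h[:, 1:]                          # e_CLS, per-feature states

def pretrain(backbone, proj_head, loader, ssl_loss, epochs=1, lr=1e-4):
    """Stage 1: optimize backbone + projection head with a self-supervised objective."""
    opt = torch.optim.Adam(list(backbone.parameters()) + list(proj_head.parameters()), lr=lr)
    for _ in range(epochs):
        for feat_emb, targets in loader:                  # targets depend on the objective
            z_cls, z_feat = backbone(feat_emb)
            loss = ssl_loss(proj_head(z_cls), z_feat, targets)
            opt.zero_grad(); loss.backward(); opt.step()

def finetune(backbone, num_classes, loader, epochs=1, lr=5e-5, dim=128):
    """Stage 2: drop the projection head, attach a task head, train with cross-entropy."""
    head = nn.Linear(dim, num_classes)
    opt = torch.optim.Adam(list(backbone.parameters()) + list(head.parameters()), lr=lr)
    ce = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for feat_emb, y in loader:
            z_cls, _ = backbone(feat_emb)
            loss = ce(head(z_cls), y)                     # e^CLS drives classification
            opt.zero_grad(); loss.backward(); opt.step()
```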
Then, in the downstream task-specific fine-tuning stage, the pre-trained backbone module is retained, the pre-trained projection head is discarded, a classification head for the new task is constructed, and the output 𝐞^CLS is used for multi-class classification, optimized via the cross-entropy loss <cit.>, etc. § TABPRETNET: A LARGE-SCALE SEMANTIC TABULAR DATABASE In recent years, the field of cross-table pre-training has been relatively underexplored. One major challenge lies in the lack of a clean and high-quality tabular dataset. Just as the proposal of ImageNet <cit.> greatly propelled the advancement of computer vision representation learning and influenced various other domains, such as self-supervised learning and transfer learning, a similar catalyst is needed for tabular representation learning. Therefore, in this work we contribute a large-scale semantic tabular database, which we call TabPretNet, to better train our CT-BERT. TabPretNet is a large-scale, high-quality tabular database built from various public tabular dataset repositories and curated through strict data cleaning. These tabular datasets are collected from OpenML[https://www.openml.org/], UCI[https://archive.ics.uci.edu/datasets], CATALOG[https://catalog.data.gov/dataset], and Kaggle[https://www.kaggle.com/]. We have open-sourced TabPretNet[https://drive.google.com/file/d/1-2m1tyejUV5_bZduqZw1ZXS1BUSkhzVl/view?usp=drive_link] and hope to facilitate future research in the field of tabular representation learning. With the advent of the Big Data era, the proliferation of database technologies has led to an explosion of tabular data on the Internet. These numerous tabular datasets can help more complex and powerful models and algorithms learn more general tabular representations. Representations, in turn, are the standard signal linking many machine learning applications today. This means that more novel AI techniques can be made accessible to databases, such as allowing large language models (e.g., ChatGPT <cit.>) to understand databases. However, the quality of tables in Internet databases varies greatly, which can significantly impact the learning performance of models. For example, column names in some tabular datasets are anonymized or unclear to avoid compromising privacy (e.g., named f1, f2, etc.), which removes semantic knowledge that is important for understanding the tabular data. In addition, some tabular datasets suffer from too many missing values, redundant feature columns, a lack of consistent formatting, etc. Therefore, in this work, we spent considerable effort filtering and cleaning the tabular data collected from Internet databases. Specifically, for each table, our data cleaning includes the following steps: (1) Check how semantically meaningful the column names are. For example, the column names {user_age, weight, monthly_income} carry high semantic information, while the column names {f1, f2, xyz} carry almost none. We compute a cumulative semantic relevance score for each table and, in our cleaning protocol, discard tables in which fewer than 50% of the features have semantically meaningful column names. (2) Check the missing values. For example, datasets with more than 40% missing values are discarded, because too many missing values can easily lead to biased or inaccurate results. For the retained tables, we fill missing values with the mode (the most frequent value) of the corresponding column.
(3) For categorical features in the tables, we aim to restore them to their original textual values. For numerical features, we employ min-max normalization. This is done to mitigate the impact of inconsistent measurement units across different tables (e.g., kilograms vs. grams). (4) For labeled tables with more than 100 features, we perform feature filtering based on Random Forest importance <cit.> and discard the features with the lowest importance ranking. At present, TabPretNet contains about 17 GB of data, including approximately 1000 labeled datasets and 1000 unlabeled datasets. High-quality and semantically rich labeled datasets are usually harder to obtain, while unlabeled tabular datasets are easier to collect. Therefore, in supervised pre-training, the theoretical upper bound of model performance is expected to be limited by the quantity of available labeled tabular datasets at the data level. In contrast, self-supervised pre-training has the potential for a higher performance upper bound. As suggested in previous research <cit.>, contrastive learning is ill-suited to tables with few distinct label classes: when many samples share the same label, the chance of sampling true negative pairs is low. This is one reason why we propose a novel self-supervised masked table modeling (MTM) pre-training approach. We believe that the contrastive learning-based pre-training approach is more suitable for lightweight labeled scenarios, where the upper limit of the model is determined by the number of available labeled tabular datasets. The self-supervised pre-training approach, on the other hand, may require a large amount of data for training but theoretically has more room for improvement. § METHODS Previously proposed table pre-training methods <cit.> have all been pre-trained on an individual tabular task dataset. As a result, these pre-trained models exhibit notably poor generalization performance on downstream tasks involving other tables. In this section, we detail our proposed cross-table pre-training framework CT-BERT, which improves the generalization ability of pre-trained models by learning shareable knowledge across different tables. The overall architecture is provided in Figure <ref>. As discussed above, cross-table pre-training needs to address three key challenges C1-C3. For C1, in Section <ref> we propose a natural language-like approach to process the input of heterogeneous tables and enhance cross-table transfer learning by leveraging semantic knowledge in the schema. For C2, in Section <ref> we use an adapted transformer encoder <cit.> without positional encoding to model feature-level interactions. For C3, in Section <ref> we propose a novel masked table modeling (MTM) self-supervised pre-training task for large-scale unlabeled dataset scenarios and a contrastive learning-based supervised pre-training task for lightweight labeled dataset scenarios, respectively. Finally, in Section <ref> we describe fine-tuning of the pre-trained model on downstream tasks. §.§ Input Processor on Heterogeneous Tables Feature columns of tables from diverse domains often vary significantly. Previous works <cit.> therefore rely on table-specific feature extractors, also called "feature tokenizers" in the literature, which greatly hinders cross-table learning.
In CT-BERT, we instead observe that a table is essentially multimodal structured data that contains both text (e.g., column names and discrete categorical values) and continuous values. Based on this observation, we use a natural language-like approach and incorporate the column-name schema information to convert every feature into a uniformly formatted feature phrase, e.g., [column name] is [value]. This design has two advantages. First, our model can accept inputs from heterogeneous tables without any table-specific operation, which is a necessary condition for cross-table pre-training. Second, semantic information in the schema allows knowledge learned during pre-training to transfer between similar features across different tables. For example, suppose gender is recorded in two tables: in one table the column name is "gender" and the value is "male", while in the other the column name is "sex" and the value is "man". Based on semantic information, our model encodes the two feature phrases "gender is male" and "sex is man" into embeddings that lie close to each other (e.g., with high cosine similarity). We convert each feature phrase into a low-dimensional embedding, which is then used to model feature interactions in the subsequent stage. The right part of Figure <ref> illustrates how we handle categorical and numerical features separately to obtain the feature embeddings. Categorical Feature. For each sample x_i, each discrete category has a corresponding text description (e.g., 1 for a man, 2 for a woman). We concatenate the column name and the original categorical description to form a feature phrase. Then, we use a pre-trained BERT <cit.> model, which contains generic semantic knowledge, to tokenize the phrase and generate the corresponding embedding for each token. Further, we pool the token embeddings of the j-th feature into one feature embedding 𝐞_i^j∈𝐑^d. In our experiments, we tried average, self-attention <cit.>, and other pooling methods; see Section <ref> for ablation experiments on these pooling strategies. Among them, average pooling performs best, so unless otherwise specified, average pooling is used by default. Numerical Feature. It is known that, at least for now, pre-training token embeddings of continuous values is ineffective <cit.>. For numerical features, we therefore process only their column names, in the same way as for categorical features, to obtain the header embedding 𝐜^j∈𝐑^d. We then multiply the normalized numerical value with the corresponding header embedding to get the feature embedding 𝐞_i^j=x_i^j×𝐜^j∈𝐑^d. Note that the normalization of the numerical values is important here, as it helps knowledge transfer across tables: the same numerical feature may use different measurement units in different tables (e.g., height in meters in one table and in centimeters in another). We note that previous works <cit.> have also tried to use column names to convert each sample into a sequence of text tokens, with the subsequent learning built at the token level. We think such token-level interactions are more suitable for extracting textual semantic information from tables (e.g., the TableQA task <cit.>), but are not well-suited for our target column prediction task.
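In contrast to such token-level processing, our feature-level input processing can be sketched as follows: a categorical cell becomes the phrase "[column name] is [value]" whose BERT token embeddings are average-pooled, while a numerical cell becomes its normalized value multiplied by the header embedding. This is a simplified illustration assuming the HuggingFace transformers BERT interface; the function names and the omission of the projection from BERT's 768 dimensions down to the model dimension d are our own simplifications, not the released code:

```python
# Illustrative sketch of the phrase-based input processor (not the released code).
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")

def text_embedding(text: str) -> torch.Tensor:
    """Average-pool the BERT token embeddings of a phrase into a single vector."""
    tokens = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = bert(**tokens).last_hidden_state       # (1, n_tokens, 768)
    return hidden.mean(dim=1).squeeze(0)                # average pooling over tokens

def categorical_embedding(column: str, value: str) -> torch.Tensor:
    # e.g. column="gender", value="male" -> phrase "gender is male"
    return text_embedding(f"{column} is {value}")

def numerical_embedding(column: str, value: float, vmin: float, vmax: float) -> torch.Tensor:
    header = text_embedding(column)                     # header embedding c^j
    x = (value - vmin) / (vmax - vmin + 1e-12)          # min-max normalization
    return x * header                                   # e^j = x^j * c^j
```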
Under token-level modeling, for example, a "work" column with the value "associate professor" is first converted into three token embeddings: [work], [associate], and [professor]. The subsequent model then learns the relationship between the [associate] token and the [professor] token within the same column, which is unreasonable. The experimental results in Section <ref> also validate this observation. In our design, in contrast, one column corresponds to one feature embedding, and the subsequent model learns at the feature level. This is a straightforward but effective enhancement. At the same time, for tables with a large number of features, such a design also improves computational efficiency and memory usage. §.§ Feature Interaction There is no inherent order among the columns of a table; in other words, tables possess permutation invariance along the column dimension. Previous tabular modeling works <cit.> often overlooked this aspect by directly employing the transformer architecture <cit.>. We therefore make two modifications to the standard transformer encoder to adapt it to tabular data: we 1) discard the positional encoding and 2) use a shared-parameter fully connected feed-forward network in each transformer encoder block. Our adapted transformer encoder block thus contains two sub-layers: a multi-head self-attention layer and a shared-parameter fully connected feed-forward layer. In addition, a residual connection <cit.> is applied around each sub-layer, followed by layer normalization <cit.>. The multi-head self-attention mechanism is the key to modeling feature interactions. It learns the relationships between features through Query, Key, and Value matrices, and is calculated as follows: MultiHead(𝐇^l) = Concat(head_1, …, head_i, …, head_h)𝐖^O, head_i = Attention(𝐇^l𝐖_i^Q,𝐇^l𝐖_i^K,𝐇^l𝐖_i^V), Attention(𝐐,𝐊,𝐕)=Softmax(𝐐𝐊^T/√(d))𝐕, where 𝐇^l∈ℝ^n × d is the input of the l-th layer; 𝐖^O∈ℝ^d × d is a parameter matrix; 𝐖_i^Q, 𝐖_i^K and 𝐖_i^V ∈ℝ^d × d_head; and d_head=d/h is the dimension of each attention head. Inspired by BERT <cit.>, we add a special classification token (𝐞^CLS∈ℝ^d) at the first position of the input sequence in each layer. This special token serves as the aggregate sample representation and is used for the subsequent pre-training and downstream tasks. As described in Section <ref>, we obtain the processed feature embeddings 𝐄={𝐞^1, 𝐞^2, …, 𝐞^a+b} from the raw tabular data, so the first-layer input is 𝐇^0=[𝐞^CLS, 𝐄]. Finally, we model higher-order feature interactions layer by layer through the following calculation: 𝐇^l+1=LayerNorm(𝐇̂+linear(𝐇̂)), 𝐇̂=LayerNorm(𝐇^l+MultiHead(𝐇^l)). §.§ Pre-training Across the Tables Our work is the first to explore large-scale cross-table pre-training. Supervised and self-supervised pre-training are the two major approaches in deep learning. As described in Section <ref>, we contributed TabPretNet, a cross-table pre-training dataset collected from various domains that includes approximately 1000 labeled tables and 1000 unlabeled tables. Based on the nature of this dataset, we explore both supervised and self-supervised cross-table pre-training. Firstly, for the labeled tabular datasets, which are comparatively easier to learn from, we propose a randomly subsampled supervised contrastive learning approach adapted to the cross-table pre-training task.
Secondly, for large-scale unlabeled tabular datasets, some studies have discussed the limitations of contrastive learning-based methods in unlabeled tabular scenarios  <cit.>. So in order to fully leverage the potential of shareable knowledge within unlabeled tabular data, in , we propose a novel masked table modeling (MTM) self-supervised cross-table pre-training method. Details of the two cross-table pre-training approaches are as follows: Supervised contrastive learning. In the labeled tabular scenario, we observe that samples with the same labels tend to have similar feature sets. Based on this observation we make a bold hypothesis: powerful representation should model the invariant factors of feature sets with the same label. We, therefore, propose a random overlapping subsampling method to construct positive and negative samples in contrastive learning. Figure <ref> illustrates how we randomly sample subsets and divide positive and negative pairs. Specifically, for each row (𝐱_i,y_i) we randomly sample k feature subsets {𝐬_i^1, 𝐬_i^2, …, 𝐬_i^k} and set all their labels to y_i. There will be a partial overlap of features between subsets. In this way, feature subsets with the same label form positive pairs, and subsets with different labels form negative pairs. Overall contrastive loss is: ℒ_pretrain^CL(𝐗,𝐲)=1/| B |∑_i∈ B1/| P(i) |∑_p ∈ P(i)Ψ(𝐳_i^CLS,𝐳_p^CLS), Ψ(𝐳_i^CLS,𝐳_p^CLS)=-log(exp(sim(𝐳_i^CLS,𝐳_p^CLS)/τ)/∑_i'∈ Bexp(sim(𝐳_i^CLS,𝐳_i'^CLS)/τ)), where B is the set of samples in a batch; P(i)={p|p∈ B, p≠ i, y_i=y_p}. The previous tabular contrastive learning work SCARF <cit.> focused only on constructing different views of the same samples, simply treating all different samples as negative pairs. This only applies when the sample label classes are very rich such that the sample labels in a batch are almost all different. Compared to the tabular vertical fixed-partitioned contrastive learning method <cit.>, our method can learn more robust sample representations in richer feature subsets by random sampling. Self-supervised MTM. For large-scale unlabeled scenarios, we propose a novel masked table modeling (MTM) self-supervised cross-table pre-training task. On each sample row in all tables, we mask some percentage of features, and then reconstruct them based on the retained features. We argue that if the model is able to successfully reconstruct the masked features from the retained features, then the model is able to learn the underlying relationships between features that can be transferred as shareable knowledge between different tables with similar feature columns, which will eventually indirectly bring closer the representations of samples with similar feature relationships. The middle part of Figure <ref> shows the overview of our self-supervised MTM pre-training method, which can be divided into three steps. First step we select the features that are masked. Given an input table, we first convert all features of each sample into feature embeddings, as described in Section <ref>. Then we mask approximately p^mask features for each row (p^mask is set to 35% in our experiments and further ablation results are shown in Section  <ref>). Specifically, we generate a binary mask vector 𝐦 = [m^1, m^2, …, m^a+b]∈{0, 1}^a+b where m_j is randomly sampled from a Bernoulli distribution with probability p^mask. The "1" in 𝐦 indicates a masked feature and "0" indicates keeping the original feature. 
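As an illustration of this first step, the mask vector could be drawn as in the minimal sketch below; the helper function and its name are ours, and it simply samples the binary mask 𝐦 with masking probability p^mask:

```python
# Illustrative sketch of the mask-selection step (names are ours).
import torch

def sample_feature_mask(num_features: int, p_mask: float = 0.35) -> torch.Tensor:
    """Return a binary vector m of length a+b; 1 = masked feature, 0 = kept feature."""
    return torch.bernoulli(torch.full((num_features,), p_mask))

m = sample_feature_mask(num_features=12)    # e.g. a table row with 12 columns
masked_idx = m.nonzero(as_tuple=True)[0]    # indices to be replaced by the mask token
```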
In the second step, we replace the masked features with a shared learnable vector 𝐞^mask∈ℝ^d, also called the mask token. Note that we add to each mask token an additional header embedding, obtained by pooling the text token embeddings of the corresponding column name. Since there is no order relationship between the columns of a table, the header embeddings here play a role analogous to the position embeddings in masked language modeling (MLM) <cit.> and masked image modeling (MIM) <cit.>. In the third step, we reconstruct the masked features. We feed the masked sample row 𝐱={𝐞^j|m^j=0}∪{𝐞^mask+𝐜^j|m^j=1} into the L-layer transformer encoder to obtain the encoded representations 𝐇={𝐡^j}_j=1^a+b. For the masked numerical features, we pass the encoded representation through a numerical projection matrix 𝐌_pro^num∈ℝ^d× 1 and compute the mean squared error against the original feature values. For the masked categorical features, we pass the encoded representation through a categorical projection matrix 𝐌_pro^cat∈ℝ^d× d and compute the cosine similarity with the original feature embedding 𝐞^j. Here the feature embedding 𝐞^j is calculated in the same way as in Section <ref> but with the column name removed. We formulate the masked table modeling pre-training loss as follows: ℒ_pretrain^mask(𝐗)= 1/| B |∑_i∈ BΦ(𝐱_i,𝐞_i,𝐳_i), Φ(𝐱_i,𝐞_i,𝐳_i) = 1/N^num∑_j=1^N^num(x_i^j-z_i^j)^2 + 1/N^cat∑_j'=1^N^cat(1-sim(𝐞_i^j', 𝐳_i^j')), where B is the set of samples in a batch; z_i^j=𝐡_i^j𝐌_pro^num; 𝐳_i^j'=𝐡_i^j'𝐌_pro^cat; N^num is the number of numerical features; and N^cat is the number of categorical features. We do not use the conventional cross-entropy loss for categorical features because the same category in the same feature column may be labeled inconsistently across tables, which would cause confusion during cross-table pre-training. For example, for the "gender" column, one table may map "man" to label "1" and "woman" to label "2", while another table might do the exact opposite, mapping "man" to "2" and "woman" to "1". Rather than using a completely random masking strategy, we believe the proportion of masked numerical and categorical features can be adjusted according to the downstream task. When the downstream scenario is a regression task, the model needs to predict a continuous value, so a pre-training task that predicts masked numerical features is more helpful. Similarly, for downstream classification tasks, masking is biased toward categorical features. The downstream tasks in our experiments are mainly classification, so we set the mask ratio of categorical to numerical features to 7:3 during pre-training, with an overall mask rate of 35%. §.§ Fine-Tuning on Downstream Tabular Tasks After cross-table pre-training, we discard the original projection head and add a new task layer on top of the transformer encoder. We then fine-tune the parameters on the downstream task datasets. Since the downstream scenarios in our experiments are mainly classification tasks, we employ a simple linear classifier as the task layer. We use the softmax <cit.> to calculate the probability of each label category and use the cross-entropy loss as our empirical supervised loss: ℒ_task(𝐗,𝐲)=-1/N∑_i=1^N∑_j=1^T y_i,jlog(f_θ(𝐱_i)_j), where the label y_i uses one-hot encoding and T is the total number of label categories. § EXPERIMENTS In this section, we evaluate the effectiveness and superiority of CT-BERT on several benchmark tabular datasets.
Specifically, we conducted extensive experiments to demonstrate the following two points: * How does our backbone, which can accept heterogeneous table inputs, compare with current state-of-the-art tabular neural network frameworks when trained without pre-training on a fixed single-table downstream task? * (key) Whether our large-scale cross-table pre-training improves downstream task performance, through self-supervised masked table modeling pre-training in large-scale unlabeled scenarios and supervised contrastive learning pre-training in lightweight labeled scenarios, respectively. §.§ Experimental Setup §.§.§ Datasets The experimental data consist of two parts: upstream large-scale cross-table pre-training datasets and downstream tabular tasks for evaluating the effectiveness of our model and pre-training. Large-scale cross-table pre-training dataset: We collected more than 2000 high-quality datasets with semantically meaningful column names and performed data cleaning, yielding 1000 labeled datasets and 1000 unlabeled datasets. We call this dataset TabPretNet and describe it in detail in Section <ref>. Public downstream tabular tasks: We selected 15 common, high-quality tabular datasets from OpenML-CC18 <cit.> to evaluate the effectiveness of our model and pre-training method. These downstream datasets contain both binary and multi-class classification tasks. The details and source of each dataset are included in Tables <ref> & <ref> in Appendix <ref>. §.§.§ Competing Methods We conduct experiments on the following "shallow" (e.g., tree-based) and neural network-based methods to show the efficacy and efficiency of CT-BERT on tabular learning. Shallow baselines: *  Logistic Regression <cit.> is a linear classification algorithm that models the relationship between input variables and a binary outcome using a logistic function. It is widely used due to its simplicity, interpretability, and ability to handle large datasets efficiently. *  XGBoost <cit.> is an advanced implementation of gradient boosting algorithms. It has gained great popularity in machine learning competitions (e.g., Kaggle) and has long been considered the dominant approach to modeling tabular data. *  LightGBM <cit.> is another gradient boosting tree framework. It employs a novel approach called "Gradient-based One-Side Sampling" (GOSS) to achieve faster training speeds and lower memory usage. Neural network-based baselines: *  MLP (Multilayer Perceptron) <cit.> is a basic feed-forward fully connected artificial neural network architecture, yet it is considered a competitive neural approach on tabular data. *  TransTab <cit.> is a newly proposed tabular framework that combines column descriptions and table cells as the raw input to a transformer and is the current state-of-the-art tabular model. *  FT-Transformer <cit.> is an adaptation of the Transformer architecture <cit.> for tabular data (Feature Tokenizer + Transformer). *  TabNet <cit.> uses sequential attention to simulate the process of tree decision-making, enabling interpretability and more efficient learning on tabular data. *  VIME <cit.> is a self- and semi-supervised learning framework specifically designed for tabular data. *  SAINT <cit.> is a newly proposed hybrid deep learning approach to tabular data problems that performs attention over both rows and columns.
*  DCN-v2 <cit.> is an improved version of the Deep & Cross Network (DCN) and is claimed to automatically and efficiently capture feature interactions in tabular data. *  AutoInt <cit.> is a model for click-through rate prediction, a typical structured-data task. It uses a multi-head self-attentive neural network to learn high-order interactions among input features. §.§.§ Metrics Following previous work <cit.>, we use AUC <cit.> as the main evaluation metric and report the result averaged over 5-fold cross-validation <cit.>. Note that within each fold of the training set, we partition 20% as a validation set, which is used for hyperparameter selection and early stopping. For fairness, we employ the identical dataset splitting setting for all baseline algorithms and CT-BERT on all downstream task datasets. §.§.§ Implementation Details For details of all baseline implementations see Appendix <ref>; the settings for all baselines remain consistent across all experiments unless otherwise specified. In the data pre-processing phase, we scale numerical features to [0, 1] by min-max normalization in all methods. For categorical features, we use ordinal encoding in all baselines. Note, however, that in our CT-BERT we use the raw textual values of the categorical features in order to better exploit their semantic information. CT-BERT uses a 4-layer transformer, where the token embedding dimension is 128, the hidden dimension of the intermediate dense layer is 256, and the self-attention module has 8 heads. We use a dropout of 0.3 in all attention layers and feed-forward layers. We choose ReLU for all activation functions. The supervised pre-training method is trained on the 1000 labeled datasets, and the self-supervised pre-training method is trained on all 2000 datasets. We train CT-BERT using the Adam <cit.> optimizer with a learning rate in {5e-5, 1e-4, 3e-4}, where the learning rate in the fine-tuning phase is smaller than that in the pre-training phase. The batch size is in {64, 128, 256}. We use a pre-trained BERT-base-uncased <cit.> model from Hugging Face[https://github.com/huggingface] to obtain token embeddings rich in semantic information. In the pre-training phase, we set the maximum number of training epochs to 500 for both the supervised contrastive learning and the self-supervised masked table modeling tasks. In the fine-tuning phase, the maximum number of training epochs is 200 and the patience value for early stopping is set to 20. Experiments were conducted with 8 V100 GPUs, an Intel(R) Xeon(R) Gold 6240 CPU @ 2.60GHz, and 128 GB RAM. We use the DeepSpeed <cit.> framework for parallel computation acceleration. DeepSpeed offers a range of optimization techniques, including model parallelism, data parallelism, and mixed-precision training, which improves the efficiency of our large-scale cross-table pre-training, the most computation-intensive part of our experiments. §.§ Overall Performance In this section, we report the overall performance of CT-BERT. The results are shown in Table <ref>. §.§.§ Supervised Learning from Scratch As can be seen in Table <ref>, CT-BERT_NoPT outperforms all existing works on the standardized benchmark datasets on average. Although TransTab <cit.> greatly outperforms the other baseline methods, CT-BERT_NoPT is still higher than TransTab by 0.8% on average. We attribute this to the fact that CT-BERT_NoPT models interactions at the feature level, while TransTab <cit.> models them at the token level, which may not be appropriate for tabular data.
The experimental results also show that TransTab's performance drops abruptly on some datasets, such as car and phishingweb. In addition, we found that CT-BERT_NoPT is comparable to FT-Transformer <cit.> and SAINT <cit.>. We believe this is because, on a single table, CT-BERT_NoPT is essentially similar to these methods, which extract features from the table and then model feature interactions using a similar transformer encoder. The difference, however, is that CT-BERT_NoPT can receive input from heterogeneous tables, which gives our approach a natural advantage in cross-table pre-training, as detailed in Section <ref>. §.§.§ Cross-table Pre-training. Here we mainly compare against CT-BERT trained from scratch. Supervised: In labeled scenarios, our supervised contrastive learning cross-table pre-training model CT-BERT_P_S achieves state-of-the-art average performance. As evident from the results in Table <ref>, CT-BERT_P_S outperforms supervised training from scratch (CT-BERT_NoPT) by 1.29% on average and achieves better performance on 10 out of 15 diverse downstream tabular tasks. Moreover, we observed that CT-BERT_P_S is, on average, competitive with the masked table modeling self-supervised cross-table pre-training method. We attribute this to CT-BERT_P_S's ability to fully leverage the label information, enabling the model to learn more powerful sample representations, whereas self-supervised methods may require a larger amount of training data to achieve significant gains. Self-supervised: In large-scale unlabeled scenarios, as can be seen in Table <ref>, our masked table modeling self-supervised cross-table pre-training model CT-BERT_P_M outperforms supervised training from scratch (CT-BERT_NoPT) by 1.2% on average and achieves better performance on 13 out of 15 diverse downstream tabular tasks. It is noteworthy that our cross-table pre-trained model exhibits significant improvements on the cylinder-bands, higgs, and Amazon datasets. We hypothesize that this can be attributed to the presence of tables in the pre-training data that are closely relevant to these downstream tasks. Therefore, we have reason to believe that masked table modeling pre-training on ultra-large-scale datasets is a highly promising path toward a comprehensive universal table model. CT-BERT is the first attempt at such large-scale cross-table pre-training. Our experimental results demonstrate the feasibility of learning shareable knowledge across different tables through cross-table pre-training, which helps the model achieve better generalization on diverse downstream tasks. Both the supervised and self-supervised pre-training methods achieved good performance. We believe that supervised pre-training places higher requirements on the data but may be better suited to specific scenarios, while self-supervised pre-training has the potential for greater scalability through larger pre-training datasets in the future. §.§ Few-shot Learning As widely recognized, a significant advantage of pre-trained models is that they still work well when the downstream task data are relatively scarce, a setting commonly referred to as few-shot learning <cit.>. This capability stems from the rich shareable knowledge the model learns from large-scale upstream datasets. In the tabular domain, there are numerous practical application scenarios characterized by limited data resources, such as medical diagnosis <cit.>.
In such contexts, the exceptional few-shot learning ability of pre-trained models becomes invaluable. Therefore, we conducted extensive experiments to explore the practical effectiveness of CT-BERT in few-shot learning settings. Specifically, for each downstream classification dataset, we randomly sampled 5/10/20 samples from each class to construct three new 5-shot/10-shot/20-shot tabular datasets. We then performed both supervised training from scratch and pre-training then fine-tuning on these new few-shot datasets. The experimental results are presented in Table <ref>. The self-supervised and supervised pre-trained models significantly outperform the baseline trained from scratch in the few-shot setting. In the 5-shot case, CT-BERT_P_S outperforms training from scratch (CT-BERT_NoPT) by 8.4% on average, and CT-BERT_P_M surpasses it by 3.58% on average. Furthermore, the pre-trained models exhibit a greater improvement when fewer samples are available: the improvement is most significant in the 5-shot case and relatively weaker in the 20-shot case. This is reasonable, since the shareable knowledge learned through cross-table pre-training is relatively more valuable when training data is scarce. In conclusion, these experimental results strongly demonstrate the potential of cross-table pre-training in the context of few-shot learning. §.§ Ablation Studies To demonstrate that modeling at the feature level is more effective than the word token-level modeling used previously for tabular data, we conducted ablation experiments. Specifically, in the ablated variant we do not pool the word token embeddings into one feature embedding but feed them directly into the transformer layers. The experimental result is presented in Table <ref> and shows that feature-level modeling is significantly better than word token-level modeling. Additionally, we evaluated different pooling strategies: average pooling, max pooling, and self-attention <cit.> pooling (a minimal code sketch of these three operators is given after the convergence analysis below). The results are shown in Table <ref>. Among these strategies, average pooling gives the best results. One possible reason is that max pooling may fail to distinguish different feature values in some cases; for example, the maximum value may come from a word token embedding of the column name, which is identical across all sample rows. The self-attention mechanism, on the other hand, may be overly complex for this simple information extraction, whereas average pooling accomplishes it simply and efficiently. §.§ Further Analysis §.§.§ Convergence Curves Figure <ref> compares the convergence curves of the two paradigms: "training from scratch" and "pre-training then fine-tuning". We observe that pre-training then fine-tuning leads to faster convergence and better results. This demonstrates that CT-BERT has learned shareable knowledge beneficial to downstream tasks through cross-table pre-training. Furthermore, pre-training then fine-tuning can achieve reasonable results within a short period of time. This significantly improves efficiency for downstream tasks that do not require high precision, and it partially alleviates the longer training times of neural networks compared with traditional tree-based machine learning methods <cit.>.
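As referenced in the pooling ablation above, the three pooling operators we compared can be sketched as follows. This is an illustrative PyTorch snippet with our own naming, not the exact implementation used in the experiments:

```python
# Illustrative sketch of the three pooling operators compared in the ablation (names are ours).
import torch
import torch.nn as nn

def average_pool(token_emb: torch.Tensor) -> torch.Tensor:   # (n_tokens, d) -> (d,)
    return token_emb.mean(dim=0)

def max_pool(token_emb: torch.Tensor) -> torch.Tensor:
    return token_emb.max(dim=0).values

class SelfAttentionPool(nn.Module):
    """Learned attention weights over the token embeddings of one feature phrase."""
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, token_emb: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.score(token_emb), dim=0)   # (n_tokens, 1)
        return (weights * token_emb).sum(dim=0)
```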
§.§.§ Masking Ratio Previous research <cit.> has suggested that a higher mask rate is required to achieve good performance in masked image modeling, whereas a lower mask rate suffices for masked language modeling. In this experiment, we further investigate the impact of the mask rate on masked table modeling, as shown in Figure <ref>. We find that the model performs well with mask rates between 30% and 50%; an excessively high mask rate leads to a steep drop in performance, while an excessively low mask rate leads to a more moderate decline. We attribute this to the high information density of tabular data: a change in a single feature value can significantly alter the meaning of a sample, so too high a mask rate makes it difficult for the model to learn the correct feature relationships. §.§.§ Hyperparameter Sensitivity Analysis We analyzed the sensitivity to the number of randomly sampled subsets and to the learning rate. We randomly selected several datasets and experimented with the CT-BERT_P_S method. The experimental results are shown in Fig. <ref>. The settings are consistent with Section <ref> except for the corresponding hyperparameters. It can be seen that CT-BERT is robust to these hyperparameters. § CONCLUSION With CT-BERT and TabPretNet, we hope to initiate scaled cross-table pre-training for the database and data mining communities. We regard CT-BERT as a pioneering effort to scale tabular pre-training, working in both supervised and self-supervised manners. We empirically demonstrate that pre-training across large-scale tabular datasets indeed offers substantial efficacy benefits. Viewed through the lens of current LLM development, our model is still small (50M parameters) — roughly the size of BERT-base <cit.> — despite CT-BERT being the largest pre-trained model in tabular modeling thus far. We believe that tabular pre-training is still at the stage NLP was at in the BERT era a few years ago; that is, both model size and data volume still fall far behind current LLMs such as ChatGPT and its rivals <cit.>. On the bright side, the volume of available tabular data is truly gigantic — wherever a database system is deployed there will be tabular data — though it is perhaps much more decentralized than text and vision data. In the future, we hope to explore scaling CT-BERT even further and adapting it to more diversified data domains. § APPENDIX §.§ Baseline architecture and implementation The setup of our baselines follows previous work <cit.> and includes the following methods: * Logistic Regression: We use the default settings of the Scikit-Learn package. The maximum number of iterations is set to 1000. * XGBoost: Implemented based on the XGBoost package. We set the maximum number of estimators in {50, 100, 300} and the max depth in {5, 8, 10}. * LightGBM: Implemented based on the LightGBM package. We set the maximum number of estimators in {50, 100, 300} and the max depth in {5, 8, 10}. * MLP: Dense layers with hidden dimensions {256, 256}. Dropout with a rate of 0.1 is used. The model is trained with batch size ∈ {16, 32, 64, 128}, learning rate ∈ {5e-5, 1e-4, 1e-3}, and early stopping patience of 5 with 100 maximum epochs. * TabNet: We use the official implementation with the default recommended parameters[https://github.com/dreamquark-ai/tabnet].
The model is trained with batch size ∈ {16, 32, 64, 128}, learning rate ∈ {1e-4, 1e-3, 2e-2}, n_a, n_b ∈ {8, 16, 64, 128}, γ ∈ {1.3, 1.5, 1.8}, categorical embedding dimension ∈ {1, 8, 16}, and early stopping patience of 5 with 100 maximum epochs. * DCN-v2: We use the implementation from <cit.>[https://github.com/Yura52/tabular-dl-revisiting-models]. The number of cross layers is 2. The dropout rate for the feed-forward component is 0.1. The MLP part has two dense layers of dimensions {256, 128}. Trained with batch size ∈ {16, 32, 64, 128}, learning rate ∈ {5e-5, 1e-4, 1e-3}, and early stopping patience of 10 within 100 maximum epochs. * AutoInt: We use the implementation from <cit.>. The number of attention layers is set to 2 and the number of attention heads to 2. The MLP part has two dense layers of dimensions {256, 128}; dropout is deactivated; trained with batch size ∈ {16, 32, 64, 128}, learning rate ∈ {5e-5, 1e-4, 1e-3}, and early stopping patience of 10 within 100 maximum epochs. * SAINT: We use the official implementation[https://github.com/somepago/saint]. The embedding size is 32 dimensions and 6 transformer layers are used. The number of attention heads is ∈ {4, 8}. The dropout rate is 0.1 in all attention layers and feed-forward layers. Inside the self-attention layer, the q, k, and v vectors have dimension 16, and in the intersample attention layer they have dimension 64. * FT-Transformer: We use the official implementation[https://github.com/Yura52/rtdl]. The feed-forward component has 128 dimensions and 2 transformer layers are used. The number of attention heads is ∈ {2, 4, 8}. The dropout rate is 0.1. * VIME: We reproduce it in PyTorch <cit.> based on the original official implementation[https://github.com/jsyoon0823/VIME]. We train the model on all training data with a mask rate of 0.3, batch size 128, learning rate 1e-4, and 10 epochs. During the fine-tuning phase, we add a classifier after the encoder with three dense layers of 100 dimensions and ReLU activations, trained with batch size ∈ {16, 32, 64, 128}, learning rate ∈ {5e-5, 1e-4, 1e-3}, and early stopping patience of 10 within 100 maximum epochs. * TransTab: We use the official implementation[https://github.com/RyanWangZf/transtab]. The token embedding has 128 dimensions, 2 transformer layers are used, and the number of attention heads is 8. We train the model on all downstream task data with batch size 64, learning rate 1e-4, dropout rate 0, and early stopping patience of 10 within 100 maximum epochs. We run the pre-training, transfer learning, and vanilla supervised training methods from the paper and take the highest score. §.§ Details of the downstream task datasets The downstream task datasets are mainly from the OpenML-CC18 benchmark <cit.>.
http://arxiv.org/abs/2307.03921v1
20230708072624
Social-Mobility-Aware Joint Communication and Computation Resource Management in NOMA-Enabled Vehicular Networks
[ "Tong Xue", "Haixia Zhang", "Hui Ding", "Dongfeng Yuan" ]
eess.SP
[ "eess.SP" ]
Social-Mobility-Aware Joint Communication and Computation Resource Management in NOMA-Enabled Vehicular Networks Tong Xue, Haixia Zhang, Senior Member, IEEE, Hui Ding, and Dongfeng Yuan, Senior Member, IEEE T. Xue, H. Zhang, H. Ding and D. Yuan are all with Shandong Key Laboratory of Wireless Communication Technologies, Shandong University, Jinan, Shandong, 250061, China. T. Xue and H. Zhang are also with School of Control Science and Engineering, Shandong University, Jinan, Shandong, 250061, China (e-mail: [email protected]; [email protected]). August 12, 2023 The existing computation and communication (2C) optimization schemes for vehicular edge computing (VEC) networks mainly focus on the physical domain without considering the influence from the social domain. This may greatly limit the potential of task offloading, making it difficult to fully boost the task offloading rate with given power, resulting in low energy efficiency (EE). To address this issue, this letter investigates a social-mobility-aware VEC framework and proposes a novel EE-oriented 2C assignment scheme. In doing so, we assume that the task vehicular user (T-VU) can offload computation tasks to the service vehicular user (S-VU) and the road side unit (RSU) by non-orthogonal multiple access (NOMA). An optimization problem is formulated to jointly assign the 2C resources to maximize the system EE, which turns out to be a mixed integer non-convex objective function. To solve the problem, we transform it into separated computation and communication resource allocation subproblems. Dealing with the first subproblem, we propose a social-mobility-aware edge server selection and task splitting algorithm (SM-SSTSA) to achieve edge server selection and task splitting. Then, by solving the second subproblem, the power allocation and spectrum assignment solutions are obtained utilizing a tightening lower bound method and a Kuhn-Munkres algorithm. Finally, we solve the original problem through an iterative method. Simulation results demonstrate the superior EE performance of the proposed scheme. VEC, NOMA, edge server selection, task splitting, spectrum assignment, power allocation.
§ INTRODUCTION With the booming development of intelligent vehicles and wireless communications, a variety of advanced vehicular entertainment services such as high-definition maps have emerged in vehicular networks. Many of these emerging vehicular entertainment services are computationally intensive, and vehicular users (VUs) with constrained computation capability cannot satisfy the quality of service (QoS) requirements of such services. To overcome this, it is crucial to utilize vehicular edge computing (VEC) technology, which leverages the abundant computation resources at nearby edge servers (i.e., road side units (RSUs) and idle service vehicular users (S-VUs)) <cit.>. However, when the VUs offload tasks to the edge servers, the power consumption increases significantly. Improving the transmission rate of offloaded tasks with limited power, i.e., the energy efficiency (EE), has therefore become a major concern in VEC networks. One feasible method is to optimize the communication resources, such as spectrum and power. In addition, designing appropriate task computation policies, such as determining where to offload the computational tasks, is another way to enhance the EE <cit.>. Several works focus on jointly optimizing communication and computation (2C) resource allocation strategies to maximize the EE in orthogonal multiple access (OMA)-enabled VEC networks <cit.>. In addition, non-orthogonal multiple access (NOMA) has been regarded as a potential technology to further enhance the system EE <cit.>. With the help of successive interference cancellation (SIC) at the receiver, co-channel interference can be suppressed, which enhances the system sum-rate and ultimately yields a significant improvement in system EE. Therefore, several works optimize 2C resources by integrating NOMA into VEC networks <cit.>. For instance, Cheng et al. <cit.> proposed a joint optimization strategy for binary task splitting and power control to maximize EE, where the task VU (T-VU) can offload its computation task to the S-VU or RSU by NOMA. With the same goal, based on the minimum distance S-VU selection (MDSS) strategy, Wen et al. applied a NOMA-enabled three-sided matching approach to jointly optimize task splitting and power control in cognitive vehicular networks <cit.>. The works in <cit.> focus on 2C optimization strategies in the physical domain without considering the influence of the social domain. This may greatly limit the potential of task offloading, making it difficult to fully boost the task offloading rate with a given power budget and resulting in low EE. Therefore, it is indispensable to improve the system EE by designing a social-mobility-aware 2C optimization strategy. Inspired by the above analysis, this work designs a social-mobility-aware VEC framework and proposes a novel EE-oriented 2C assignment scheme. In doing so, we assume that the T-VU offloads computation tasks to the S-VU and the RSU by NOMA.
Meanwhile, to improve the resource utilization, we enable T-VUs to reuse the spectrum resources of cellular users (CUs). An optimization problem is formulated to jointly allocate the 2C resources to maximize the system EE while guaranteeing the QoS requirements of all CUs and T-VUs. The formulated optimization problem is mixed-integer and non-convex. To solve it, we decompose it into separate computation and communication resource allocation subproblems. To deal with the computation subproblem, we propose a social-mobility-aware edge server selection and task splitting algorithm (SM-SSTSA) to determine the edge server selection and task splitting. Then, by solving the communication subproblem, the power allocation and spectrum assignment solutions are obtained using a tightening lower bound method and a Kuhn-Munkres algorithm. Finally, we solve the original problem by iteratively solving the two subproblems. Simulation results demonstrate the superiority of the proposed scheme in terms of EE. § SYSTEM MODEL AND PROBLEM FORMULATION §.§ Physical and Social Domain Model This work studies a social-mobility-aware VEC network that utilizes NOMA technology to ensure the differentiated QoS requirements of each CU and T-VU, as shown in Fig. <ref>. In the physical domain, a macro base station (MBS) is deployed to support high-rate data transmission for U CUs indexed by u∈𝒰={1,2,...,U}, and S RSUs indexed by s∈𝒮={1,2,...,S} with coverage radius r are deployed to support the computationally-intensive services of M T-VUs indexed by m∈ℳ={1,2,...,M}. Each RSU is equipped with a mobile edge computing (MEC) server. Given the limited computation capability of T-VUs, we allow the T-VUs to offload computational tasks to nearby RSUs through vehicle-to-infrastructure (V2I) links and to idle S-VUs through vehicle-to-vehicle (V2V) links. It is assumed that there are N idle S-VUs indexed by n∈𝒩={1,2,...,N}. Based on the characteristics of task offloading, this work allows the T-VU to offload tasks to the RSU server and the S-VU by utilizing NOMA. In the social domain, leveraging social relationships can help build trustworthy V2V offloading links and improve the effective task offloading rate with limited power, i.e., the EE <cit.>. In this work, the social relationship graph among VUs is denoted by 𝒢=(Z,δ), where Z denotes the set of all VUs with Z=ℳ∪𝒩, and δ_m,n∈δ={δ_1,1,δ_1,2,...,δ_M,N} is a binary variable representing the social relationship between the mth T-VU and the nth S-VU. If the mth T-VU agrees to share its computation task with the nth S-VU, then δ_m,n=1; otherwise, δ_m,n=0. §.§ Communication and Computation Model In the NOMA-enabled VEC network, it is assumed that there are F available sub-channels (SCs) in total, indexed by f∈ℱ={1,2,…,F}. Without loss of generality, we assume F = U and that each CU uses a single SC. To improve the spectrum resource utilization, the CUs and the T-VUs are allowed to share the spectrum band. It is assumed that only one V2I link and one V2V link utilize NOMA mode to share the SC occupied by one CU.
Therefore, the achievable data rate of the uth CU at time slot t, t∈𝒯={1,2,...,T}, can be expressed as R_u(t)=∑_f∈ℱ B log_2(1+ P_u^op(t)X_u,f(t)H_u(t)/(∑_m∈ℳQ_1+σ^2)), where B represents the bandwidth of each SC, Q_1=(ϵ_m,1(t)+ϵ_m,2(t))P_m^th X_m,f(t)H_m,u(t), with ϵ_m,1(t) and ϵ_m,2(t) representing the power allocation coefficients from the mth T-VU to the RSU and to the S-VU at the tth time slot, respectively, P_m^th is the maximum transmit power of the mth T-VU, P_u^op(t) denotes the optimal transmit power of the uth CU at the tth time slot, and σ^2 is the noise power. The binary variable X_m,f(t)∈{0,1} is defined as the spectrum assignment factor: if the mth T-VU occupies the fth SC at the tth time slot, X_m,f(t)=1; otherwise, X_m,f(t)=0. Similarly, X_u,f(t) is the spectrum assignment indicator of the uth CU at the tth time slot. H_u(t) and H_m,u(t) are the channel power gain and the interference channel power gain associated with the uth CU at the tth time slot, respectively. For each NOMA-enabled V2I and V2V link, it is assumed that the receiver can decode the received messages via SIC, and the decoding order follows the increasing order of the channel coefficients. If H_m,s<H_m,n, the mth T-VU tends to allocate higher power to the sth RSU than to the nth S-VU, such that ϵ_m,1>ϵ_m,2. Under the NOMA protocol, the mth V2I link is decoded first; the mth V2V link is then decoded after the co-channel interference from the mth V2I link is removed by SIC[If H_m,s>H_m,n, the mth V2V link will be decoded first, and the SINR of each receiver changes accordingly.]. Therefore, the achievable data rate of the mth V2I link's receiver (i.e., the sth RSU) at the tth time slot can be expressed as R_m,s(t)=∑_f∈ℱ B log_2(1+γ_m,f(t)), with the SINR γ_m,f(t)= ϵ_m,1(t)P_m^th X_m,f(t)H_m,s(t)/(Q_2+σ^2), where Q_2=∑_u∈𝒰P_u^op(t)X_u,f(t)H_u,s(t)+ϵ_m,2(t)P_m^th X_m,f(t)H_m,s(t). The achievable data rate of the mth V2V link's receiver (i.e., the nth S-VU) at the tth time slot can be expressed as R_m,n(t)=∑_f∈ℱ B log_2(1+γ_m,n,f(t)), with the SINR γ_m,n,f(t)= X_m,f(t)Ψ_m,n(t)Q_3/(Q_4+σ^2), where Q_3=ϵ_m,2(t)P_m^th H_m,n(t), Q_4=∑_u∈𝒰P_u^op(t)X_u,f(t)Ψ_m,n(t)H_u,n(t), H_m,s(t) and H_m,n(t) are the channel power gains from the mth T-VU to the sth RSU server and to the nth S-VU at the tth time slot, respectively, H_u,s(t) denotes the interference channel power gain from the uth CU to the sth RSU at the tth time slot, and H_u,n(t) is the interference channel power gain from the uth CU to the nth S-VU at the tth time slot. The binary variable Ψ_m,n(t), which combines the mobility and social relationships, is defined as Ψ_m,n(t)=k_m,n(t)·δ_m,n(t), where k_m,n(t) is the mobility relationship between the mth T-VU and the nth S-VU at the tth time slot, given by k_m,n(t)=1 if ρ_m,n(t)<ζ_th and k_m,n(t)=0 otherwise, where ζ_th represents the physical-domain threshold, and ρ_m,n(t) is written as ρ_m,n(t)= ψ· f(Δ d_m,n(t))+(1-ψ)· f(Δ v_m,n(t)), where Δ d_m,n(t) is the distance between the T-VU and the S-VU, Δ v_m,n(t) represents the difference in velocity between the T-VU and the S-VU, ψ∈[0,1] is the weight of the distance term, and f(·) is a normalization function. We define a tuple (D_m(t), C_m, β_m(t)) to characterize the task of the mth T-VU at the tth time slot, where D_m(t) is the size of the computation task, C_m is the number of CPU cycles required for computing 1 bit of data, and β_m(t)={β_m,1(t),β_m,2(t)}∈[0,1], where β_m,1(t) represents the fraction of the computing task offloaded from the mth T-VU to the RSU server and β_m,2(t) is the fraction computed by the S-VU.
Thus, (1-β_m,1(t)-β_m,2(t)) denotes the portion of the computing task left for local execution (i.e., at the mth T-VU). Therefore, the task executing delay at the mth T-VU satisfies D_m(t)(1-β_m,1(t)-β_m,2(t))C_m/y_m<T_tol, where y_m (in CPU cycles/s) is the computing resource assigned for executing local tasks, and T_tol denotes the maximum tolerable delay of each T-VU. The task offloading and executing delays from the mth T-VU to the RSU server and to the nth S-VU can be expressed as D_m(t)β_m,1(t)/R_m,s(t)+D_m(t)β_m,1(t)C_m/y_m,s<T_tol and D_m(t)β_m,2(t)/R_m,n(t)+D_m(t)β_m,2(t)C_m/y_m,n<T_tol, where y_m,s (in CPU cycles/s) and y_m,n (in CPU cycles/s) are the computing resources allocated to the mth T-VU by the RSU server and the nth S-VU, respectively. The EE of the NOMA-enabled VEC network is expressed as ξ=∑_t∈𝒯R_total(t)/P_total(t)=∑_t∈𝒯∑_n∈𝒩∑_m∈ℳ∑_s∈𝒮(R_m,s(t)+R_m,n(t))/(P_cir+P̃_m,n,s(t)), where P̃_m,n,s(t)=κ y_m^3+ϵ_m,1(t)P_m^th+κ y_m,s^3+ϵ_m,2(t)P_m^th+κ y_m,n^3, κ is the effective switched capacitance depending on the CPU architecture, and P_cir is the circuit power consumption. §.§ Problem Formulation In this work, our objective is to maximize the EE for task offloading of the NOMA-enabled VEC network by optimizing the edge server selection Ψ, the task splitting β, the spectrum assignment X and the power allocation ϵ. Notably, Ψ, β, X and ϵ are matrices composed of the variables Ψ_m,n(t), {β_m,1(t),β_m,2(t)}, {X_u,f(t), X_m,f(t)} and {ϵ_m,1(t),ϵ_m,2(t)}, respectively. Mathematically, the problem is formulated as 𝒫1:max_Ψ_m,n(t),β_m,1(t),β_m,2(t),X_u,f(t), X_m,f(t),ϵ_m,1(t),ϵ_m,2(t)ξ s. t.  Ψ_m,n(t), X_m,f(t), X_u,f(t)∈{0,1},∀ m,s,n,u,f,t, 0≤β_m(t)≤1,∀ m,t, ϵ_m,1(t)≥ 0, ϵ_m,2(t)≥ 0,ϵ_m,1(t)+ϵ_m,2(t)≤ 1,∀ m,t, R_u(t)≥ R_th,u,∀ f,u,t, ∑_f∈ℱX_u,f(t)= ∑_f∈ℱX_m,f(t)=1, ∀ u,m,t, ∑_u∈𝒰X_u,f(t)=1, ∑_m∈ℳX_m,f(t)≤1,∀ f,t, (<ref>),(<ref>),(<ref>), where R_th,u is the minimum data rate threshold of the uth CU, constraints (<ref>)-(<ref>) describe the feasible task splitting and power allocation of the T-VUs, respectively, constraint (<ref>) represents the QoS requirements of the CUs, constraint (<ref>) ensures that each user (T-VU and CU) can access only one SC, and each SC can be shared by one CU and at most one T-VU according to constraint (<ref>). It is obvious that (<ref>) is a fractional programming problem, which can be converted into a subtractive form <cit.>. Therefore, (<ref>) is reformulated as max_Ψ_m,n(t),β_m,1(t),β_m,2(t),X_u,f(t), X_m,f(t),ϵ_m,1(t),ϵ_m,2(t)∑_t∈𝒯(R_total(t)-ξ P_total(t)). § SOLUTION OF THE EE OPTIMIZATION PROBLEM Since the communication and computation resource decisions of 𝒫1 are made in each time slot and there is no interdependence among time slots, we transform the optimization problem over all time slots into a per-time-slot optimization problem. However, the resulting per-time-slot problem is still non-convex, and it is difficult to obtain the global optimal solution. As an alternative, we decompose it into 1) a computation resource optimization subproblem 𝒫2 and 2) a communication resource optimization subproblem 𝒫3. 𝒫2 and 𝒫3 are given by 𝒫2:  max_Ψ_m,n(t),β_m,1(t),β_m,2(t) (R_total(t)-ξ P_total(t)) s. t.   Ψ_m,n(t),Ψ_m,s(t)∈{0,1},∀ m,s,n,   (<ref>), (<ref>), 𝒫3: max_X_u,f(t),X_m,f(t),ϵ_m,1(t),ϵ_m,2(t)(R_total(t)-ξ P_total(t)) s. t.   X_m,f(t), X_u,f(t)∈{0,1},∀ m,u,f,   (<ref>), (<ref>), (<ref>)-(<ref>). It is seen that 𝒫2 is NP-hard. To find a tractable solution, we design a heuristic SM-SSTSA as shown in Algorithm 1.
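To make the role of the EE parameter ξ in the subtractive reformulation concrete, the following minimal Python sketch shows a Dinkelbach-style outer loop that alternates between the two subproblem solvers and updates ξ. The solver and evaluation functions are hypothetical placeholders (stand-ins for Algorithm 1 and the communication resource allocation described below), not the algorithms of this letter.

# Illustrative sketch only: Dinkelbach-style update of the EE parameter xi for the
# fractional objective R_total / P_total, with hypothetical placeholder solvers.
def maximize_energy_efficiency(solve_computation, solve_communication,
                               evaluate_rate_and_power, tol=1e-3, max_iter=50):
    xi = 0.0  # current EE estimate
    for _ in range(max_iter):
        psi, beta = solve_computation(xi)                 # server selection, task splitting (cf. P2)
        x, eps = solve_communication(xi, psi, beta)       # spectrum assignment, power allocation (cf. P3)
        rate, power = evaluate_rate_and_power(psi, beta, x, eps)
        if abs(rate - xi * power) < tol:                  # subtractive objective ~ 0 at the optimum
            break
        xi = rate / power                                 # Dinkelbach update
    return xi, (psi, beta, x, eps)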
Then, to solve the communication resource allocation subproblem, we decouple 𝒫3 into a power allocation subproblem and a spectrum assignment subproblem, which can be solved iteratively. As proved in <cit.>, when the spectrum assignment variable is fixed, (<ref>) can be lower-bounded as R_m,s(t)+R_m,n(t)-ξ P_total(t) ≥ Φ_1(t) + Φ_2(t) -ξ P_total(t), where Φ_1(t)=b_1log_2γ_m,f(t)+c_1 and Φ_2(t)=b_2log_2γ_m,n,f(t)+c_2, and the coefficients b_1, b_2, c_1 and c_2 are b_1=γ̃_m,f(t)/(1+γ̃_m,f(t)), b_2=γ̃_m,n,f(t)/(1+γ̃_m,n,f(t)), c_1=log_2(1+γ̃_m,f(t))-[γ̃_m,f(t)/(1+γ̃_m,f(t))]log_2 γ̃_m,f(t), c_2=log_2(1+γ̃_m,n,f(t))-[γ̃_m,n,f(t)/(1+γ̃_m,n,f(t))]log_2 γ̃_m,n,f(t); the bound is tight when γ_m,f(t)=γ̃_m,f(t) and γ_m,n,f(t)=γ̃_m,n,f(t). Then, maximizing the lower bound of the objective function in (<ref>) can be written as max_ϵ ( Φ_1(t) + Φ_2(t) -ξ P_total(t)). Denoting ϵ_m,1(t)=2^w_m,1(t) and ϵ_m,2(t)=2^w_m,2(t), the power control subproblem can be rewritten as 𝒫4:  max_w_m,1(t),w_m,2(t)( Φ_1(t) + Φ_2(t) -ξ P_total(t)) s. t.    2^w_m,1(t)≥ 0, 2^w_m,2(t)≥ 0,∀ m,  2^w_m,1(t)+2^w_m,2(t)≤ 1,∀ m,   (<ref>),(<ref>),(<ref>). Since (<ref>) is a standard convex optimization problem, we adopt Lagrange dual decomposition to solve it. Given ϵ, the spectrum assignment subproblem is a complicated matching among CUs, T-VUs and SCs, which is proven to be NP-hard. From (<ref>)-(<ref>), the matching between CUs and SCs is one-to-one. To facilitate the solution, the complex matching among CUs, T-VUs and SCs is therefore transformed into a new matching between CUs and T-VUs. The new spectrum assignment variable between the uth CU and the mth T-VU at the tth time slot is denoted as X_u,m(t). Therefore, the spectrum assignment subproblem can be rewritten as 𝒫5:  max_X_u,m(t)(R_total(t)-ξ P_total(t)) s. t.  X_u,m(t)∈{0,1},∀ u,m, ∑_m∈ℳX_u,m(t)≤1, ∀ u, ∑_u∈𝒰X_u,m(t)=1,∀ m, which can be solved by the Kuhn-Munkres algorithm. To solve 𝒫1, JCCRAA is proposed as shown in Algorithm 2, which consists of solving the computation resource allocation subproblem and the communication resource allocation subproblem. In Algorithm 2, Ψ and β are first obtained by running Algorithm 1. Then, X and ϵ are obtained by using the tightening lower bound method and the Kuhn-Munkres algorithm. Next, by substituting the obtained spectrum assignment and power allocation into Algorithm 1, the S-VU selection and task splitting strategies are updated. This process is repeated until convergence, at which point the original problem is solved. § SIMULATION RESULTS AND ANALYSIS Extensive simulations are conducted to show the performance of the proposed algorithm. It is assumed that all the users are located within a target rectangular area of 1000 m× 1000 m. The simulation parameters are set according to 3GPP TR 36.885 <cit.>, where an MBS is located at the center of the area and a number of RSUs with r= 150 m are located at the roadside in the area. The number of lanes is 6, and the width of each lane is 4 m. The average inter-VU distance for vehicles driving in the same lane is 2.5v m, with v representing the moving speed of the vehicles in meters per second. Besides, we set P_u^op=20 dBm, P_m^th= [15, 30] dBm, and D_m=[10^4,10^5] bits. The impact of the number of T-VUs, the size of the offloaded tasks and the number of SCs on the system EE is simulated. The obtained results are shown in Figs. <ref>-<ref>. To show the superiority of the proposed JCCRAA, three baselines are simulated and compared: 1) NOMA-MDSS-TSCRA algorithm, which is composed of the MDSS and the proposed task splitting and communication resource allocation algorithm.
2) RSU-SAPC algorithm, which is composed of the RSU-based offloading strategy and the proposed communication resource allocation algorithm. 3) OMA-JCCRA algorithm, which applies the proposed JCCRA algorithm under orthogonal multiple access. The system EE for different numbers of T-VUs is shown in Sim1, from which we see that the EE decreases as the number of T-VUs increases for all the simulated algorithms. The reason is that, as the number of T-VUs increases, the competition for the limited communication resources intensifies, resulting in severe co-channel interference and a degradation in EE performance. In addition, we see that the proposed NOMA-JCCRAA performs best, and with the social-mobility-aware algorithm, a gain of approximately 17%-32% can be achieved. Sim2 shows the effect of the size of the offloaded tasks at each T-VU on the EE performance. The simulation results reveal that as the size of the offloaded tasks increases, the EE decreases. This is attributed to an increase in task delay, making it difficult to satisfy the delay constraint and ultimately reducing the EE performance of the VEC network. From Sim3, we see that the EE increases when the number of available SCs increases from 30 to 60 for all the simulated algorithms. This is because when the number of available SCs increases, more users can occupy spectrum bands individually, improving the system EE. § CONCLUSIONS This letter investigated social-mobility-aware EE maximization in VEC networks, where the T-VUs can offload their computation tasks to the S-VUs and the RSUs via NOMA. An EE maximization problem was formulated to jointly assign the 2C resources. Since the resulting optimization problem is NP-hard, an iterative JCCRAA was proposed to solve it. Simulation results have shown that the proposed JCCRAA not only helps to appropriately allocate the communication and computation resources, but also achieves a system EE gain of approximately 17%-32% by using the proposed social-mobility-aware strategy.
http://arxiv.org/abs/2307.04931v1
20230710224039
Modelling the effect of 3D temperature and chemistry on the cross-correlation signal of transiting ultra-hot Jupiters: A study of 5 chemical species on WASP-76b
[ "Joost P. Wardenier", "Vivien Parmentier", "Michael R. Line", "Elspeth K. H. Lee" ]
astro-ph.EP
[ "astro-ph.EP" ]
Ultra-hot Jupiters are perfect targets for transmission spectroscopy. However, their atmospheres feature strong spatial variations in temperature, chemistry, dynamics, cloud coverage, and scale height. This makes transit observations at high spectral resolution challenging to interpret. In this work, we model the cross-correlation signal of five chemical species – Fe, CO, H_2O, OH, and TiO – on WASP-76b, a benchmark ultra-hot Jupiter. We compute phase-dependent high-resolution transmission spectra of 3D SPARC/MITgcm models. The spectra are obtained with gCMCRT, a 3D Monte-Carlo radiative-transfer code. We find that, on top of atmospheric dynamics, the phase-dependent Doppler shift of the absorption lines in the planetary rest frame is shaped by the combined effect of planetary rotation and the unique 3D spatial distribution of chemical species. For species probing the dayside (e.g., refractories or molecules like CO and OH), the two effects act in tandem, leading to increasing blueshifts with orbital phase. For species that are depleted on the dayside (e.g., H_2O and TiO), the two effects act in an opposite manner, and could lead to increasing redshifts during the transit. This behaviour yields species-dependent offsets from a planet’s expected K_p value that can be much larger than planetary wind speeds. The offsets are usually negative for refractory species. We provide an analytical formula to estimate the size of a planet’s K_p offsets, which can serve as a prior for atmospheric retrievals. We conclude that observing the phase-resolved absorption signal of multiple species is key to constraining the 3D thermochemical structure and dynamics of ultra-hot Jupiters. radiative transfer – methods: numerical – planets and satellites: atmospheres – planets and satellites: gaseous planets § INTRODUCTION Ultra-hot Jupiters are an extreme class of exoplanet with equilibrium temperatures greater than ∼2000 K (). They offer a unique opportunity to study atmospheric physics and chemistry under conditions that do not prevail on any of the planets in our own Solar System. To date, the formation history of ultra-hot Jupiters is largely unknown, but constraining the elemental abundance ratios of their atmospheres can shed light on their origins, accretion mechanisms, and their migration through the protoplanetary disk (). Ultra-hot Jupiters are ideal targets for atmospheric characterisation in transmission, thanks to their extended atmospheres, short orbital periods (1-2 days), and simple chemical inventory.
However, one aspect that complicates the interpretation of their spectra is their inherent “3D-ness” (). Ultra-hot Jupiters are tidally locked, which means that they have a permanent dayside and a permanent nightside with very different temperature structures and chemical compositions. On the hot, puffy dayside refractories and alkalis such as Fe, Mg, Ca, Ba, K, and Na exist in their atomic or ionised form, while molecules such as H_2, H_2O, and TiO get thermally dissociated. On the nightside, the temperature is much lower, allowing for cloud formation to occur (). The large day-night contrast results in steep thermochemical gradients and scale-height variations across the terminator region of the atmosphere, which is probed by transmission spectroscopy. Additionally, the day-night contrast drives fast winds in the order of (). The wind profile of ultra-hot Jupiters can be decomposed into two contributions: a day-to-night flow that carries material from the dayside to the nightside of the planet, and (depending on the drag conditions in the atmosphere) a superrotating jet around the equator (). Arguably the best technique for studying ultra-hot Jupiters is ground-based high-resolution spectroscopy (HRS – ). Thanks to its ability to resolve individual spectral lines and perform local measurements, HRS can shed light on atmospheric physics that is not accessible to low-resolution (i.e., HST and JWST) observations. As a planet orbits its star, its radial velocity changes and its spectral lines are periodically Doppler-shifted. This allows for the planet signal to be isolated from stellar and telluric contributions. Over the past few years, planets such as WASP-33b (e.g., ), WASP-76b (e.g., ), WASP-121b (e.g., ), KELT-9b (e.g., ), and KELT-20b (e.g., ) have been targeted by a large number of HRS observations, both in the optical and the infrared. These have enabled the detection of a plethora of chemical species[See Table 1 in <cit.> for a relatively recent overview of detected species in the atmospheres of gas giants.], as well as wind-speed measurements (e.g., ). Additionally, for various ultra-hot Jupiters, HRS observations revealed evidence for hot-spot shifts and thermal inversions on the dayside (e.g., ), cloud formation on the nightside, and asymmetries between the morning and evening limbs of the planet (). At high resolution, the “3D-ness” of ultra-hot Jupiters causes the absorption lines in their transmission spectrum to be shifted, broadened, and distorted (e.g., ). This is because stellar light rays encounter different pressures, temperatures, abundances, and line-of-sight velocities as they pass through the atmosphere. A few lines are strong enough to be seen directly, but the vast majority of the planet spectrum lies buried in stellar photon noise. One way to detect the planet signal is to cross-correlate the spectrum with a template model and combine the strengths of all the absorption lines (typically associated with a single chemical species). This results in a cross-correlation function (CCF – ), which is a measure for the similarity between the planet spectrum and the template as a function of radial velocity (i.e., Doppler shift). The total Doppler shift of the planet spectrum is induced by the systemic velocity V_sys of the star, the orbital velocity K_p of the planet, its rotation, and its atmospheric dynamics. 
However, since the (K_p, V_sys) values of a planet are known, it is possible to transform the CCF to a planetary rest frame, in which the only Doppler contributions are from rotation and dynamics. These “anomalous” Doppler shifts contain information about the 3D nature of the planet. Because ultra-hot Jupiters are tidally locked, they rotate appreciably during their transit (), assuming an edge-on orbit. This means that the transmission spectrum may probe different parts of the atmosphere at different orbital phases. At the start of the transit, the leading limb (or morning limb) is largely composed of dayside atmosphere, while the trailing limb (or evening limb) mainly covers the nightside. Then, as the transit progresses, the dayside rotates into view on the trailing limb, and the nightside rotates into view on the leading limb (). Because the terminator regions of ultra-hot Jupiters are characterised by extreme spatial variations in temperature, chemistry, dynamics, and scale height, the rest-frame CCF can be expected to undergo substantial changes over the course of the transit. From an observational standpoint, however, “phase-resolving” the absorption signal of a species in a transiting exoplanet atmosphere is a challenge. To our knowledge, this has only been attempted for (), (), and (). For WASP-76b and WASP-121b, the CCFs of neutral iron (Fe or Fe i) show an increasing blueshift during the transit, with the peak position moving from about 0 km/s at ingress to about -10 km/s at egress (). In the case of WASP-76b, the absorption trail features a “kink” around mid-transit in the CCF map (e.g., Fig. 1 in ). Multiple mechanisms were suggested for this behaviour, including iron condensation on the leading limb of the planet (), a scale-height (temperature) difference between both limbs (), the presence of optically-thick clouds on the leading limb (), or a combination of these effects. More recently, <cit.> proposed that the planet's (spatially varying) magnetic field can also play a role. Using the VLT/ESPRESSO dataset from <cit.>, <cit.> went on to study the phase-dependent behaviour of a large number of other species in WASP-76b besides iron, namely H, Li, Na, Mg, K, Ca ii, V, Cr, Mn, Co, Ni, and Sr ii. They found that the CCFs of all species except atomic hydrogen and lithium were more blueshifted in the final quarter of the transit compared to the first quarter. Recent Gemini-N/MAROON-X observations of WASP-76b by <cit.> confirmed these trends. Moreover, <cit.> reported that the vast majority of refractories and alkalis – species expected to be abundant on the dayside – give rise to absorption trails with the same “kink” feature as iron[Ionised calcium (Ca ii) is an exception, as its absorption originates from higher regions in the atmosphere which are likely subject to atmospheric escape.]. Based on these observations, the authors suggested that the iron signal of WASP-76b is shaped by a global mechanism that also affects other species in the optical, rather than condensation alone. In the infrared, <cit.> used CARMENES data to measure the H_2O and the HCN signals of WASP-76b. They also found substantial differences between the first and second half of the transit, both in terms of Doppler shift and CCF strength. Transit observations resolved with orbital phase are a powerful means to perform local measurements in an exoplanet atmosphere and thus obtain information about its “3D-ness”.
For example, by dividing the VLT/ESPRESSO dataset from <cit.> into two halves, <cit.> were able to retrieve the temperature profile and iron abundance of WASP-76b at four different longitudes. Furthermore, they separately constrained the wind speeds on the trailing and leading limb of the planet. Performing similar retrieval studies in the infrared would be valuable for two reasons. Firstly, they allow to get a better handle on the planet's “3D-ness”, as different species probe different atmospheric regions. Secondly, measuring the abundances of molecules such as CO, H_2O, and OH allows to compute refractory-to-volatile ratios (e.g., Fe/O), which are important in the context of planet formation (). However, the fact that the planet is 3D will make it more difficult to make these inferences, as abundances vary spatially. Therefore, we need 3D forward models to understand how 3D effects manifest in high-resolution spectra and how to best parameterise these effects in 1D or pseudo-2D models used in retrievals. Also, we require 3D forward models to understand what we can really learn from multi-species observations. The aim of this work is to further explore the connection between the “3D-ness” ultra-hot Jupiters and their CCF signals in transmission. To this end, we build on earlier modelling work described in <cit.>. We use a 3D Monte-Carlo radiative transfer framework to simulate phase-dependent transmission spectra for different atmospheric scenarios of WASP-76b, based on outputs of a global circulation model (GCM). We then compute the CCF signals and K_p–V_sys maps for five different chemical species: Fe and TiO in the optical, and CO, H_2O, and OH in the infrared. The motivation for considering these species is that they all have distinct 3D spatial distributions across the planet. Therefore, the absorption lines associated with these species will probe different regions of the atmosphere, each with their own properties. Furthermore, the behaviour we identify for a certain species will be representative of other atoms and molecules with the same spatial distribution. For example, the signals we simulate for iron will be a good proxy for the signals of other refractories too. The structure of this manuscript is as follows. In Section <ref> we describe our WASP-76b models, our radiative-transfer framework, and methods for computing CCF signals, K_p–V_sys maps and absorption regions. In Section <ref>, we present, discuss and interpret our results. Finally, Section <ref> provides a conclusion. § METHODS §.§ Model atmospheres §.§.§ General overview In this work, we consider four different 3D models of the atmosphere of WASP-76b, based on outputs of the SPARC/MITgcm global circulation model (). For the setup of the GCM simulations, we refer the reader to <cit.> and <cit.>. All models assume solar values for metallicity and C/O ratio. We compute the abundances through chemical equilibrium, such that the number fraction of a species in a given atmospheric cell only depends on the local pressure and temperature. In addition, the GCM accounts for condensation through “rainout”, whereby a certain fraction of a species (e.g., Fe or Mg) is removed from a cell when the local temperature lies below the (pressure-dependent) condensation temperature of a condensate containing that species (e.g., ). The process is called rainout, because it assumes that condensates instantly settle to a deeper layer where they do not impact the radiation balance of the atmosphere. 
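As a toy illustration of this rainout treatment, the short Python sketch below zeroes out a species' equilibrium abundance wherever the local temperature falls below the pressure-dependent condensation temperature of the relevant condensate. The array shapes and the condensation-curve function are hypothetical placeholders, not the actual GCM chemistry routines.

import numpy as np

# Minimal rainout sketch (assumed interfaces): abundance and temperature are 3D
# arrays of shape (n_lon, n_lat, n_layer); pressure is a 1D array per layer;
# condensation_temperature is a callable P -> T_cond for the condensate of interest.
def apply_rainout(abundance, temperature, pressure, condensation_temperature):
    t_cond = condensation_temperature(pressure)                 # one value per layer
    condensed = temperature < t_cond[np.newaxis, np.newaxis, :]
    return np.where(condensed, 0.0, abundance)                  # instant settling ("rainout")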
The four models are summarised in Table <ref>. Our nominal model is the same as the weak-drag model from <cit.>. It has a drag timescale τ_drag = 10^5 s, which has been found to provide a better match to the WASP-76b observations than a drag-free atmosphere (). The drag timescale represents the typical time it takes for an air parcel to lose a significant fraction of its kinetic energy. It encapsulates a number of different processes, such as turbulent mixing (), Lorentz-force braking of winds of charged particles due to the planet's magnetic field (), and Ohmic dissipation (). As a result of drag forces, the equatorial jet of the planet is suppressed, such that the atmospheric dynamics are dominated by the day-to-night flow. The second model we consider is the cold-morning-limb model from <cit.> (see “modification 2” in their Fig. 12), in which we artificially reduce the temperature of the leading limb. Consequently, the atmosphere features a strong thermal asymmetry between the (hotter) trailing limb and (cooler) leading limb. In <cit.>, we demonstrated that this model is able to reproduce the shape of the iron signal of WASP-76b (), as opposed to an atmosphere without an east-west asymmetry. In the cold-morning-limb model, the absorption lines undergo an increasing blueshift during the first half of the transit, but they retain an approximately constant Doppler shift during the second half. Our third model is an optically-thick-clouds model à la <cit.>, who reported that the iron signal of WASP-76b can also result from an atmosphere with optically thick clouds. In one of their best-fitting models, they assume the presence of an optically thick cloud deck extending at most 10 scale heights (∼4.3 dex in pressure) above the intersection between the local temperature profile and the Al_2O_3 condensation curve. The vertical extent of the cloud is less than 10 scale heights in case the temperature profile and the condensation curve intersect again at some lower pressure. We add an optically thick cloud deck to our GCM output in exactly the same way (clouds are added post-hoc, so the temperature structure in the GCM is calculated without clouds). The rationale for selecting Al_2O_3 is that this is the cloud species with the highest condensation temperature. Hence, it will have the most drastic impact on the planet's transmission spectrum, as it can exist in hotter regions compared to other cloud species. Because cloud physics is complicated (e.g., ), this modelling approach is very much a simplification of reality. However, the model forms a good limiting case – it allows us to assess the strongest impact that clouds can possibly have on the CCF signal. Our final model is the atmosphere without TiO and VO from <cit.>. It represents a scenario in which TiO and VO are cold-trapped due to condensation (). To emulate the effects of cold-trapping, the opacities of TiO and VO are set to zero during the GCM calculations. Because these molecules are important short-wave absorbers, their absence will change both the dynamics and the temperature structure of the atmosphere. As shown in <cit.>, the no-TiO/VO model naturally has a large temperature asymmetry between its trailing and leading limb, owing to a strong hotspot shift on the dayside that extends to relatively low pressures. Furthermore, our no-TiO/VO model is drag-free (τ_drag→∞), so it features an equatorial jet. The model provides a good test to assess which observational features are robust against a variety of different modelling assumptions.
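To make the post-hoc cloud-deck prescription explicit, the sketch below flags the optically thick layers of a single atmospheric column, starting at the deepest intersection of the temperature profile with the Al_2O_3 condensation curve and extending upward by at most ∼4.3 dex in pressure (10 scale heights) or until the profiles intersect again. Array names and orderings are illustrative assumptions rather than the published model code.

import numpy as np

# Illustrative cloud-deck placement for one column; layers ordered from the bottom
# (highest pressure) to the top of the atmosphere.
def cloud_deck_mask(log10_pressure, temperature, t_condensation, max_extent_dex=4.3):
    condensing = temperature < t_condensation
    if not condensing.any():
        return np.zeros_like(condensing)                       # no cloud in this column
    base = int(np.argmax(condensing))                          # deepest condensing layer
    mask = np.zeros_like(condensing)
    for i in range(base, len(temperature)):
        extent_dex = log10_pressure[base] - log10_pressure[i]  # vertical extent above the base
        if extent_dex > max_extent_dex or not condensing[i]:
            break                                              # cap at ~10 scale heights or 2nd intersection
        mask[i] = True                                         # layer treated as optically thick
    return mask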
§.§.§ Mapping pressures onto altitudes As described in <cit.>, the GCM uses pressure as a vertical coordinate. However, to compute the transmission spectrum of the planet, the atmosphere must be defined on an altitude grid. Thus, before we can feed the models into the radiative-transfer framework, we need to perform the mapping P → z in every atmospheric column, with P the pressure and z the altitude coordinate. To this end, we follow the approach from <cit.> (see their Section 3.2), whereby we assume that the atmosphere is an ideal gas in hydrostatic equilibrium. For every atmospheric cell i, we compute the scale height as follows: H_i = k_B T_i/(μ_i g_i), with k_B the Boltzmann constant, T_i the cell's temperature, μ_i its mean-molecular weight, and g_i its gravity. One important improvement we make compared to <cit.> is that we also account for mean-molecular weight variations across the atmosphere when computing H_i, in addition to temperature and gravity variations. On most of the dayside, the mean-molecular weight is significantly lower than on the nightside due to hydrogen (H_2) dissociation – lowering its value from μ ≈ 2.33 m_h to μ ≈ 1.27 m_h (with m_h the mass of a hydrogen atom). As a result, the scale-height difference between the dayside and the nightside of the models is even larger than suggested in <cit.>. We verified, however, that (not) accounting for thermal dissociation in the P → z mapping does not drastically alter the shape of the final CCF signals (see Appendix <ref>), so the results from <cit.> remain valid. Fig. <ref> shows a to-scale plot of the nominal model mapped onto its altitude grid. The bottom of the atmosphere is situated at a radius R = 1.85 R_jup. At the substellar point (on the dayside), the 10-μbar isobar lies at R = 2.44 R_jup. At the antistellar point (on the nightside), it lies at R = 2.06 R_jup. To prevent absorption lines from being “truncated” by the model boundaries in the radiative transfer, we extrapolate the entire atmosphere to a radius R = 2.64 R_jup (black dashes in Fig. <ref>), assuming that temperatures, abundances, and wind speeds remain constant above the upper GCM boundary of 2 μbar. Because the nightside has a much smaller scale height, it is extrapolated to trivially low pressures where the absorption is zero. §.§ Radiative transfer §.§.§ Monte-Carlo radiative transfer with gCMCRT To compute transmission spectra associated with the 3D model atmospheres, we use gCMCRT[gCMCRT is publicly available from https://github.com/ELeeAstro/gCMCRT ] (). gCMCRT is an updated, GPU-compatible version of the Monte-Carlo radiative transfer code from <cit.>. In <cit.>, we adapted the framework for high-resolution purposes. The main advantage of gCMCRT is that it fully exploits the architecture of a GPU, which comprises hundreds to thousands of individual cores (processing units). Hence, a large number of photon packets can be simulated in parallel, making gCMCRT a lot faster than its predecessor. In <cit.>, we had to restrict our simulations to ∼10,000 wavelength points for computational reasons, but with gCMCRT we can efficiently model high-resolution spectra across the full bandwidth of instruments like VLT/ESPRESSO and Gemini-S/IGRINS. For each of the four WASP-76b models, we simulate the orbit over an angle of 31.3 degrees, covering the transit as well as ingress and egress. We compute 25 transmission spectra, equidistant in orbital phase.
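As an aside, the pressure-to-altitude mapping of the previous subsection amounts to integrating the hydrostatic relation layer by layer with a spatially varying scale height. The Python sketch below illustrates this for a single column; the array names are illustrative and the routine is not the actual model code.

import numpy as np

K_B = 1.380649e-23  # Boltzmann constant [J/K]

# Minimal hydrostatic P -> z integration for one column (illustrative only).
# Layers run from the bottom (highest pressure) to the top; mu may vary with
# layer due to H2 dissociation, and g may likewise be updated with altitude.
def pressure_to_altitude(pressure, temperature, mu, gravity, z_bottom=0.0):
    z = np.empty_like(pressure)
    z[0] = z_bottom
    for i in range(1, len(pressure)):
        scale_height = K_B * temperature[i] / (mu[i] * gravity[i])  # H = k_B T / (mu g)
        z[i] = z[i - 1] + scale_height * np.log(pressure[i - 1] / pressure[i])
    return z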
Furthermore, we assume an edge-on orbit, a semi-major axis of 0.033 AU, a stellar radius of 1.73 R_sun, and an orbital period of 1.81 days – commensurate with the parameters of the WASP-76 system (). We ignore effects of limb darkening as these were reported have a negligible impact on the Doppler shifts obtained from cross-correlation (). As discussed in <cit.>, Monte-Carlo radiative transfer is a stochastic technique. To compute a transmission spectrum, we initialise n photon packets with a random impact parameter and impact angle at each wavelength. During ingress and egress we only illuminate the part of the limb that is blocking the star. The spectrum converges to the true solution in the limit n →∞ (we use n = 10^5 in this work). For each photon packet, we compute the optical depth τ along the line of sight, whereby we Doppler-shift the opacities in each atmospheric cell according to the local line-of-sight velocity v_los that results from winds and planetary rotation (see Fig. <ref>). We refer to Section 3.3 in <cit.> for the relevant equations. Because we account for scattering through absorption cross-sections (a treatment justified in transmission as scattering causes photons to depart from the line of sight and not contribute to the flux), we effectively use gCMCRT as a randomised-transit-chord algorithm. The propagation direction of the photon packets does not change after their initialisation. Once the optical depth associated with the photon packets has been computed, the “transit area” A_p(λ) of the planet can be found from () A_p(λ) = A_0 + A_annu⟨ 1 - e^-τ⟩|_λ, with A_0 the projected area of the planetary interior and A_annu the area of the atmospheric annulus (extending from the bottom to the top of the model atmosphere). The angle brackets imply an average over all photon packets with wavelength λ. During ingress and egress, we scale down the value of A_p with the fractional overlap (< 1) between the stellar and the planetary disk to obtain the correct transit depth. As in <cit.>, we also compute spectra associated with individual sectors on the limb (see their Fig. 3): the trailing equator, the trailing pole(s), the leading pole(s), and the leading equator. The trailing (leading) equator is the limb region between -45^∘ and +45^∘ latitude that is last (first) to appear in front of the star during ingress. The trailing (leading) poles are the regions between -90^∘ and -45^∘, and +45^∘ and +90^∘ that are last (first) to appear in front of the star during ingress. All sectors span a quarter of the limb, but as shown in Fig. 3 in <cit.>, the poles are disjoint. For a tidally locked planet, the trailing regions rotate towards the observer, while the leading regions rotate towards the star (away from the observer). To compute spectra for each sector, we also use equation <ref>, but we only perform the average over the photon packets impinging on that sector. §.§.§ Modelling spectra in the optical (Fe and TiO signals) In the optical, we model the transit of the four WASP-76b models across the full ESPRESSO wavelength range (0.38–0.79 μm) at a spectral resolution R = 300,000 (>2× the ESPRESSO resolution). This results in a total of ∼220,000 wavelength points[With 10^5 photon packets per wavelength, this means that the total number of photon packets simulated across the spectrum is of the order 10^10.]. For memory-related reasons, we split the computation in two batches of ∼110,000 wavelength points and we stitch the spectra together at the end. 
Since we read all opacity data at once at the start of the simulation, the GPU memory needs to hold the full 3D opacity structure of the atmosphere at each wavelength. In the radiative transfer, we include (continuum) opacities associated with H_2, He, and H scattering, collision-induced absorption (CIA) by H_2-H_2 and H_2-He, and bound-free and free-free transitions of H^-. References to these opacities can be found in Table 2 in <cit.>. Also, we consider the following line species: Fe, Fe ii, K, Na, Ti, Mn, Mg, Cr, Ca ii, TiO, VO, H_2O, and OH. Atomic opacities are taken from the <cit.> database and we apply pressure broadening using a code based on <cit.>. In <cit.>, the atomic opacities were generated with () and no pressure broadening was applied. Furthermore, a line-wing cut-off was imposed, as opposed to our current treatment. The opacities of TiO and VO are from the EXOPLINES database (), and were generated by <cit.> using the TOTO () and the VOMYT () line lists. For H_2O, we use the POKAZATEL line list (). Finally, the OH opacities are taken from (). Compared to <cit.>, we thus make a total of four changes to the radiative transfer. Firstly, we use iron line lists with pressure broadening and no line-wing cut-off, and we use opacities for a larger number of species. Secondly, we account for variations in mean-molecular weight when evaluating the scale height (see Section <ref>). Thirdly, we reduce the spectral resolution from 500,000 to 300,000. Finally, we consider the full ESPRESSO wavelength range instead of a small set of ∼10,000 wavelength points. Fig. <ref> in Appendix <ref> depicts the effect that each of these changes has on the iron signal of the cold-morning-limb model originally presented in <cit.>. The figure shows that the “new” iron opacities and the new resolution do not significantly impact the CCF map. As expected, the new scale heights and the new wavelength range lead to the biggest changes, but the overall trends in the CCF map remain the same. §.§.§ Modelling spectra in the infrared (CO, H_2O, and OH signals) In the infrared, we model the transit of the four WASP-76b models across the full IGRINS wavelength range (1.43–2.42 μm) at a spectral resolution ∼3× the IGRINS resolution. This leads to ∼71,000 wavelength points. We can afford a lower resolution here, as the absorption features of the relevant molecules tend to be intrinsically broader than in the optical and can thus still be resolved. We performed a comparison similar to Fig. <ref> to verify that the Doppler shifts obtained from cross-correlation remain the same (within 0.5 km/s) at higher spectral resolutions. To compute the infrared spectra, we consider the same continuum opacities as in the optical. Additionally, we use the line species CO (), H_2O (), OH (), CH_4 (), CO_2 (), HCN (), and NH_3 (). §.§ Computing observables §.§.§ CCF maps For each transit, we cross-correlate all 25 spectra with a template – see Section 3.6 in <cit.> for the relevant equations. This gives rise to a CCF map with Doppler shift (radial velocity, or RV) as a horizontal coordinate and orbital phase as a vertical coordinate. We compute CCF maps for Fe and TiO (based on the optical spectra), as well as for CO, H_2O, and OH (based on the infrared spectra). To generate the template for a species X, we compute the mid-transit spectrum of the nominal model without Doppler shifts, whereby we only include the opacities of the continuum and X.
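As a schematic illustration of how such a CCF map can be assembled, the Python sketch below cross-correlates each continuum-subtracted spectrum with a Doppler-shifted version of the template on a grid of radial velocities. The array names and the interpolation scheme are illustrative assumptions, not the gCMCRT pipeline itself.

import numpy as np

C_KMS = 299792.458  # speed of light [km/s]

# Schematic CCF-map construction: spectra has shape (n_phase, n_wavelength);
# template and wavelength are 1D arrays. Both spectra and template are assumed
# to be continuum-subtracted already.
def ccf_map(wavelength, spectra, template, rv_grid_kms):
    ccf = np.zeros((spectra.shape[0], len(rv_grid_kms)))
    for j, rv in enumerate(rv_grid_kms):
        shifted_wavelength = wavelength * (1.0 + rv / C_KMS)             # Doppler-shift the template
        shifted_template = np.interp(wavelength, shifted_wavelength, template)
        ccf[:, j] = spectra @ shifted_template                           # cross-correlation per phase
    return ccf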
Before we perform the cross-correlation, we subtract the continuum from both the templates and the spectra. We do this by splitting a spectrum into bins of 1000 wavelength points and fitting a low-order polynomial to the minima of these bins. We then subtract the polynomial from the spectrum to obtain a “flat” baseline. This procedure mimics the steps taken in the analysis of real high-resolution data. We also compute CCF maps associated with the four limb sectors. As demonstrated in <cit.>, the CCF map of the full limb can be interpreted as the sum of the CCF maps of the individual sectors, thanks to the linearity of the cross-correlation. The benefit of this approach is that it allows us to link certain features of the CCF map to specific atmospheric regions. §.§.§ K_p–V_sys maps In most high-resolution datasets, the CCF values associated with individual integrations must be “stacked” across the whole transit to get a strong enough planet detection. A common way to do this is by constructing a K_p–V_sys map (e.g., ). The signal emerging in the map can be seen as a time average, because it is a sum over all orbital phases. Once the CCF map of a certain species is computed, we obtain the corresponding K_p–V_sys map by integrating the CCF values along a curve of the form v(ϕ) = V_sys + K_p sin(ϕ), with v(ϕ) the radial velocity at phase angle ϕ∈ [-15.7^∘, +15.7^∘], V_sys the systemic velocity, and K_p the orbital velocity. In other words: SNR(K_p, V_sys) = (1/ξ)∑_i^N_ϕ CCF(ϕ_i, v(ϕ_i)). In this equation, SNR is the value of the K_p–V_sys map at (K_p,V_sys), ξ is a scaling factor, and N_ϕ the number of simulated transit spectra. For each orbital phase, we obtain the CCF value at v(ϕ_i) by linearly interpolating between the two values at the nearest radial velocities in the CCF map. §.§ Computing absorption regions Following the approach from <cit.>, we also compute absorption regions for each of the atmospheric models (see their Section 3.2.2). The information needed to infer these regions is a byproduct of the radiative transfer. The idea is that the spectrum does not contain any information about parts of the atmosphere where all the light is absorbed (e^-τ∼0) or where all the light is transmitted (e^-τ∼1). Instead, the observation probes the region where the transition from optically thick to optically thin occurs. Hence, given a wavelength λ, we define the absorption region as being spanned by all transit chords that satisfy β < e^-τ < 1 - β. In <cit.>, we opted for β = 0.1 and β = 0.01, and we named the corresponding regions the 10–90% and the 1–99% absorption regions, respectively. The condition β < e^-τ < 1 - β only constrains the extent of the absorption regions in the altitude direction. However, to obtain a region that is finite along the line of sight as well, we only select the central part of the transit chords where the total optical depth increases from βτ to (1-β)τ[As motivated in <cit.>, this definition ensures that an absorption region is symmetric about the limb plane in the limit of a uniform 1D atmosphere.]. These two conditions allow us to infer the approximate regions that are probed by the transmission spectrum at a certain wavelength. Because we “truncate” the transit chords along the line of sight, the extent of the absorption regions is also independent of the (arbitrary) upper model boundary.
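To illustrate how the stacking into a K_p–V_sys map works in practice, the sketch below evaluates the phase sum for each (K_p, V_sys) pair by interpolating the CCF map along the corresponding velocity curve v(ϕ) = V_sys + K_p sin(ϕ). Array names and the normalisation are illustrative assumptions, not the actual analysis code.

import numpy as np

# Schematic K_p-V_sys map: ccf has shape (n_phase, n_rv); rv_grid_kms is the
# radial-velocity axis of the CCF map; phase_angle_rad holds the orbital phase
# angles; xi is a scaling factor as in the text.
def kp_vsys_map(ccf, rv_grid_kms, phase_angle_rad, kp_grid_kms, vsys_grid_kms, xi=1.0):
    out = np.zeros((len(kp_grid_kms), len(vsys_grid_kms)))
    for a, kp in enumerate(kp_grid_kms):
        for b, vsys in enumerate(vsys_grid_kms):
            v_curve = vsys + kp * np.sin(phase_angle_rad)            # v(phi) = V_sys + K_p sin(phi)
            stacked = sum(np.interp(v_curve[i], rv_grid_kms, ccf[i])
                          for i in range(ccf.shape[0]))              # linear interpolation per phase
            out[a, b] = stacked / xi
    return out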
§ RESULTS & DISCUSSION §.§ 3D temperatures, abundances, and line-of-sight velocities Fig. <ref> shows the temperature structure of the four WASP-76b models from Table <ref> in the equatorial plane. As discussed in Section <ref>, the daysides are more “puffy” than the nightsides, owing to their higher temperature and lower mean molecular weight. The daysides also feature a strong thermal inversion. For example, at the substellar point in the nominal model, the temperature increases with altitude, reaching ∼3500 K at 1 mbar. The nightside does not feature a thermal inversion, and this is the reason why the cloud deck mostly spans 10 scale heights in the optically-thick-clouds scenario. At the antistellar point, the temperature drops from ∼1700 K at 1 bar to ∼1000 K at 10 μbar. Fig. <ref> shows the abundances of Fe, CO, H_2O, OH, and TiO across the equatorial plane. All species have a unique 3D spatial distribution. Iron is abundant on the dayside, but absent on the nightside due to condensation. Water, on the other hand, is abundant on the nightside, but subject to thermal dissociation on the dayside (). These “mirrored” chemical distributions imply that iron lines mainly probe the dayside of the planet, while the water lines mainly probe the nightside. The CO abundance is nearly constant across the atmosphere – its value does not vary by more than ∼0.3 dex. Because CO has a strong triple bond between its constituent atoms, it is affected neither by condensation nor by thermal dissociation. In fact, the only ultra-hot Jupiter hot enough to dissociate CO is KELT-9b (). Consequently, the absorption lines of CO are the most reliable gauge of the 3D temperature structure and wind profile of the planet. They only probe spatial variations in temperature and dynamics, and not so much in chemistry. See <cit.> for further discussion. The distribution of OH is a bit more complicated. On the dayside, the molecule forms when water is dissociated into OH and atomic hydrogen. However, higher up in the atmosphere, OH itself also falls prey to thermal dissociation, producing atomic oxygen and atomic hydrogen. As a result, the OH abundance first increases with altitude and then decreases. On the nightside, OH is absent, because hydrogen and oxygen are contained in water at lower temperatures. Finally, TiO is subject to both dissociation on the dayside and condensation on the nightside. Therefore, the only observable TiO is present in a narrow region around the limb where the temperature is lower than the dissociation temperature, but higher than the condensation temperature of TiO. Fig. <ref> shows the line-of-sight velocities v_los due to winds (and planetary rotation) in the equatorial plane and the terminator plane of the nominal and the no-TiO/VO model, at mid-transit. These are the velocities by which the opacities in different cells are Doppler-shifted during the radiative transfer. v_los<0 implies that absorbers are moving towards the observer, imparting a blueshift on the transmission spectrum, while v_los>0 means that absorbers are moving away, inducing a redshift. As illustrated in Fig. <ref>, the nominal model only features day-to-night winds (both planes are completely blueshifted in the top row), such that the only redshift contributions come from rotation (see bottom row). The no-TiO/VO model has an equatorial jet, and this is why half of the equatorial plane is blueshifted, while the other half is redshifted.
Note, though, that the jet only occupies a small region in the terminator plane, spanning an angle of ∼25 degrees at pressures ≲ 1 bar on both limbs (the latitudinal extent of the jet is of the order of the equatorial Rossby deformation radius – see ). However, despite the smaller “effective area” occupied by superrotating winds compared to the day-to-night flow, it may still possible to make inferences about the equatorial jet based on the absorption signal of the full limb (e.g., ). §.§ Prelude: the nominal vs. the cold-morning-limb model Fig. <ref> shows the CCF signals of the nominal model for each of the five chemical species. Remarkably, they all have very similar absorption signatures, except for TiO. However, in the nominal model, the iron signal does not feature the “kink” that has been observed in the real data of WASP-76b ( – see also ). The cold-morning-limb model, on the other hand, does give rise to a kink in the iron signal (blue curve in the right panel of Fig. <ref>), whereby the blueshift (RV < 0) increases during the first half of the transit and remains constant during the second half. For a full discussion of this behaviour, we refer to <cit.>. As opposed to the nominal model, the cold-morning-limb model shows a range of different CCF signals for the five species. In the following sections, our aim is to understand how these CCF signals come about and what physics causes the differences between the models. To build some basic intuition, we start by discussing the CCF maps of the nominal model. Subsequently, we focus on the behaviour of the other three models: the cold-morning limb model, the optically-thick clouds model, and the no-TiO/VO model. §.§ CCF maps for the nominal model Fig. <ref> depicts the CCF maps of the four limb sectors and the full limb of the nominal model, for all species. The CCF maps of the individual sectors can be seen as the “building blocks” of the more complicated absorption signal that emerges from the entire atmosphere. This is because the the CCF map of the full limb is the sum of the maps of the limb sectors. Figs. <ref> and <ref> show the absorption regions that are probed on the trailing part of the equatorial plane by (randomly chosen) line cores of Fe, CO, H_2O, OH, and TiO, respectively. §.§.§ Recap: two important effects for iron <cit.> (see their Fig. 9) showed that there are two important effects that drive the Doppler shift of iron in the nominal model: (i) the variation in the signal strengths of the limb sectors during the transit, and (ii) atmospheric dynamics. In tandem, these effects cause the absorption signal to become increasingly blueshifted during the transit, even though there is no significant thermal or chemical asymmetry between the planet's trailing and leading limbs. On top of this “baseline behaviour”, limb asymmetries (e.g., ) can further enhance changes in the Doppler shift with orbital phase (see Section <ref>). Effect (i) is due to the day-night temperature contrast of the planet, in combination with tidally-locked rotation. Ignoring the contribution from winds, the absorption signal of the redshifted leading limb becomes weaker during the transit, as the dayside rotates out of view. On the other hand, the signal of the blueshifted trailing limb becomes stronger, as the dayside rotates into view. 
Therefore, in a scenario without winds, the absorption signal of the full limb transitions from being mainly redshifted in the first half of the transit to being mainly blueshifted in the second half[See the third row of Fig. 9 in <cit.> (nominal model w/o winds).]. Effect (ii), atmospheric dynamics, impacts the absorption signal of the planet in multiple ways. Firstly, winds shift the whole CCF to negative radial velocities, as the day-to-night flow causes the whole terminator plane to be blueshifted (see Fig. <ref>). Secondly, they “smoothen” the CCF map, resulting in a more gradual change of the net Doppler shift as a function of orbital phase. Finally, the angle between the polar wind vectors and the line-of-sight vector becomes smaller during the transit (see Fig. 10 in ). This projection effect causes the absorption signals associated with the polar sectors to become more blueshifted over time. The signal of the leading equator shows the same behaviour. §.§.§ Fe signals The first row of Fig. <ref> shows the CCF maps for iron. As mentioned in the previous paragraph, the iron signals of the nominal model were already presented in <cit.>, but not for the entire ESPRESSO wavelength range. Although the atmosphere does not feature strong limb asymmetries, the iron lines become progressively more blueshifted during the transit, owing to the effects discussed in the previous section. <cit.> (see their Section 5.1) suggested that the change in signal strength of the limb sectors was because the observation first probes the nightside on the trailing limb, and later the dayside – with the exact opposite occurring on the leading limb. However, Fig. <ref> demonstrates that something else is going on. Inside an iron line core, the absorption region lies on the dayside at every orbital phase. The reason why the signal of the trailing limb becomes stronger, though, is the fact that the projected separation (onto the limb plane) between the absorption region of the line core (in blue) and the absorption region of the continuum (in red) becomes larger during the transit. As shown in the top-left panel of Fig. <ref>, this causes an iron line to become stronger relative to its continuum, which is exactly what the magnitude of the CCF encodes. Therefore, in the case of iron, changes to a sector's signal strength are a consequence of geometry, rather than absorption regions shifting between the dayside and the nightside of the planet. However, the day-night contrast is still crucial for the projection effect to occur. Furthermore, Fig. <ref> illustrates that if iron was uniformly distributed across the atmosphere, its absorption lines would still only probe the dayside (see also the next paragraph about CO). Hence, it is not the 3D chemical map of iron that confines its absorption regions to the dayside. Instead, the scale-height difference between the dayside and the nightside causes a “shielding effect” – because the dayside is more puffy, the absorption lines probe altitudes at which the opacity of the nightside is negligible. §.§.§ CO signals The second row of Fig. <ref> shows the CCF maps of CO. Both the CCF maps and the absorption regions of CO (see Fig. <ref>) display nearly identical behaviour compared to iron. Although the abundance of CO is uniform across the atmosphere, its absorption lines virtually only probe the dayside. That is, the 10-90% absorption regions of the CO line core are situated on the dayside at all orbital phases. 
The reason for this is the “shielding effect” discussed in the previous paragraph. Stellar light rays first encounter the dayside, and the τ∼1 region lies at altitudes where the nightside does not contribute to the total optical depth. For CO, the signal strength of the trailing equator also increases during the transit, again due to a projection effect. Around the CO line plotted in Fig. <ref>, the “continuum” is caused by water absorption. As a result, the absorption region of the continuum behaves exactly like that of water in Fig. <ref> (see also the next paragraph). Interestingly, at ϕ = -11.7^∘ the CO line core and its adjacent continuum probe different sides of the planet. §.§.§ H_2O signals The third row of Fig. <ref> shows the CCF maps of water. Remarkably, the phase-dependence of the signal strengths displays the opposite behaviour compared to iron and CO. For iron and CO, the trailing-equator signal becomes stronger during the transit, but for water it becomes weaker. The absorption regions in Fig. <ref> demonstrate why this is the case. At the start of the transit, a water line core probes the nightside. Then, as the planet rotates, its absorption region shifts towards the dayside. However, because of the lack of water at low pressures (due to thermal dissociation), the absorption region is “pushed” down to higher pressures on the dayside. At these higher pressures, the absorption regions of the line core and the continuum lie closer together, which explains why the signal strength of the trailing limb decreases over the transit. Naturally, the opposite occurs on the leading limb, as shown by the CCF maps of the limb sectors in Fig. <ref>. Based on the behaviour of the individual limb sectors, one would expect the blueshift of the full limb to decrease during the transit. After all, the signal strength of the (least blueshifted) leading sectors becomes stronger over time. However, the reason why the net Doppler shift of the planet does not decrease is that the signals of the leading limb are stronger than those of the trailing limb over almost the entire transit. Therefore, the water signal of the full limb is dominated by the leading sectors, which are subject to an increasing blueshift over time. To build more physical understanding, Fig. <ref> shows the CCF maps of CO and water for two additional realisations of the nominal model: (i) a scenario with rotation only, in which the winds are zero, and (ii) a scenario with a shorter drag timescale , in which the winds are weaker (as they are subject to stronger drag) compared to the original model with . In the rotation-only case, the Doppler shifts of the individual limb sectors are constant during the transit. This means that the phase-dependence of the absorption trail of the full limb is purely governed by the varying signal strengths of the limb sectors. Therefore, in the rotation-only case, we do see that CO and water display the exact opposite behaviour: the CO signal goes from redshifted to blueshifted , while the water signal goes from blueshifted to redshifted. In the scenario with strong drag (second and fourth row in Fig. <ref>), planet rotation still dominates the shape of the absorption signals of the full limb. However, the signals are now more blueshifted because of the prevalence of day-to-night winds. As shown in Fig. <ref>, increasing the drag timescale ( ) causes the “step” feature in the planet's water signal to disappear. 
As previously mentioned, this is because the leading-limb signal becomes stronger than the trailing-limb signal over the entire transit. Hence, it is the planet's 3D wind-profile that is inducing an asymmetry in the nominal model: on the trailing limb, the variance in probed wind speeds is larger, causing the line contrast to become smaller and the CCF of the trailing sectors to become broader. Ultimately, this causes the water signal to look relatively similar to that of iron and CO, even though the water lines probe completely different regions of the atmosphere. Our result is in qualitative agreement with <cit.> and <cit.>, who also found an increasing blueshift for water with their “baseline” 3D models of WASP-76b. Furthermore, <cit.> also reported a step feature in the water signal of one of their magnetic-drag models, hinting at weaker winds and a more visible signature of planet rotation. On a final note, the absorption regions of water become very narrow towards the end of the transit (ϕ = 11.7^∘ in Fig. <ref>). This is because the distance between isobars is smaller at higher pressures. Also, the absorption regions coincide with a steep vertical gradient in the water abundance. Therefore, shifting the transit chord to a lower pressure will result in a sharp decrease in integrated abundance (and thus optical depth τ), while moving it to higher pressures will result in τ≫ 1. §.§.§ OH signals The fourth row of Fig. <ref> shows the CCF maps of OH. In the nominal model, the OH signal resembles that of iron and CO. However, the strength of the OH signal drops by a factor ∼2 during the transit. Also, Fig. <ref> demonstrates that the change in the signal strength of the trailing equator (as well as the other limb sectors) is more extreme than for the other species – at the start of the transit it is almost zero. Fig. <ref> illustrates the cause of this significant variation. Because the nightside is depleted of OH and because the higher-altitude regions on the dayside have a low OH abundance, the absorption regions of the line core and the continuum overlap at the start of the transit. This produces a negligible CCF signal. However, as the planet rotates, the more OH-abundant parts of the dayside rotate into view, and the line contrast increases. §.§.§ TiO signals The bottom row of Fig. <ref> shows the CCF maps of TiO. Whereas the other species clearly show an increasing blueshift during the transit, the blueshift of TiO is decreasing in the nominal model (albeit marginally). The reason for this is that the signal strengths of the limb sectors behave like those of water (see Fig. <ref>). Yet, in contrast to water, the signal of the trailing sectors is strong enough at the beginning to contribute to the Doppler shift of the full limb. At the end of the transit, the CCF map is dominated by the signal from the leading sectors. As shown in the figures, the signal of the (most blueshifted) trailing limb becomes weaker during the transit, as the absorption region of the TiO line core shifts from the nightside to the dayside. We note, though, that the absorption region very much hinges on the TiO abundances in the first column on the nightside (see Fig. <ref>), where the temperature profile allows for TiO to exist at all pressures. Without this column, the absorption regions would have been situated at lower altitudes, resulting in much weaker absorption lines (the column has to exist, though, as the atmosphere transitions from dayside to nightside). 
On the other hand, if the temperature gradient at the terminator was smoother, for example due to H_2 dissociation/recombination (), TiO would have existed across a wider range of longitudes. Therefore, in this model, it is the steepness of the temperature gradient at the terminator that determines whether or not TiO may be observable. §.§ CCF maps for other models Fig. <ref> shows the CCF maps computed for all models and all species (only the CCF maps of the full limb are shown). The colourmaps were normalised per row to allow for inter-model comparisons. Because TiO is cold-trapped in the no-TiO/VO model, it is not observable. Hence, there are 19 maps in total, rather than 20. Fig. <ref> depicts the absorption trails of all species in the same panel, for the nominal model and the cold-morning-limb model. §.§.§ Fe signals As expected, the cold-morning-limb model in Fig. <ref> shows a strong increase in blueshift over the first half of the transit. The reason for this is that the signals of the leading sectors are much weaker compared to the nominal model (see Fig. 13 in ). Hence, the signals of the trailing sectors already start to dominate the sum before mid-transit. After mid-transit, the blueshift remains constant, because the only contributions to the signal come from the trailing sectors. For further discussion, we refer to <cit.>. The CCF map of the optically-thick-clouds model is very similar to that of the nominal model, suggesting that adding a cloud deck has minimal impact on the iron signal. The reason for this is that the absorption regions of iron on the dayside lie at much higher altitudes compared to the optically thick clouds on the nightside (see Figs. <ref> and <ref>). Yet, the signal strength of the full limb does decrease slightly as a result of clouds. This is because the cloud deck lies at higher altitudes than the absorption regions of the continuum in the cloud-free atmosphere. Since the cloud deck is nearly symmetric, the signals of the trailing and leading limbs are affected equally – the line contrast and the magnitude of the CCF become marginally smaller, but the shape does not change. Thus, in this particular model, we find that nightside clouds are unable to mute the iron absorption signal, as it originates from too high altitudes. To make a cloud mute the absorption features of iron, as in <cit.>, it should be located at a significantly higher altitude than the continuum in the cloud-free case. Hence, based on our modelling efforts, we still find that a temperature (or scale-height) asymmetry between the trailing and leading limb is the most likely explanation for the strongly blueshifted iron signals of WASP-76b () and WASP-121b (). The iron signal of the no-TiO/VO model also exhibits stronger blueshifts compared to the nominal model. This is also related to a temperature asymmetry (see Fig. <ref>). Due to a large hotspot shift on the dayside, the 3D temperature structure of the model is lopsided, with the trailing limb being hotter and more extended than the leading limb. Hence, the signal of the blueshifted trailing limb contributes more strongly to the CCF map of the full planet. §.§.§ CO signals The behaviour of CO across the different models is very similar to that of iron (e.g., compare the first and the second rows in Figs. <ref> and <ref>). 
In the cold-morning-limb model, for example, the signal also undergoes a strong increase in blueshift during the first half of the transit, owing to the temperature asymmetry between the trailing and the leading sectors. The CCF map of CO is somewhat affected by the presence of optically thick clouds. This indicates that there are weaker CO lines that probe the atmosphere at lower altitudes, and which are thus muted by the cloud deck. For these weaker lines, the absorption regions are likely to lie partly on the dayside and partly on the nightside, as CO is equally abundant on both hemispheres. With this in mind, the (stronger) CO line considered in Fig. <ref> may not be fully representative. However, note that the blue absorption regions plotted in Fig. <ref> only pertain to the line core – the line wings, which also contribute to the CCF, must probe lower altitudes. §.§.§ H_2O signals The signals of the nominal model, the cold-morning-limb model, and the no-TiO/VO model show the same behaviour, with the no-TiO/VO signal being slightly more blueshifted due to stronger day-to-night winds. The reason for this is that the 3D spatial distribution of water in each of the models is very similar. Water is present on the nightside, as well as at higher pressures on the dayside where the scale heights on the trailing limb and the leading limb are still the same. Consequently, the temperature asymmetries in the cold-morning-limb model and the no-TiO/VO model do not manifest in the CCF maps. In contrast to iron, the presence of optically thick clouds strongly suppresses the water signal. This is because the cloud deck is situated at roughly the same altitude as the water absorption regions. Hence, the line contrast is small and the vast majority of water lines are muted. §.§.§ OH signals The OH signal of the cold-morning-limb model also features an increasing blueshift during the first half of the transit. However, the signal from the leading sectors is very weak. The reason for this is that the colder leading limb is depleted of OH, while the high-altitude part of the dayside that is in view does not have sufficient OH abundance to cause significant absorption. In the second half of the transit, the trailing sectors completely dominate the absorption signal. The same idea holds for the no-TiO/VO model, which does not have any detectable OH on the leading limb. Just like water, OH probes relatively low altitudes. Therefore, the introduction of optically thick clouds heavily mutes the OH absorption lines. At mid-transit, the absorption signal is almost zero. The strongest contributions come from the leading equator around ingress and from the trailing equator around egress. Again, however, it is questionable whether optically thick clouds allow for OH to be detected at all, given that the CCF signal only emerges from a narrow range of orbital phase angles. §.§.§ TiO signals In the cold-morning-limb model, the contribution from the leading equator is zero (not shown in a plot). This is why the TiO signal is more blueshifted than in the nominal model. Additionally, the TiO signal appears to have a more bimodal structure. As a result, the signal is “smeared” over a range of K_p and V_sys values (see Fig. <ref>), which could make TiO harder to detect in this scenario. When optically thick clouds are introduced, TiO absorption is muted in all limb sectors except the leading pole (not shown). This is why the blueshift of the full-limb signal in Fig. <ref> closely resembles that of the leading pole in Fig.
<ref>. §.§ K_p–V_sys maps §.§.§ Systematic peak offsets Fig. <ref> shows the K_p–V_sys maps associated with the CCF maps from Fig. <ref>. Because the absorption signals all exhibit Doppler shifts in the planetary rest frame, the SNR peaks in the K_p–V_sys maps are offset from (0, 0) km/s. All SNR peaks, except for the TiO signal of the nominal model, are consistently located at lower V_sys and lower K_p values than would be expected based on the orbital motion of the planet and the radial velocity of the star. The red dashed curves in Fig. <ref> illustrate why this is the case. These are the curves that give rise to the highest integrated SNR value (equation <ref>). Because all absorption signals are blueshifted on average, the best-fitting curve has a negative horizontal offset, yielding Δ V_sys < 0. Also, because the signals become more blueshifted over time (with the exception of TiO in the nominal model), the slope of the curve is negative, corresponding to Δ K_p < 0. Along the V_sys axis, the offset of the SNR peak is typically a few km/s. To zeroth order, Δ V_sys can be interpreted as the average wind speed across the terminator. The maximum shift we encounter is Δ V_sys = -7 km/s for the iron signal of the no-TiO/VO model. Intuitively, it makes sense that the shifts are the largest for this model, as it has no drag and thus the highest wind speeds. Along the K_p axis, the peak offset can be more significant. Typically, the K_p shift is much larger than the Doppler shift measured from the CCF at any orbital phase. This is because the value of Δ K_p does not encode information about the absolute value of the line-of-sight velocities. Rather, it reflects the rate of change of the planet's Doppler shift during the transit (and thus how steep the slope of the best-fitting absorption trail needs to be). For this reason, signals with a strong phase-dependence show the most extreme values of Δ K_p. For example, both the cold-morning-limb model and the no-TiO/VO model yield Δ K_p≈ -20 km/s for iron and CO. §.§.§ The signature of planet rotation The K_p–V_sys maps in Fig. <ref> (nearly) all show negative K_p and V_sys offsets. However, there are theoretical scenarios in which Δ K_p and/or Δ V_sys can be positive. To explore these scenarios, we revisit the two alternative versions of the nominal model presented in Fig. <ref>. Their K_p–V_sys maps are depicted in Fig. <ref>. The left panels show the SNR peaks of CO and water for the model with rotation only. In this scenario, the SNR peaks acquire a “boomerang” shape. That is, there is a family of (K_p, V_sys) values that “fit” the absorption signal of the planet equally well. Fig. <ref> demonstrates why this is the case. For a model with rotation only, the absorption signal of the planet clearly features two components: a blueshifted component associated with the trailing limb and a redshifted component associated with the leading limb. Such a signal can be described by different trails that give rise to roughly the same integrated SNR: a trail fitting the trailing limb (Δ K_p≈ 0; Δ V_sys<0), a trail fitting the leading limb (Δ K_p≈ 0; Δ V_sys>0) and a trail that fits both components (large K_p offset; Δ V_sys≈0). The latter has a negative slope for CO (Δ K_p < 0), but a positive slope for water (Δ K_p > 0), owing to the 3D distribution of these species across the atmosphere. The right panels in Fig. <ref> show how the K_p–V_sys maps of the planet change when weak winds are added to the model (the “strong drag” scenario).
Because of the presence of day-to-night winds, the planet signal becomes more blueshifted and the SNR peaks shift to negative Δ V_sys. Also, the “boomerang” shape partly disappears, as winds tend to make the absorption trail of the planet smoother. However, especially for water, there are still a wide range of combinations that fit the absorption signal of the planet well. §.§.§ Estimating the K_p shift of a planet due to rotation In the scenario where planet rotation is dominating the Doppler shift of the planet, we can estimate the K_p shift imposed on the planet signal. To this end, we assume that the absorption signal is dominated by the leading limb at the start of the transit and by the trailing limb at the end of the transit. During transit, it holds that cos(ϕ) ≈ 1, such that the change in radial velocity ΔRV between two phases due to orbital motion is ΔRV = 2 π K_p( Δϕ/360^∘), with Δϕ the phase difference (in degrees). Therefore, in the planetary rest frame, the K_p shift resulting from planet rotation can be computed from Δ K_p = ΔRV/2π( 360^∘/Δϕ), with ΔRV the radial-velocity difference between the trailing and the leading limb, and Δϕ the phase difference between ingress and egress. The most extreme RV value that can be acquired by both limbs is ± v_eq, the rotational velocity of the planet at the equator. Hence, a rough approximation[In reality, the average Doppler shift across the limb will be smaller than v_eq, as regions away from the equator lie closer to the rotation axis. Nonetheless, Fig. <ref> demonstrates that the assumption ΔRV = 2v_eq is not too unrealistic, as the peaks of the CCFs of the full limb lie relatively close to v_eq (± 5.3 km/s). This is because the signal from the equatorial sectors is stronger than that of the polar sectors.] is ΔRV≈±2v_eq, resulting in Δ K_p≈±v_eq/π( 360^∘/Δϕ). Invoking v_eq = 2π R_p / P, Δϕ = 2 arcsin(R_*/a) ≈ 2 R_*/a, 360^∘ = 2π rad, and Kepler's third law, we can also write equation <ref> as Δ K_p≈±R_p/R_*( 2π GM_* /P)^1/3, with R_p the planet radius, P the orbital period, a the semi-major axis of the orbit, R_* the stellar radius, M_* the stellar mass, and G the gravitational constant, respectively. For signals dominated by the leading limb in the first half of the transit and by the trailing limb in the second half (e.g., Fe and CO), Δ K_p will be negative. For signals dominated by the trailing limb in the first half of the transit and by the leading limb in the second half (e.g., H_2O), Δ K_p will be positive. Hence, the sign of Δ K_p depends on the 3D distribution of a species across the atmosphere. Evaluating equation <ref> for the parameters of WASP-76b[See e.g., ], we find Δ K_p≈ ±21 km/s, which is in rough agreement with the K_p shifts reported for CO and water in Fig. <ref>. For WASP-121b, another well-studied ultra-hot Jupiter, we find Δ K_p≈ ±28 km/s. This demonstrates that the K_p offsets observed for a planet can be much larger than the actual line-of-sight velocities in its atmosphere. §.§.§ Comparison to transit observations A considerable number of HRS observations of ultra-hot Jupiters have revealed peak offsets in K_p–V_sys maps that hint at atmospheric dynamics and/or 3D spatial variations in temperature and chemistry. <cit.> and <cit.> presented K_p–V_sys maps for a plethora of species in the atmosphere of WASP-76b (H, Li, Na, Mg, K, Ca ii, V, Cr, Mn, Co, Ni, Sr ii, VO, Ca, Ba ii, O, Fe, and Fe ii). 
For the vast majority of these species, they reported negative K_p and V_sys offsets, which is in good agreement with this work (see Fig. <ref>). Note that many of the species observed in the optical are refractories and alkalis, which are abundant on the dayside of the planet. Therefore, their absorption signals should behave in the same way as those of iron, CO, and OH modelled in this work. For species such as H, O, and Ca ii, <cit.> and/or <cit.> found positive K_p offsets. This is because the absorption lines of these species probe higher regions of the atmosphere that are likely prone to atmospheric escape (e.g., ). Such physics is not included in our model. In the infrared, CARMENES observations of WASP-76b revealed positive K_p offsets for H_2O, HCN (+50 km/s, ), and OH (+35 km/s, ), suggesting a decreasing blueshift of the absorption lines over the course of the transit. For water, our models are able to produce a positive Δ K_p when the line-of-sight velocities are dominated by planet rotation (see Fig. <ref>). However, the expected offset would be of the order of +20 km/s in this scenario. A +50 km/s shift in K_p is hard to explain with our current framework. As for OH, our models predict Δ K_p to be negative, rather than positive. Further observations[The studies by <cit.> and <cit.> were based on the same archival CARMENES data, so further observations would be needed to rule out the presence of any systematics in the dataset.] and/or modelling studies will be required to elucidate the differences between our models and the findings by <cit.> and <cit.>. Optical transmission observations of WASP-121b have shown negative K_p and V_sys offsets for Fe (), Cr, V, Fe ii (), Ca, K, Co, Cu, V ii, Ti ii, Mg, and Sc ii (). Many of the more “exotic” species reported by <cit.> only showed weak or tentative detections, so their (K_p, V_sys) values should be treated with caution. However, the observations demonstrate that the majority of refractories and alkalis undergo increasing blueshifts during the transit, just like on WASP-76b. <cit.> also reported a few species with Δ K_p≈ 0 km/s and Δ V_sys<0 km/s (Mn, Co ii, Ni), which could imply that these species are only observable on the trailing limb of the planet. More recently, <cit.> also recovered negative K_p and V_sys offsets when cross-correlating ESPRESSO data of WASP-121b with a template containing Fe, Mg, Cr, Ti, V, Na, and Ca lines. For Ca ii, <cit.> found a positive Δ K_p, again indicating that its absorption lines probe higher regions of the atmosphere with different dynamics. For KELT-20b/MASCARA-2b, <cit.> and <cit.> reported “double-peak” features in the K_p–V_sys maps of neutral iron. These could hint at the fact that planet rotation is the dominant contributor to the line-of-sight velocities, such that the absorption signal is made up of separate components associated with the trailing and leading limb, respectively (as in Figs. <ref> and <ref>). What is puzzling, however, is that the SNR peaks lie 70–80 km/s apart along the K_p axis, while equation <ref> only predicts Δ K_p≈ 20 km/s for KELT-20b. Furthermore, <cit.> observed five transits of KELT-20b, and only in two transits was the double-peak feature recovered. Other transiting ultra-hot Jupiters for which peak offsets in K_p–V_sys maps were found are WASP-189b (), HAT-P-70b (), and KELT-9b (). For 18 species present in the atmosphere of KELT-9b, <cit.> extracted K_p values spanning a range of 60 km/s (see their Fig. 6).
Interpreting their observations with our current set of models is hard, as the equilibrium temperature of KELT-9b is roughly two times that of WASP-76b. §.§.§ A note on high-resolution retrievals of ultra-hot Jupiters In this work, we showed that different species are subject to different Doppler shifts and K_p–V_sys offsets in transmission. At the moment, retrieval frameworks typically include one Δ K_p and one Δ V_sys parameter to describe the “bulk” Doppler shift of the entire spectrum as a function of phase (). In the optical, such an approach is justified for the vast majority of species (i.e., most alkalis and refractories) as these are expected to have similar distributions across the atmosphere, resulting in similar K_p–V_sys offsets (e.g., Figs. 1 and 9 in ). However, as noted by <cit.> and <cit.>, care should be taken with species that probe the planet's exosphere (e.g., H, O, Fe ii, Mg ii, and Ca ii). Both studies excluded these species from their retrievals, as the exosphere is non-hydrostatic (impacting line strengths) and features strong outflows (impacting line shapes and positions). A 1D retrieval model with one set of parameters and a single scale factor (a parameter controlling the line strengths of the model) cannot account for the behaviour of all species at the same time. Following <cit.>, a good practice for high-resolution retrievals would be to plot the K_p–V_sys maps of all species to be included in the forward model, and examine their peak offsets. If the peak offsets of two (groups of) species are substantially different, they may require their own set of (Δ K_p, Δ V_sys) parameters. Another option is to run a separate retrieval for each (group of) species. The latter does not increase the complexity of the forward model, but doubles the computing time. In the infrared, things are more intricate than in the optical as water and CO – the two most prominent species – probe completely different parts of the atmosphere (see Fig. <ref>), each with their own temperature, dynamics, and scale height. Therefore, fitting the same Δ K_p, Δ V_sys, and temperature profile to the absorption lines of both species may be problematic. The most straightforward solution would be to run two separate retrievals for water and CO. Contrary to CO, which probes the dayside during the entire transit, the absorption regions of water shift across the terminator as a function of orbital phase. On the trailing limb, the absorption regions shift from the nightside to the dayside, while they shift from the dayside to the nightside on the leading limb. Therefore, water would be the ideal molecule to study with a 2D retrieval model (), which is able to assign separate abundances to the trailing and leading limb of the planet, respectively. § CONCLUSION Developing a deeper understanding of the “3D-ness” of exoplanet atmospheres is crucial to fully leverage the information content of both their high-resolution and low-resolution spectra. With JWST delivering its first data (e.g., ) and a new generation of ground-based telescopes (E-ELT, GMT, TMT) on the horizon, modelling studies that bridge the gap between theory and observation play an essential role in the interpretation of current and future observations. In this work, we simulated the cross-correlation signals of Fe, CO, H_2O, OH, and TiO for four different 3D models of a benchmark ultra-hot Jupiter (WASP-76b) in transmission.
Because ultra-hot Jupiters show extreme spatial variations in temperature and chemistry across their terminators, their transmission spectra contain a wealth of information about the 3D structure of the atmosphere. VLT/ESPRESSO and GEMINI-N/MAROON-X are able to phase-resolve the absorption signals of ultra-hot Jupiters in the optical (). With novel spectrographs such as GEMINI-S/IGRINS () and VLT/CRIRES+ (), this will now also be possible in the infrared. Moreover, once the E-ELT is on sky, phase-resolving the CCF will become standard practice for any high-resolution observation of a hot gas giant, as the signal-to-noise will be high enough to detect the planet in only a fraction of a transit. Also, the E-ELT will offer the opportunity to take ingress and egress spectra, whereby only a part of the planet disk is blocking the star. We summarise our most important findings below: ∙ For species that probe the dayside of an ultra-hot Jupiter (refractories like Fe, or stable molecules like CO and OH), the net blueshift should increase during the transit, resulting in a negative K_p offset. This holds even in the absence of an east-west asymmetry (e.g., due to a hotspot offset). The increasing blueshift is due to the combined effect of the 3D spatial distribution of the species and planet rotation. Our findings are in good agreement with optical high-resolution observations of WASP-76b and WASP-121b (e.g., ). Conversely, for species that probe the nightside (such as H_2O and TiO), their 3D spatial distribution and planet rotation act in an opposite manner. Depending on the 3D wind profile of the planet, this can lead to weaker blueshifts with orbital phase, or even increasing redshifts. Such behaviour results in a positive K_p offset. ∙ The K_p offset of a species reflects the rate of change of its Doppler shift in the planetary rest frame. Therefore, as opposed to Δ V_sys (which is of the same order as the wind speeds), Δ K_p can be much larger than the line-of-sight velocities in the planet's atmosphere at any time. Δ K_p < 0 when the Doppler shift becomes more negative during the transit, while Δ K_p > 0 when the Doppler shift becomes more positive. In this work, we derived a formula to estimate the typical K_p offset of a planet. For WASP-76b and WASP-121b, Δ K_p can be as large as ± 21 km/s and ± 28 km/s, respectively. ∙ When performing atmospheric retrievals on transmission spectra of ultra-hot Jupiters, separate temperature profiles and (Δ K_p, Δ V_sys) values should be retrieved for species that probe the dayside and the nightside of the atmosphere, respectively (e.g., CO and H_2O in the infrared). Our analytical formula can provide a reasonable prior for the range of possible departures from a planet's orbital K_p value. ∙ For WASP-76b, our nominal GCM model does not predict strong differences between the cross-correlation signals of Fe, CO, H_2O and OH in transmission. However, our model with a colder morning limb, which produces the same “kink” feature as seen in the data ( ), predicts a more diverse set of absorption signals for the chemical species studied. We conclude that observing the phase-dependent absorption signal of multiple species that probe distinct parts of the atmosphere allows one to differentiate between two models that fit the signal of a single species equally well. ∙ Even though CO is uniformly distributed across the atmosphere of an ultra-hot Jupiter, it predominantly probes the dayside. This is because of a “shielding effect”.
Since the dayside is more extended than the nightside, CO absorption happens at high altitudes on the dayside where the nightside contribution to the optical depth is zero. ∙ H_2O absorption lines can be strongly muted by optically-thick clouds on the nightside of ultra-hot Jupiters. On the other hand, nightside clouds will not have a big impact on the absorption signals of Fe and CO, as these species probe higher altitudes on the dayside. § ACKNOWLEDGEMENTS We are grateful to Ray Pierrehumbert for sharing computing resources. We also thank David Ehrenreich and Ray Pierrehumbert for insightful discussions. JPW sincerely acknowledges support from the Wolfson Harrison UK Research Council Physics Scholarship and the Science and Technology Facilities Council (STFC). Finally, we thank the anonymous referee for thoughtful comments that helped improve the quality of the manuscript. § DATA AVAILABILITY The data and models underlying this article will be shared on reasonable request to the corresponding author. § IMPACT OF NEW MODELLING APPROACHES ON THE CCF MAP Fig. <ref> shows a comparison between the CCF map of iron obtained for the cold-morning-limb model in <cit.> (top left) and the cold-morning-limb model from this work (bottom right). The underlying atmosphere is the same, but a few changes were made to the radiative transfer: (i) accounting for scale-height differences due to hydrogen dissociation, (ii) including opacities for more species, and using iron line lists with pressure broadening and no line-wing cut-off, (iii) increasing the wavelength range, and (iv) decreasing the resolution (see Section <ref>). In summary, we find that changes (iii) and (iv) have the biggest impact on the CCF map. However, the overall behaviour of the iron signal proves robust – an increasing blueshift and signal strength over the course of the transit, with the blueshift remaining constant at about -8 km/s after mid-transit.
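As a quick numerical check on the analytical estimate of the rotational K_p offset derived in the main text, Δ K_p ≈ ±(R_p/R_*)(2π G M_*/P)^1/3, the short Python sketch below evaluates the formula for WASP-76b- and WASP-121b-like systems. The stellar and planetary parameters used here are approximate literature values inserted purely for illustration; they are not values adopted elsewhere in this work.

import numpy as np

G = 6.674e-11                        # gravitational constant [m^3 kg^-1 s^-2]
R_sun, M_sun = 6.957e8, 1.989e30     # solar radius [m], solar mass [kg]
R_jup, day = 7.149e7, 86400.0        # Jupiter radius [m], day [s]

def delta_kp(R_p, R_star, M_star, P):
    """Rotational K_p shift |Delta K_p| ~ (R_p/R_star) * (2 pi G M_star / P)^(1/3)."""
    return (R_p / R_star) * (2.0 * np.pi * G * M_star / P) ** (1.0 / 3.0)

# Approximate system parameters (assumed, for illustration only)
wasp76b  = dict(R_p=1.85 * R_jup, R_star=1.76 * R_sun, M_star=1.46 * M_sun, P=1.81 * day)
wasp121b = dict(R_p=1.86 * R_jup, R_star=1.46 * R_sun, M_star=1.36 * M_sun, P=1.27 * day)

for name, pars in [("WASP-76b", wasp76b), ("WASP-121b", wasp121b)]:
    print(f"{name}: |Delta K_p| ~ {delta_kp(**pars) / 1e3:.0f} km/s")
# With these placeholder parameters the result is of order 21 km/s and 28 km/s,
# consistent with the values quoted in the main text.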
http://arxiv.org/abs/2307.05464v1
20230711175033
Quantum noise dynamics in nonlinear pulse propagation
[ "Edwin Ng", "Ryotatsu Yanagimoto", "Marc Jankowski", "M. M. Fejer", "Hideo Mabuchi" ]
quant-ph
[ "quant-ph", "physics.optics" ]
These authors contributed equally to this work. Physics & Informatics Laboratories, NTT Research, Inc., Sunnyvale, California 94085, USA E. L. Ginzton Laboratory, Stanford University, Stanford, California 94305, USA These authors contributed equally to this work. Physics & Informatics Laboratories, NTT Research, Inc., Sunnyvale, California 94085, USA E. L. Ginzton Laboratory, Stanford University, Stanford, California 94305, USA School of Applied and Engineering Physics, Cornell University, Ithaca, New York 14853, USA Physics & Informatics Laboratories, NTT Research, Inc., Sunnyvale, California 94085, USA E. L. Ginzton Laboratory, Stanford University, Stanford, California 94305, USA E. L. Ginzton Laboratory, Stanford University, Stanford, California 94305, USA E. L. Ginzton Laboratory, Stanford University, Stanford, California 94305, USA The propagation of ultrafast pulses in dispersion-engineered waveguides, exhibiting strong field confinement in both space and time, is a promising avenue towards single-photon nonlinearities in an all-optical platform. However, quantum engineering in such systems requires new numerical tools and physical insights to harness their complicated multimode and nonlinear quantum dynamics. In this work, we use a self-consistent, multimode Gaussian-state model to capture the nonlinear dynamics of broadband quantum fluctuations and correlations, including entanglement. Notably, despite its parametrization by Gaussian states, our model exhibits nonlinear dynamics in both the mean field and the quantum correlations, giving it a marked advantage over conventional linearized treatments of quantum noise, especially for systems exhibiting gain saturation and strong nonlinearities. Numerically, our approach takes the form of a Gaussian split-step Fourier (GSSF) method, naturally generalizing highly efficient SSF methods used in classical ultrafast nonlinear optics; the equations for GSSF evaluate in 𝒪(M^2log M) time for an M-mode system with 𝒪(M^2) quantum correlations. To demonstrate the broad applicability of GSSF, we numerically study quantum noise dynamics and multimode entanglement in several ultrafast systems, from canonical soliton propagation in third-order (χ^(3)) waveguides to saturated χ^(2) broadband parametric generation and supercontinuum generation, e.g., as recently demonstrated in thin-film lithium niobate nanophotonics. Quantum noise dynamics in nonlinear pulse propagation Hideo Mabuchi August 12, 2023 ===================================================== § INTRODUCTION The concept of shot noise is a prevailing paradigm for understanding fundamental quantum fluctuations in electromagnetic radiation, arising from the discrete nature of photons <cit.>. As photonic devices push towards the ultimate limits of energy efficiency, however, quantum fluctuations become an increasingly ubiquitous and limiting factor in their operation, and the potential emergence of non-Poissonian and correlated photon statistics—e.g., squeezing <cit.>, photon (anti-)bunching <cit.>, and quantum diffusion of optical pulses <cit.>—necessitates a more sophisticated treatment of quantum noise. At the same time, properly harnessing such nonclassical phenomena presents major opportunities in photonics research, with applications from quantum-enhanced metrology <cit.> to quantum information processing <cit.>. 
An emerging but promising approach for accessing this nonclassical regime is the use of dispersion-engineered nonlinear nanophotonics <cit.>, where the spatial <cit.> and temporal confinement of light to ultrashort pulses propagating in sub-wavelength waveguides significantly enhances the nonlinear polarization produced per photon. In principle, such devices can access single-photon nonlinearities <cit.>, in which full quantum models are needed to describe photon correlations <cit.>. However, even as experimental efforts advance towards this critical milestone, many transitional and practically important devices, from high-gain parametric amplifiers <cit.> to low-power microcombs <cit.>, are expected to operate in a more intermediate, semiclassical regime, where it suffices to account for first- and second-order (i.e., Gaussian) correlations in the quantum fluctuations. A complete understanding of these leading-order quantum effects is also vital for navigating the classical-quantum transition, allowing us to conceptually interpolate between these regimes and facilitating the development of hybrid semiclassical-quantum models <cit.>. The systematic treatment of Gaussian quantum correlations has recently been formalized into the language of Gaussian-state quantum optics <cit.>, a framework that describes the action of basic linear components such as squeezers, beamsplitters, phase shifters, etc., as discrete Gaussian operations on Gaussian states. From a classical perspective, however, the dynamics of light are much richer than a Gaussian-state formalism based on discrete operations might suggest. Broadband fields evolving under nonlinear partial differential equations prescribed by Maxwell's equations, e.g., the Lugiato-Lefever equation <cit.> and the nonlinear Schrödinger equation <cit.>, can support a rich phenomenology of emergent multimode dynamics, such as rogue waves <cit.>, chaos, and solitons <cit.>, enabling breakthrough technologies such as optical frequency (micro-)combs <cit.> in the process. As quantum fluctuations become increasingly relevant to the operation of highly multimode and nonlinear devices, we require a unified framework <cit.> that leverages the mathematical efficacy of multimode Gaussian-state models while capturing the physical expressivity of nonlinear ultrafast dynamics. The demand is especially acute for guiding the development of emerging platforms like thin-film lithium niobate (TFLN) nanophotonics <cit.>, which is already anticipating a regime of attojoule-level, femtosecond nonlinear optics in next-generation devices. In this paper, we show how the framework of multimode Gaussian quantum optics <cit.> can be integrated with the nonlinear dynamics of ultrafast pulse propagation, allowing us to study the roles quantum fluctuations play even in current-generation devices. Our approach is a natural Gaussian-state generalization of the classical split-step Fourier (SSF) method used in nonlinear ultrafast optics, modified to systematically treat multimode quantum noise, correlations and entanglement on the same dynamical footing as the mean field, without the use of ad hoc noise models or Monte-Carlo techniques. In contrast to conventional linearized treatments such as undepleted-pump approximations <cit.>, our Gaussian SSF (GSSF) approach uses a self-consistent Gaussian-state approximation to the quantum dynamics <cit.> to take into account nonlinear corrections to the mean-field dynamics induced by quantum fluctuations.
These corrections are necessary to ensure energy conservation in the high-efficiency, low-energy regimes of nonlinear nanophotonics, where saturation energies are orders of magnitude lower than in bulk or fiber optics and linearized models are often inadequate. We apply our method to numerically study the dynamics of quantum noise in several illustrative examples from nonlinear ultrafast optics. Using the GSSF version of the nonlinear Schrödinger equation, we study the canonical Kerr soliton and show how multimode quantum fluctuations can destabilize the classical waveform. We also look at optical parametric generation <cit.>, in which intense squeezing of a signal pulse results in pump depletion solely due to parametric fluorescence <cit.>. Finally, we simulate supercontinuum generation based on broadband, saturated second-harmonic generation <cit.> and analyze how quantum entanglement of the octave-spanning frequency comb affects the quantum noise limit for the detection of carrier-envelope-offset beat notes in f-2f interferometry. Notably, the latter two examples involve device parameters that have already been demonstrated experimentally using TFLN waveguides <cit.>, underscoring the utility of our GSSF framework for engineering ultrafast quantum nonlinear devices. § GAUSSIAN APPROXIMATION OF NONLINEAR DYNAMICS To illustrate our scheme and compare it with other approaches, we consider one of the simplest nonlinear optical models, the single-mode Kerr Hamiltonian Ĥ_Kerr1/2ħ g â^†2â^2 (note we use the notation for definitions, and ∂_y x x / y). Physically, Ĥ_Kerr can be seen as describing a single trapped mode in a cavity experiencing self-phase modulation, and the Heisenberg equation of motion for its quantum dynamics is ∂_gtâ=â^†â^2. Note that such a single-mode model does not inherit the modeling challenges intrinsic to multimode quantum dynamics of pulse propagation, and thus, it should be seen as only a toy model in the context of this work. Nevertheless, as we show in this section, we can still obtain useful insights translatable to generic multimode scenarios through the studies of the single-mode toy model. Our approach is to assume that the system can be well described by a Gaussian state characterized by the mean â and covariances â^†â and â, where δââ - â is the fluctuation operator corresponding to â. Intuitively, the mean corresponds to (or generalizes) the classical field, while the covariances describe the statistics of the system's quantum noise. Of course, since Ĥ_Kerr is a nonlinear Hamiltonian, the dynamics in principle can generate non-Gaussian features in the state. Here, we are primarily interested in a systematic approach for neglecting such non-Gaussian features in order to arrive at a Gaussian approximation of the dynamics, which physically is well justified outside regimes of strong single-photon nonlinearities. To derive the equations of motion for the mean field, we take expectations on both sides of the Heisenberg equation of motion, to obtain ∂_gtâ = â^†â^2. The righthand side involves an expectation over a higher-order product of operators, for which we require a suitable approximation. We now describe one conventional approach for dealing with this problem, which we call the linearized treatment. We then generalize this treatment with a nonlinear Gaussian model to include nonlinear corrections. §.§ Linearized treatment In the linearized treatment, we make two main assumptions to simplify (<ref>). 
First, we assume the state is well approximated by a coherent state, so we can write â^†â^2↦â^†â^2. As a result, we immediately recover the (classical) mean-field equation of motion for a Kerr cavity, ∂_gtâ = â^†â^2, which can now be solved without employing any knowledge of how the quantum noise evolves in the system. Second, for the covariance equations, we discard any terms on the righthand side which are second-or-higher-order in the fluctuation operators. This generates a linearized equation of motion for the fluctuation operator ∂_gtδâ = â^2 δâ^† + 2â^†âδâ. From this, it follows that the covariances also evolve linearly, according to ∂_gtâ = â^2 δâ^†δâ + δâδâ^† + 4â^†ââ ∂_gtâ^†â = â^2 δâ^†2 - â^†^2 â, which, together with (<ref>), constitute the dynamics under the linearized treatment of Ĥ_Kerr. In fact, we can analytically solve these equations: For an initial coherent state with â = α_0, we have for the mean â = α_0 -τ and for the covariances â = --2τ(τ^2+τ), â^†â = τ^2, where we have defined τα_0^2 gt. In the linearized treatment, there is an asymmetry or separation of scales between the classical and semiclassical dynamics: While the evolution of the quantum fluctuations are driven by the evolution of the mean field, the mean itself evolves purely classically and is unaffected by the quantum noise. This inherent inconsistency is often acceptable in situations where a very large mean field is required to produce even modest amounts of squeezing, but it can lead to unphysical consequences, such as violation of photon-number (energy) conservation, in more mesoscopic regimes of operation. In this case, the mean photon number under the linearized dynamics is n̅â^2 + â^†â = α_0^2(1 + τ), which is clearly increasing with time. In Fig. <ref>, we show the evolution of an initial coherent state under Ĥ_Kerr comparing the exact quantum dynamics (dotted line) to the linearized treatment (dashed lines). We see that linearization overestimates both the photon number and the variances of the quantum fluctuations, especially at later times. As hinted in the figure, however, such issues can be mitigated by turning to a self-consistent nonlinear Gaussian model. §.§ Nonlinear Gaussian model Using again the simple single-mode Ĥ_Kerr, we now outline the essential ingredients for an alternative approach based on a self-consistent Gaussian-state approximation. Our goal is again to derive equations of motion for the mean â and the covariances â^†â and â, but, here, we keep all terms in intermediate calculations and only apply Gaussian-state assumption at the end, after expanding higher-order moments as needed. Rather than (<ref>), we instead have the exact expression â^†â^2 = â^†â^2 + 2ââ^†â + â^†â + δ a^†δâ^2. With this, the mean-field equation of motion becomes ∂_gtâ = â^†â^2 + 2â⟨δâ^†δâ⟩ + â^†â, where the only approximation we have made is that δâ^†δâ^2 = 0, which is necessarily true for a Gaussian state since this term is an odd-order central moment. In contrast to (<ref>), the equation of motion for the mean now involves the covariances. To obtain the equation of motion for the covariances, we also first obtain the dynamics of the fluctuations. However, without making the linearization approximation, the exact form of (<ref>) is instead ∂_gtδâ = â^2 δâ^† + 2â^†âδâ + δâ^†δâ^2 + 2âδâ^†δâ - â^†â + â^†δâ^2 - â. We next utilize the chain rule, taking care to preserve operator ordering, via ∂_tẑ_1ẑ_1 = (∂_tδẑ_1)(δẑ_2) + δẑ_1 (∂_tδẑ_2). 
Applying this to, e.g., the equation of motion for â, ∂_gtâ = â^2 â^†â + â^2 ââ^† + 4â^†ââ + δâ^†δâ^3 + δâδâ^†δâ^2, where the only approximation we have made is again the elimination of odd central moments, i.e., the terms stemming from the second line of (<ref>). Compared to (<ref>), we see that there are fourth-order correction terms [ Interestingly, it turns out in that in the single-mode case, such higher-order corrections only affect the equation for â, and in fact (<ref>) is unchanged. However, in the general multimode case, higher-order moments enter into the evolution of all non-diagonal moments in general. ]. Finally, we require one additional step in order to simplify the fourth-order moments occurring in the second line above, as this term is not generally zero for a Gaussian state. However, for a Gaussian state, it turns out that even higher-order central moments can be decomposed into sums of products of covariances only. In particular, for this case, we can use the expansion [ More generally, for a Gaussian state, δẑ_1 ⋯δẑ_n = ∑_p ∈ℙ_n∏_(i,j) ∈ pẑ_iẑ_j, where ℙ_n denotes the set of all order-preserving pair partitions of {1, …, n}. For example, for n = 4, the elements of ℙ_4 are the 3 pair partitions {(1,2),(3,4)}, {(1,3),(2,4)}, and {(1,4),(2,3)}. Taking these elements in the sum-of-products produces (<ref>). ] δẑ_1 δẑ_2 δẑ_3 δẑ_4 = ẑ_1ẑ_2ẑ_3ẑ_4 + ẑ_1ẑ_3ẑ_2ẑ_4 + ẑ_1ẑ_4ẑ_2ẑ_3. With this decomposition, we can now show that ∂_gtâ = â^2δâ^†δâ + δâδâ^† + 4â^†ââ ∂_gtâ^†â = â^2δâ^†2 - â^†2â, which, together with (<ref>), constitute the nonlinear Gaussian model for Ĥ_Kerr. Note that in (<ref>), we use the shorthands â^2 = â^2 + â and â^†â = â^†â + â^†â to highlight similarities to the linearized treatment (<ref>). At the same time, we clearly see the distinction as well: The linearized treatment effectively assumes that â≪â^2 and â^†â≪ |â|^2 in evaluating the evolution of the variances. We emphasize that the coupled equations (<ref>) and (<ref>) are nonlinear differential equations, describing nonlinear evolution of the Gaussian moments. As first pointed out in Ref. <cit.>, such models, while approximate, can capture a wider set of physical behaviors than linearized approximations, where the Gaussian moments follow strictly linear dynamics. To distinguish the two, we therefore refer to such models as nonlinear Gaussian-state models. We also note that, in the single-mode case, these nonlinear equations are consistent with those derived using a similar approach in Ref. <cit.>. Finally, we can show that, in contrast to the linearized treatment, the nonlinear Gaussian-state model preserves photon number. This can be seen by computing ∂_t n̅ = â^† (∂_tâ) + (∂_tâ^†) â + ∂_t â^†â = 0. In fact, as shown in Fig. <ref>, the nonlinear Gaussian-state model exactly tracks the photon number of the correct quantum model, while providing a more faithful estimate of the variances compared to the linearized treatment. § QUANTUM NOISE PROPAGATION IN A CHI(3) WAVEGUIDE Extending this method to the broadband and multimode setting, we now consider a 1D waveguide with a non-dispersive third-order χ^(3) nonlinearity. Theoretically, a continuum treatment of such a system can be quantized by introducing field operators ψ̂_z which obey continuum commutation relations []ψ̂_z,ψ̂_z'^† = δ(z-z') and annihilate the quantized photon-polariton field of the medium at some spatial position z. 
Using these field operators together with their Fourier duals Ψ̂_k ≔∫ dz e^{-ikz}ψ̂_z, we consider a χ^(3) Hamiltonian Ĥ_χ^(3)≔Ĥ_4wm + Ĥ_lin, where Ĥ_4wm≔ħ/2∫ dz g ψ̂^†2_z ψ̂_z^2, Ĥ_lin≔ħ∫ dk/2π Ω(k) Ψ̂_k^†Ψ̂_k. Here, g is a coupling rate related to the nonlinearity, while Ω(k) describes the linear dispersion of the field. In this work, we are interested in the copropagating envelope of a pulse but not the carrier nor the absolute group velocity. Thus, if ω(k_0+k) is the bare frequency of a monochromatic mode Ψ̂_k with wavevector offset by k from the carrier's at k_0, then we define Ω(k) ≔ω(k_0+k) - [ω(k_0) + kω'(k_0)]. That is, we interpret Ψ̂_k in a frame rotating at ω(k_0+k) - Ω(k) = ω(k_0) + kω'(k_0), and ψ̂_z acts on a relative position z comoving at ω'(k_0). The Heisenberg equation of motion for ψ̂_z generated by Ĥ_χ^(3) is ∂_t ψ̂_z = -i[g ψ̂_z^†ψ̂_z^2 + Ω(-i∂_z)ψ̂_z]. The mean-field version of this equation, obtained by formally replacing ψ̂_z with a c-number function ψ(z), is the usual classical equation of motion for a mean-field waveform ψ(z) in a χ^(3) waveguide with linear dispersion, of which the famous nonlinear Schrödinger equation (NLSE) is a special case when Ω is expanded to second order <cit.>. The two terms representing the χ^(3) nonlinearity and linear dispersion above are respectively generated by the two Hamiltonians Ĥ_4wm and Ĥ_lin. They each take a simple local form in (<ref>) only when respectively expressed in position and momentum space, which are Fourier dual to one another; consequently, Ĥ_4wm and Ĥ_lin do not commute in general. In numerical methods, it is well-known that such a situation can be effectively treated with a split-operator approach: Instead of trying to evolve the system under both Hamiltonians simultaneously, we Trotterize the dynamics by iteratively applying the evolution due to Ĥ_4wm and Ĥ_lin separately, using the Fourier transform to convert between position and momentum space as needed. To facilitate this approach, we rewrite (<ref>) in terms of differential (super)operators 𝒩̇ψ̂_z ≔i/ħ g[Ĥ_4wm, ψ̂_z] = -iψ̂_z^†ψ̂_z^2, 𝒟Ψ̂_k ≔i/ħ[Ĥ_lin, Ψ̂_k] dt = -iΩ(k) Ψ̂_k dt, where we use the dot notation 𝒩̇ = d𝒩/d(gt) to denote differential evolution with respect to normalized time gt in the nonlinear part, but we retain Leibniz notation 𝒟 = 𝒟̇ dt in the linear evolution for convenience when treating loss as described in Appendix <ref>. Despite their superficial similarity to the classical model, both (<ref>) and (<ref>) are numerically intractable to solve directly. Physically, the problem amounts to solving for the dynamics of an entire quantum field, where each field degree of freedom (i.e., mode) occupies a bosonic Fock space. Even if we discretize the field to M modes and truncate the Fock space of each mode to D dimensions (i.e., allowing at most D-1 photons per mode), the quantum state of the field lives in a D^M-dimensional (Hilbert) space, upon which operators such as Ĥ_4wm and Ĥ_lin act. A typical discretization of the classical field ψ(z) in the NLSE might employ M = 1024 points, but even just allowing one photon per mode at D = 2, we have, at least without the use of sophisticated model reduction techniques, a 2^1024-dimensional problem! The situation becomes greatly simplified, however, if we are able to focus our attention solely on the Gaussian moments of the state, namely the mean ψ̂_z (corresponding to an M-dimensional vector when discretized) and the covariances ψ̂_zψ̂_z' and ψ̂^†_zψ̂_z' (each corresponding to an M × M matrix).
Thus, in a Gaussian framework, the numerical problem of solving for the quantum noise dynamics becomes 𝒪(M^2)-dimensional. As a result, our nonlinear Gaussian-state model has access to the same highly efficient numerical techniques employed by classical pulse propagation techniques, including the use of split-step methods based on the fast Fourier transform (FFT) and massively parallel computation on graphics processing units (GPUs). Specifically, just as the cost of evolving the field over one time step for the classical NLSE is well known to be limited by FFT to 𝒪(Mlog M), our method does the same for the full Gaussian moments of the field with only cost 𝒪(M^2log M). This makes our method a natural generalization of the classical split-step Fourier (SSF) method, and we therefore refer to our numerical approach, when applied to the problem of ultrafast pulse proagation, as a nonlinear Gaussian-state SSF (GSSF) method. In this work, we perform all GSSF simulations using a GPU implementation of the RK4IP split-step method <cit.> via the high-level Julia package CUDA.jl <cit.>. As in Sec. <ref>, the key contribution of this work is to prescribe nonlinear equations of motion for the mean and covariance of the multimode field ψ̂_z, making only the assumption that the state is Gaussian. Due to the split-step nature of the GSSF method, we have, as in the classical SSF, the additional requirement of applying the dispersive step due to Ĥ_lin, but because (<ref>) is linear, we straightforwardly have 𝒟Ψ̂_k = Ω(k) Ψ̂_k t for the mean, and, for the covariances, 𝒟Ψ̂_kΨ̂_k' = []Ω(k') + Ω(k)Ψ̂_kΨ̂_k' t 𝒟Ψ̂^†_kΨ̂_k' = []Ω(k') - Ω(k)Ψ̂^†_kΨ̂_k' t, which can be analytically integrated. Thus, the nontrivial part is deriving the equations of motion in the nonlinear (or real-space) step, but because (<ref>) is local in z (i.e., the differential evolution of ψ̂_z is decoupled from that of ψ̂_z' for z ≠ z'), we can simply make use of the same methods already presented in Sec. <ref> for the single-mode case, making sure to carefully track the multimode indices in the covariances. For the mean, we have a modified version of the classical nonlinear step, 𝒩̇ψ̂_z = ψ̂^†_zψ̂_z^2 + 2ψ̂_zψ̂^†_zψ̂_z + ψ̂^†_zψ̂_z, where the last two terms are corrections due to coupling to the covariances. Then the equations of motion for the covariances are, after some algebra, 𝒩̇ ψ̂_zψ̂_z' = ψ̂_z^2ψ̂_z^†ψ̂_z' + ψ̂_z'^2ψ̂_zψ̂^†_z' + 2 []ψ̂_z^†ψ̂_z + ψ̂^†_z'ψ̂_z'ψ̂_zψ̂_z' 𝒩̇ ψ̂_z^†ψ̂_z' = -ψ̂_z^†2ψ̂_zψ̂_z' + ψ̂_z'^2ψ̂_z^†ψ̂^†_z' - 2[]ψ̂_z^†ψ̂_z - ψ̂^†_z'ψ̂_z'ψ̂_z^†ψ̂_z', where we use the shorthand notations ψ̂_z^2 = ψ̂_z^2 + []ψ̂_z, ψ̂^†_z ψ̂_z = ψ̂_z^2 + ψ̂^†_zψ̂_z, and ψ̂_zψ̂^†_z' = ψ̂^†_z'ψ̂_z + δ(z-z'). (Note that the latter Dirac delta function is converted into a Kronecker delta upon discretizing of the continuum field following Appendix <ref>.) To summarize, the GSSF equations of motion describing the propagation of both the mean field and the Gaussian quantum noise in χ^(3) waveguides are given by (<ref>), (<ref>), (<ref>), and (<ref>). §.§ Example: Soliton noise dynamics As a first demonstration, we apply GSSF to study propagation of a canonical Kerr soliton in a χ^(3) nonlinear waveguide. Classically, the Kerr soliton is a perfectly stable waveform arising from the balance of linear dispersion with nonlinear self-phase modulation, and quantum noise around this classical solution, in the form of so-called “Kerr squeezing”, has been extensively studied in quantum optics <cit.>. 
Conventionally, such studies use a linearized treatment <cit.> which, as discussed in Sec. <ref>, presupposes a separation of energy scales between dynamics of the mean field and the quantum noise: The former occurs very quickly and is first solved using classical SSF, while the latter is treated as simple linear perturbations that follow the classical solution. As we show, however, nonlinear dynamics captured by our GSSF method can have a qualitative impact on Kerr squeezing in the regime of small soliton amplitude (i.e., under stronger optical nonlinearities). The Kerr soliton can be canonically treated using the χ^(3) nonlinear waveguide propagation model (<ref>), and we assume a quadratic dispersion where Ω(k) = 1/2ω” k^2. In this case, the mean-field limit of (<ref>) (i.e., the classical NLSE) supports the well known sech-soliton solution <cit.> ψ_z^(sech) = √(n̅/2z_n̅)exp*π t/4t_n̅*z/z_n̅, where n̅ is the mean photon number of the soliton, and t_n̅ = 2πω”/g^2n̅^2, z_n̅ =-2ω”/gn̅ are the characteristic soliton period and pulse width, respectively. Note that we assume the regime of modulation instability gω”<0 for the soliton solution to exist. For the purposes of this example, we initialize the pulse as a coherent state described by (<ref>). Note that after scaling t and z by t_n̅ and z_n̅, respectively, the only free parameter is the mean photon number n̅, which effectively captures the “quantumness” of the system. A convenient way to analyze the quantum noise dynamics is to calculate the squeezing supermodes of the field and their respective squeezing levels <cit.>. Physically, for a pure Gaussian state, the squeezing supermodes correspond to a set of orthogonal pulse waveforms which independently experience quadrature squeezing. In most cases, only a few dominant supermodes (corresponding to low-order waveforms) experience significant squeezing, thus providing an efficient description of the multimode squeezing and entanglement in the pulse. Appendix <ref> summarizes the procedure we use to calculate these squeezing supermodes and their squeezing levels, using the covariance matrix produced by a numerical method like GSSF. In general, the waveforms of the squeezing supermodes can dynamically change throughout propagation <cit.>, and their shapes are also independent of (though usually influenced by) the shape of the mean-field waveform. This can happen even when the mean field is classically stable (as is the case for the soliton solution (<ref>)), and the transfer of photons out of the mean field and into the various squeezing supermodes effectively constitutes a quantum-noise-induced destabilization of the stable classical solution. Figure <ref> shows the quantum noise dynamics calculated by GSSF for a pulse initialized as a coherent-state soliton according to (<ref>). Classically, the propagation dynamics are nearly trivial, with the classical waveform experiencing only a phase rotation π t/4t_n̅ as given by (<ref>). However, as shown in Fig. <ref>, there is continuous growth of squeezing in the pulse, occurring primarily in two squeezing supermodes, which we denote A^(0)_z and A^(1)_z. The waveforms describing these squeezing supermodes are shown at various propagation times in Fig. <ref>(c), from which we see that they clearly have significant transient behavior. In particular, it is only after some propagation time (t ≳ 1.5 t_n̅) that the real part of A^(0)_z approaches that of the classical sech waveform. 
However, even then there is a significant imaginary component (indicating a nonuniform phase shift from the classical envelope), as well as a significant amount of squeezing in the higher-order supermode A^(1)_z, which can be interpreted as timing jitter of the pulse due to quantum fluctuations <cit.>. For a mean photon number of n̅ = 1000 at which these simulations are done (and more generally in the semiclassical limit n̅→∞), we note that many of these findings are in qualitative agreement with previous studies based on linearized treatments <cit.>. At the same time, Fig. <ref>(b) shows that when we decrease n̅ to ∼30, deviations appear between the linearized treatment and our nonlinear GSSF model, e.g., in the squeezing level. Intuitively, this “nonlinear saturation” of the squeezing arises because the mean field becomes depleted to provide energy towards (anti)squeezing, thus limiting the effective gain available for further amplification of quantum fluctuations. § QUANTUM NOISE PROPAGATION IN A CHI(2) WAVEGUIDE Although the previous section treated the case of a χ^(3) waveguide, it should be clear that the moment-expansion and split-operator techniques can be readily generalized to other settings and optical nonlinearities as well. Recently, χ^(2) waveguides in particular have shown experimental promise in being able to reach levels of optical nonlinearities where the nonlinear dynamics of quantum noise may become important. In this section, we apply the GSSF formalism to simulate χ^(2) nonlinear pulse propagation with conditions and parameters that are demonstrated recently, and we show that one could indeed observe strongly nonclassical and multimode photon dynamics in such experiments. Due to the nature of the three-wave interactions characteristic to χ^(2) systems, it is often useful (though not required) to distinguish between fundamental- and second-harmonic bands in the spectrum of interacting modes. In this two-envelope model, we introduce two fields ϕ_z and ψ_z for the fundamental and second harmonic bands (FH and SH), respectively; as before, we assume []ψ̂_z,ψ̂_z'^† = []ϕ̂_z,ϕ̂_z'^† = δ(z-z') and define Fourier duals Ψ̂_k ∫ z - kzψ̂_z (similarly for Φ_k). However, we also assume []ψ̂_z,ϕ̂_z'^† = []Ψ̂_k,Φ̂_k'^† = 0, i.e., that photons from the two bands are in principle distinguishable from one another (e.g., due to having different polarization or carrier-envelope phase). Then, a suitable continuum Hamiltonian for this two-envelope model is Ĥ_χ^(2) Ĥ_d3wm + Ĥ_lin, where Ĥ_d3wm ħ/2∫ z ϵ*ψ̂_z ϕ̂_z^†2 - ψ̂_z^†ϕ̂_z^2 Ĥ_lin ħ∫ k/2π*Ω_1(k) Φ̂_k^†Φ̂_k + Ω_2(k) Ψ̂_k^†Ψ̂_k. Here, ϵ is a coupling rate related to the nonlinearity, while the FH dispersion is taken around the fundamental carrier wavevector k_0 with Ω_1(k) ω(k_0+k) - []ω(k_0) + kω'(k_0) and the SH dispersion is taken around 2k_0 with Ω_2(k) ω(2k_0+k) - []2ω(k_0) + kω'(k_0). That is, Φ̂_k is, as usual, taken to be in a frame rotating at ω(k_0) + kω'(k_0), but Ψ̂_k rotates in an FH-derived frame at 2ω(k_0) + kω'(k_0) and ψ_z acts on relative positions z which copropagate at the same speed ω'(k_0) as the FH fields ϕ_z. Note that with this convention, both phase and group-velocity mismatch are captured by Ω_2(k). 
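To make the frame conventions above concrete, the following minimal Python sketch tabulates Ω_1(k) and Ω_2(k) on a discrete envelope grid, assuming a dispersion relation ω(k) that is Taylor-expanded to second order around the FH carrier k_0 and the SH carrier 2k_0. All numerical coefficients are illustrative placeholders (they are not the waveguide parameters of Table <ref>), and the decomposition into carrier phase mismatch, group-velocity mismatch, and group-velocity dispersion is simply a restatement of the definitions of Ω_1 and Ω_2.

import numpy as np

# Discrete co-moving grid for the envelopes: M points over a window of length L
M, L = 1024, 2.0e-3                          # placeholder grid size and window [m]
k = 2 * np.pi * np.fft.fftfreq(M, d=L / M)   # wavevector offsets from the carrier [rad/m]

# Assumed second-order Taylor coefficients of omega(k) about k_0 and 2*k_0
# (illustrative placeholder numbers only)
w_fh, vg_fh, gvd_fh = 1.21e15, 1.25e8, 2.0e-2          # omega(k_0), omega'(k_0), omega''(k_0)
w_sh, vg_sh, gvd_sh = 2.42e15 + 5.0e11, 1.15e8, 3.0e-2  # omega(2k_0), omega'(2k_0), omega''(2k_0)

# Omega_1(k) = omega(k_0 + k) - [omega(k_0) + k*omega'(k_0)]  ->  FH dispersion only
Omega1 = 0.5 * gvd_fh * k**2
# Omega_2(k) = omega(2k_0 + k) - [2*omega(k_0) + k*omega'(k_0)]
#            = carrier phase mismatch + group-velocity mismatch + SH dispersion
Omega2 = (w_sh - 2 * w_fh) + (vg_sh - vg_fh) * k + 0.5 * gvd_sh * k**2

# In the dispersive half-step of a split-step scheme, the momentum-space fields simply
# acquire phases exp(-1j * Omega * dt) (up to the overall sign convention used above)
dt = 1.0e-15
phase_fh, phase_sh = np.exp(-1j * Omega1 * dt), np.exp(-1j * Omega2 * dt)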
We refer to this two-envelope χ^(2) model as a (quasi)degenerate three-wave-mixing (3WM) model in analogy to degenerate 3WM in continuous-wave χ^(2) systems, where second-harmonic and half-harmonic generation are the dominant processes; in a multimode system with broadband phase matching, the 3WM is not strictly degenerate due to the energy difference between signal and idler within the FH band. The Heisenberg equations of motion generated by Ĥ_χ^(2) are ∂_t ϕ̂_z = ϵψ̂_z ϕ̂_z^† - Ω_1(-∂_z) ϕ̂_z, ∂_t ψ̂_z = -ϵ/2ϕ̂_z^2 - Ω_2(-∂_z) ψ̂_z. The mean-field version of these equations, obtained by formally replacing ψ̂_z with ψ(z) and ϕ̂_z with ϕ(z) describing the classical SH and FH waveforms, respectively, is precisely the set of classical coupled-wave equations for χ^(2) waveguide propagation with linear dispersion. To derive the GSSF model for this two-envelope χ^(2) model, we clearly need to track six covariances rather than two. This aside, however, the entire procedure remains the same as in Sec. <ref>. We begin with the split-operator quantum equations of motion 𝒩̇ϕ̂_z = /ħϵĤ_d3wm, ϕ̂_z = ψ̂_z ϕ̂_z^†, 𝒩̇ψ̂_z = /ħϵĤ_d3wm, ψ̂_z = -1/2ϕ̂_z^2, 𝒟Φ̂_k = /ħĤ_lin, Φ̂_k t = -Ω_1(k) Φ̂_k t, 𝒟Ψ̂_k = /ħĤ_lin, Ψ̂_k t = -Ω_2(k) Ψ̂_k t. As usual, the dispersive step due to Ĥ_lin is linear so we straightforwardly have 𝒟Φ̂_k = Ω_1(k)Φ̂_k t 𝒟Ψ̂_k = Ω_2(k)Ψ̂_k t for the means, and, for the covariances, 𝒟Φ̂_kΦ̂_k' = []Ω_1(k') + Ω_1(k)Φ̂_kΦ̂_k' t 𝒟Φ̂_k^†Φ̂_k' = []Ω_1(k') - Ω_1(k)Φ̂_k^†Φ̂_k' t 𝒟Ψ̂_kΨ̂_k' = []Ω_2(k') + Ω_2(k)Ψ̂_kΨ̂_k' t 𝒟Ψ̂_k^†Ψ̂_k' = []Ω_2(k') - Ω_2(k)Ψ̂_k^†Ψ̂_k' t 𝒟Φ̂_kΨ̂_k' = []Ω_2(k') + Ω_1(k)Φ̂_kΨ̂_k' t 𝒟Φ̂_k^†Ψ̂_k' = []Ω_2(k') - Ω_1(k)Φ̂_k^†Ψ̂_k' t. The nonlinear step is as usual more involved, and using the same moment expansion methods, we can derive 𝒩̇ϕ̂_z = ψ̂_zϕ̂_z^† + ϕ̂_z^†ψ̂_z 𝒩̇ψ̂_z = -1/2*ϕ̂_z^2 + ϕ̂_z for the means, and for the covariances, 𝒩̇ϕ̂_zϕ̂_z' = ϕ̂_z^†ϕ̂_z'ψ̂_z + ψ̂_zϕ̂_z^†ϕ̂_z' + ϕ̂_z'^†ϕ̂_zψ̂_z' + ψ̂_z'ϕ̂_zϕ̂_z'^† 𝒩̇ϕ̂_z^†ϕ̂_z' = ϕ̂_zϕ̂_z'ψ̂_z^† + ψ̂_z^†ϕ̂_zϕ̂_z' + ϕ̂_z'^†ϕ̂_z^†ψ̂_z' + ψ̂_z'ϕ̂_z^†ϕ̂_z'^† 𝒩̇ψ̂_zψ̂_z' = -ϕ̂_zϕ̂_zψ̂_z' - ϕ̂_z'ϕ̂_z'ψ̂_z 𝒩̇ψ̂_z^†ψ̂_z' = -ϕ̂_z^†ϕ̂_z^†ψ̂_z' - ϕ̂_z'ϕ̂_z'ψ̂_z^† 𝒩̇ϕ̂_zψ̂_z' = ϕ̂_z^†ψ̂_zψ̂_z' + ψ̂_zϕ̂_z^†ψ̂_z' - ϕ̂_z'ϕ̂_zϕ̂_z' 𝒩̇ϕ̂_z^†ψ̂_z' = ϕ̂_zψ̂_z^†ψ̂_z' + ψ̂_z^†ϕ̂_zψ̂_z' - ϕ̂_z'ϕ̂_z^†ϕ̂_z'. To summarize, the GSSF equations of motion for (quasi)degenerate three-wave-mixing in χ^(2) waveguides are given by (<ref>), (<ref>), (<ref>), and (<ref>). It is also worth noting that, in the single-mode scenario, these nonlinear moment equations are consistent with the ones derived in Ref. <cit.> for studying single-mode χ^(2) interactions.
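Numerically, the dispersive step above is diagonal in the k basis, so it can be applied to the means and to each covariance block through elementwise phase factors. The following is a minimal sketch of that half-step; the function name, the dictionary-based storage of the six covariance blocks, and the variable names are our own illustrative choices and are not taken from the text.

```python
import numpy as np

def dispersive_step(dt, Omega1, Omega2, means, covs):
    """Linear (dispersive) half-step of the two-envelope GSSF scheme, in the k basis.
    means = (Phi, Psi): mean FH and SH envelopes, each shape (M,).
    covs  = dict of the six covariance blocks, each shape (M, M), row index k,
            column index k':
      'FF'  = <dPhi_k  dPhi_k'>,   'FdF' = <dPhi_k^dag dPhi_k'>,
      'SS'  = <dPsi_k  dPsi_k'>,   'SdS' = <dPsi_k^dag dPsi_k'>,
      'FS'  = <dPhi_k  dPsi_k'>,   'FdS' = <dPhi_k^dag dPsi_k'>."""
    d1 = np.exp(-1j * Omega1 * dt)   # FH phase factors
    d2 = np.exp(-1j * Omega2 * dt)   # SH phase factors
    Phi, Psi = means
    new_means = (d1 * Phi, d2 * Psi)
    # Each block picks up the product of single-mode phase factors; blocks with a
    # daggered first index get the conjugate factor on that index.
    new_covs = {
        'FF':  np.outer(d1, d1) * covs['FF'],
        'FdF': np.outer(d1.conj(), d1) * covs['FdF'],
        'SS':  np.outer(d2, d2) * covs['SS'],
        'SdS': np.outer(d2.conj(), d2) * covs['SdS'],
        'FS':  np.outer(d1, d2) * covs['FS'],
        'FdS': np.outer(d1.conj(), d2) * covs['FdS'],
    }
    return new_means, new_covs
```

The nonlinear half-step acts analogously in the z basis using the moment equations above, and alternating the two sub-steps reproduces the usual split-step structure.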
§.§ Example: Pump depletion in pulsed squeezing
The most successful schemes to date for generating squeezed light, especially for use as resource states in quantum metrology and continuous-variable quantum information processing, rely on phase-sensitive (degenerate) optical parametric amplification in materials with χ^(2) nonlinearities, in which an SH pump induces quadrature squeezing on the FH signal. In the absence of a signal seed, this process produces a squeezed vacuum state via parametric deamplification of vacuum noise along one quadrature. Conventionally, such squeezing experiments utilize a highly excited coherent-state pump in a weakly nonlinear χ^(2) crystal to generate vacuum squeezing with low conversion efficiency. In this low-efficiency limit, the process is well described by an undepleted pump approximation in which the pump is in a static coherent state, i.e., an interaction Hamiltonian of the form Ψ̂_k Φ̂_k'^†Φ̂_k-k'^† + H.c.≈Ψ̂_k(0)Φ̂_k'^†Φ̂_k-k'^† + H.c., leading to multimode but purely linear squeezing dynamics for the signal. These dynamics can be integrated to obtain a linearized estimate of the signal covariance matrix. Recently, however, dispersion engineering in tightly confining TFLN waveguides has enabled a significant increase in the effective nonlinearity of χ^(2) parametric interactions, challenging the conventional undepleted pump approximation. For example, Refs. <cit.> experimentally demonstrated waveguides that can support ≈70dB of broadband parametric gain with only a 4 pump pulse. Heuristically, 70dB of antisqueezing at 2 corresponds to ≈0.5 of parametric fluorescence per pulse, suggesting that state-of-the-art devices can exhibit >10 pump depletion solely through the amplification of vacuum fluctuations. The regime where parametric fluorescence is sufficiently bright to deplete the pump is commonly referred to as optical parametric generation (OPG), and the effects of pump depletion on squeezing have previously been studied in the single-mode case <cit.>. Here, we employ GSSF to analyze the dynamics of saturated OPG in the ultrafast domain, looking in particular at the intrinsically multimode entanglement structure of the output parametric fluorescence. First, however, it is worth noting that for vacuum-seeded OPG, the nonlinear equations of motion (<ref>) and (<ref>) take a particularly simple form. Since the initial input signal field is vacuum, ϕ̂_z = 0 at t = 0. Furthermore, since the initial input pump field is a coherent state, the pump and signal are initially uncorrelated, so ϕ̂_zψ̂_z' = ϕ̂_z^†ψ̂_z'^† = 0 at t = 0. Then by inspection of (<ref>) and (<ref>), we see that, for all time, ϕ̂_z = ψ̂_zψ_z' = ψ̂_z^†ψ_z' = ϕ̂_zψ_z' = ϕ̂_z^†ψ̂_z' = 0. In fact, the only non-trivial dynamics are in the mean of the pump and the covariances of the signal, given by 𝒩̇ψ̂_z = -1/2δϕ̂_z^2 for the mean, and, for the covariances, 𝒩̇ϕ̂_zϕ̂_z' = ψ̂_zϕ̂_z^†ϕ̂_z' + ψ̂_z'ϕ̂_zϕ̂_z'^† 𝒩̇ϕ̂_z^†ϕ̂_z' = ψ̂_z^†ϕ̂_zϕ̂_z' + ψ̂_z'ϕ̂_z^†ϕ̂_z'^†. We see that even in the Gaussian-state approximation, there are nonlinear dynamics in which the pump mean experiences depletion due to the generation of signal photon pairs. The pump also remains in a coherent state and is unentangled with the signal, which need not hold true in more exotic non-Gaussian settings <cit.>. Figure <ref> shows a GSSF simulation of OPG in a waveguide with parameters similar to that of Ref. <cit.> (see Table <ref>), using the simplified equations (<ref>), with minor modifications to account for linear loss as discussed in Appendix <ref>. As expected, Fig. <ref>(a) shows that the pump experiences a significant amount of depletion, with a dip generated as the signal fluorescence grows and walks off from the center. This process amounts to a nonlinear saturation of the parametric gain even under vacuum input. The waveforms predicted by GSSF differ significantly from those in the linearized model, which we plot as corresponding dashed lines: In the latter, the pump amplitude experiences only dispersion and loss, which causes the model to overestimate the signal fluorescence due to the absence of nonlinear saturation.
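As a rough illustration of how the simplified vacuum-seeded update above could be prototyped, the sketch below advances the pump mean and the two signal covariance blocks by one explicit Euler sub-step. The function name, the array layout, and the effective coupling eps (into which any discretization factor is assumed to be absorbed) are our own choices for this sketch; the simulations described in the text use an RK4IP integrator and include loss as in Appendix <ref>.

```python
import numpy as np

def opg_nonlinear_step(dt, eps, psi, C_pp, C_np):
    """One explicit Euler sub-step of the simplified vacuum-seeded OPG update.
    psi  : mean SH (pump) envelope on the z grid, shape (M,)
    C_pp : signal covariance <dPhi_z dPhi_z'>, shape (M, M), symmetric
    C_np : signal covariance <dPhi_z^dag dPhi_z'>, shape (M, M)
    eps  : effective coupling rate (assumed to absorb the grid discretization factor)."""
    # <dPhi_z dPhi_z'^dag> = <dPhi_z'^dag dPhi_z> + delta_{zz'} (equal-time commutator)
    C_pn = C_np.T + np.eye(len(psi))
    # Pump-mean depletion driven by the diagonal of the anomalous signal covariance
    dpsi = -0.5 * eps * np.diag(C_pp) * dt
    # Growth of the signal covariances, driven locally by the pump mean
    dC_pp = eps * (psi[:, None] * C_np + psi[None, :] * C_pn) * dt
    dC_np = eps * (psi[:, None].conj() * C_pp + psi[None, :] * C_pp.conj()) * dt
    return psi + dpsi, C_pp + dC_pp, C_np + dC_np
```

Even in this minimal form, the structure of the update makes the physics explicit: the pump is drained in proportion to the signal fluctuations it has already amplified, which is precisely the nonlinear saturation absent from the linearized model.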
In this simulation, we find ∼0.6 of pump depletion per pulse, in accordance with energy conservation and in rough agreement with measurements reported in Refs. <cit.>. Figure <ref>(b) shows that the spectrum of the signal fluorescence is in qualitative agreement with experiments as well. Our numerical results also reveal the quantum correlation structure of the squeezed light produced by OPG, which to our knowledge have yet to be fully explored experimentally. Figure <ref>(c) shows the covariance matrix of the signal in the frequency domain, which fully characterizes the Gaussian quantum state of the signal pulse. Because OPG produces squeezing and antisqueezing predominantly along the quadratures of the field (see Appendix <ref>), we focus on the covariance matrix written in the quadrature basis (q̂_z, p̂_z). The spectral correlations in the covariance matrix indicate that the signal field occupies a multimode squeezed state with significant levels of entanglement among many spectral-temporal components. It is worth noting that such correlations are lost when observing only the fluorescence spectrum, viz., Fig. <ref>(b), and more sophisticated techniques in quantum state tomography of ultrafast pulses are needed to probe the covariance structure in greater detail <cit.>. To further understand the entanglement structure, we can also utilize a supermode decomposition of the covariance matrix (as discussed in Appendix <ref>) to obtain the dominant squeezing supermodes in the signal pulse, which we show in Fig. <ref>(d). Whereas any given spatial bin or narrowband component of the signal is highly entangled with the rest of the field, the squeezing supermodes comprise a superposition of many narrowband components, chosen in such a way that they are minimally correlated (i.e., unentangled or separable) with one another. In other words, measurements selectively probing these squeezing supermodes (i.e., via a pulse-shaped local oscillator in optical homodyne) are needed to fully decompose the Gaussian state into its independently squeezed components. We find that the dominant supermode, as expected, has a spectrum that is mostly determined by the pump spectrum, with a slight variation in spectral phase imposed by dispersion (in particular, group velocity mismatch). This supermode experiences nearly 67 of antisqueezing, in agreement with empirical estimates of the parametric gain in Refs. <cit.>. We also see there are at least two other supermodes all experiencing >60 of gain, which is expected as the device is not specifically engineered to exclusively provide gain in a single supermode; advanced engineering of OPG devices may enable more efficient channeling of pump energy into selectively squeezing specific supermode patterns of interest. Perhaps most interestingly, however, we observe that the dominant squeezing supermode retains up to 20 of quadrature squeezing despite the fact that our simulations already take into account a propagation loss of 30 for the signal field. Further work developing and deploying this potent control over the behavior of quantum noise (e.g. by reducing propagation losses and increasing outcoupling efficiencies) appears to be highly worthwhile for advancing the state of the art in quantum photonics. §.§ Example: Quantum noise in second-order supercontinuum generation We conclude this section by applying our GSSF method to study quantum noise in supercontinuum generation (SCG). It was recently reported in Refs. 
<cit.> that highly efficient, saturated second-harmonic generation in a dispersion-engineered TFLN waveguide with a modest amount of phase mismatch can dynamically produce such strong modulations on the FH and SH envelopes that their collective bandwidths span more than an octave in frequency. Classical analysis reveals that a relatively simple model involving only coherent χ^(2) nonlinear interactions between the FH and SH envelopes is sufficient to explain most of the qualitative features in the supercontinuum spectrum <cit.>, making χ^(2) SCG an interesting example on which to test our method, even without accounting for auxiliary effects like stimulated Raman, etc., which can play important roles in χ^(3) SCG <cit.>. An important application of SCG is enabling detection of the carrier-envelope-offset (CEO) frequency, f_ceo, of a frequency comb <cit.>, which is essential for building a stable clockwork for optical frequency metrology <cit.>. Because both the FH and SH envelopes are produced in the broadening dynamics in χ^(2) SCG, the waveguide output can be directly heterodyned to generate a beat note at f_ceo without the need for an additional frequency-doubling stage <cit.>. For use in optical frequency metrology, however, it is important that we understand the fundamental and practical noise limits set by the SCG process <cit.>. Due to its parametric and coherent nature, the χ^(2) supercontinuum should exhibit quantum correlations and entanglement that can, in principle, increase (or even decrease) the noise in f_ceo detection relative to a shot-noise assumption. Here, we use the GSSF method to simulate the χ^(2) SCG dynamics, and we apply the quantum theory of heterodyne detection to the resulting Gaussian state in order to quantify the fundamental noise in the beat note signal set by such quantum fluctuations. Figure <ref> shows a GSSF simulation of a waveguide with parameters similar to those of Refs. <cit.> (see Table <ref>), using the full equations of motion for quasidegenerate three-wave-mixing. The envelope spectral dynamics in Fig. <ref>(a) shows good qualitative agreement with both classical simulations and experimental data <cit.>, with the formation of spectral overlap between FH and SH within a propagation length of 3mm. Diving deeper into the quantum structure of the supercontinuum, however, Fig. <ref>(b) shows the subblocks of the covariance matrix (see Appendix <ref>) which describe correlations between the FH and SH envelopes at the end of the waveguide. We see finely patterned correlations with complex spectral structure imparted by the dynamics of the nonlinear SCG process. Generically, such quantum correlations indicate the presence of multimode entanglement, and the squeezing supermodes of the total field consist of hybridized excitations of both FH and SH envelopes. These correlations in the supercontinuum contribute to quantum-limited noise that is present when measuring the f_ceo beat note through direct heterodyne detection, e.g., as done in Ref. <cit.>. In Appendix <ref>, we use the standard quantum theory of optical heterodyne detection to derive both the signal and noise associated with measuring the f_ceo beat note. Because we are interested here in frequency combs rather than continuum fields, we introduce discrete FH modes Â_m and SH modes B̂_q (see also Appendix <ref>) in place of Φ̂_k and Ψ̂_k, and we assume that the bare frequencies of these modes are (m+m_0)f_rep + f_ceo and (q+2m_0)f_rep + 2f_ceo, respectively.
Here, f_rep is the repetition rate of the comb, m_0 indexes the central comb line of the FH envelope, and f_ceo is the CEO frequency of the FH envelope. We find that the total steady-state photocurrent demodulated at f_ceo is given by I_h = f_rep∑_m S(m f_rep) where the signal contribution from each comb line is S(m f_rep) Â_m^†B̂_q(m), where q(m) m-m_0 denotes the index of the SH comb mode B̂_q(m) whose beat with Â_m contributes to the signal at f_ceo. It is interesting to note that Â_m^†B̂_q(m) = Â_m^†B̂_q(m) + Â_m^†B̂_q(m), i.e., there are contributions to the beat note not only from the mean field (first term) but also from the quantum correlations (second term) between the two envelopes. The noise on the photocurrent signal is characterized by the total variance δ I_h^2/f_rep^2 = ∑_m N_0(m f_rep) + 1/2∑_m,m' N_1(mf_rep, m'f_rep) + 1/2∑_m,m' N_2(mf_rep, m'f_rep) *1 + ^2 ϕ_ceo. where ϕ_ceo 2π f_ceo/f_rep is the relative CEO phase accumulated by successive pulses and N_0(m f_rep) Â_m^†Â_m + B̂_m^†B̂_m, N_1(mf_rep, m'f_rep) []Â_m^†Â_m'^†B̂_q(m')B̂_q(m) - [][]Â_m^†B̂_q(m)[]Â_m'^†B̂_q(m'), N_2(mf_rep, m'f_rep) []Â_m^†Â_m'B̂_q(m')^†B̂_q(m) - [][]Â_m^†B̂_q(m)[]Â_m'B̂_q(m')^†. This rather complicated expression (see Appendix <ref> for more details) arises because the beat-note fluctuations coming from two different frequency indices m and m' can, in principle, be (anti)-correlated, thus leading to an increase (decrease) in the total noise when summed together. The uncorrelated fluctuations are captured by N_0 and the diagonal components of N_1 and N_2. Note that for N_0, we can write, e.g., Â_m^†Â_m = Â_m^2 + Â_m^†Â_m; the first term is the standard shot noise of the mean field, while the second term is excess noise due to parametric fluorescence. On the other hand, the correlated fluctuations N_1 and N_2 are related to fourth-order moments of the state; for a Gaussian state these fourth-order correlations can be reduced to second-order correlations and evaluated readily (see Appendix <ref>). In Fig. <ref>(c), we show the spectrum of the beat-note signal S^2(f) (<ref>) and the diagonal contributions from the noise (<ref>) (physically corresponding to the use of a tunable narrowband optical filter in front of the detector). We separate the noise into “shot noise” and “parametric noise” contributions according to N_shot(mf_rep) Â_m^2 + B̂_q(m)^2 N_para(mf_rep) Â_m^†Â_m + B̂_q(m)^†B̂_q(m) + N_1(mf_rep,mf_rep) + N_2(mf_rep,mf_rep), where N_para captures the excess noise (beyond shot noise) coming from the fluorescence and quantum correlations in the supercontinuum. Perhaps surprisingly, we find that the parametric noise is comparable to—and in certain parts of the spectrum even in excess of—the shot noise predicted by the mean field. Finally, to get a better sense for the off-diagonal correlations occurring in (<ref>), we show in Fig. <ref>(d) the full correlation matrix N_1(f,f') + N_2(f,f'). We see that there is indeed some degree of correlation between the beat note fluctuations coming from different parts of the frequency comb, suggesting we may be able to improve the signal-to-noise ratio via selective, multiband filtering of the supercontinuum prior to self-heterodyne detection. § CONCLUSIONS In this work, we have developed a Gaussian split-step Fourier (GSSF) framework which integrates the formalism of Gaussian-state quantum optics with the nonlinear physics of ultrafast pulse propagation. 
This GSSF method generalizes the classical SSF method to treat quantum fluctuations and correlations, up to second order, on an equal footing with mean-field nonlinear pulse dynamics. Taking inspiration from state-of-the-art dispersion-engineered devices on thin-film lithium niobate, we have shown, through detailed case studies, how the GSSF method enables us to better understand both the operational principles and technological potential of photonic hardware near the quantum-classical transition. For saturated optical parametric generation <cit.>, we have identified squeezing supermodes and their respective squeezing levels despite the presence of significant pump depletion, which puts this system beyond the scope of conventional linearized treatments of vacuum squeezing (i.e., via undepleted-pump approximations). For supercontinuum generation based on saturated second-harmonic generation <cit.>, we have used GSSF to resolve finely patterned spectral correlations inside the octave-spanning supercontinuum. We then leveraged standard quantum-optical theory to explicitly evaluate the quantum noise floor for f-2f beat-note detection of the CEO frequency using this novel supercontinuum source, finding contributions beyond the shot-noise limit due to parametric fluorescence and frequency-domain entanglement. These case studies demonstrate the effectiveness of the GSSF framework for analyzing and engineering ultrafast quantum photonic devices, and we expect in future work that even more sophisticated systems, from cavity-based frequency microcombs <cit.> to nanophotonic synchronously-pumped optical parametric oscillators, can be straightforwardly treated as well. Numerically, GSSF can directly leverage the remarkable efficiency of classical SSF methods, with each split-step requiring only 𝒪(M^2log M) cost to update all M^2 Gaussian moments of an M-mode pulse. It is also worth pointing out that GSSF generates all the Gaussian moments in a single simulation, as opposed to mean-field Monte-Carlo techniques that require many trajectories to statistically resolve, e.g., small spectral features in the correlations. We remark that even with a fairly naïve RK4IP implementation <cit.> of GSSF on an Ampere A100 GPU, the supercontinuum simulation of Sec. <ref> requires <5 using M = 2^10 points per envelope. Finally, while this work has immediate practical relevance to current and near-term experiments operating in the semiclassical domain, the Gaussian-state framework, and GSSF by extension, is expected to remain indispensable well beyond the classical-quantum threshold. For example, faithful descriptions of multimode squeezed states are essential for reliably generating the non-Gaussian resource states at the heart of the most mature photonic schemes for continuous-variable quantum computation <cit.>. Even in the strong-coupling regime where non-Gaussian features emerge coherently, a Gaussian approximation to quantum dynamics provides vital information on how and where (i.e., in which supermodes) such non-Gaussian features appear, facilitating significantly more concise quantum state representations for pulse dynamics <cit.>. Thus, we expect this work to not only serve as a workhorse method for engineering near-term nonlinear ultrafast devices, but to also guide new conceptual and modeling paradigms for quantum photonics generally, by embracing rather than abstracting away the rich physics of ultrafast quantum dynamics. The authors are grateful to Logan G. Wright, Melissa A. Guidry, Daniil M.
Lukin, Rajveer Nehra, and Alireza Marandi for helpful discussions. The authors wish to thank NTT Research for their financial and technical support. This work has been supported by the Army Research Office under Grant No. W911NF-16-1-0086, and the National Science Foundation under awards CCF-1918549 and PHY-2011363. § CONSERVATION OF ENERGY As discussed in Sec. <ref> for the single-mode case, a distinguishing feature of our nonlinear Gaussian-state approximation compared with conventional linearized treatments is the conservation of photon(-polariton) number, i.e., energy, in the absence of linear losses. In this section, we explicitly show how this conservation property arises in the multimode using the nonlinear Gaussian equations of motion for χ^(3) and χ^(2) waveguide propagation developed in Secs. <ref> and <ref>. First, let us consider the case of χ^(3) nonlinear propagation, where the total energy is given by the total photon number n̅∫ z ψ̂_z^†ψ̂_z = ∫ k/2π Ψ̂_k^†Ψ̂_k, where ψ̂_z^†ψ̂_z = ψ̂_z^†ψ̂_z + ψ̂_z^†ψ̂_z, i.e., the sum of both classical (mean-field) and quantum-noise (diagonal covariance) contributions. For linear evolution under Ĥ_lin in (<ref>), we straightforwardly have 𝒟̇Ψ̂_k^†Ψ̂_k = 𝒟̇Ψ̂_k^†Ψ̂_k = 0 ⇒𝒟̇n̅ = 0, since Ω^*(k) = Ω(k) (i.e., we only have dispersion) in the absence of loss. On the other hand, for nonlinear evolution under Ĥ_4wm, we can calculate 𝒩̇ψ̂_z^†ψ̂_z = ψ̂_z^†𝒩̇ψ̂_z + ψ̂_z𝒩̇ψ̂_z^† = ψ̂_z^†^2 ψ̂_z - ψ̂_z^2 δψ̂_z^†2 𝒩̇ψ̂_z^†ψ̂_z = -ψ̂_z^†^2 ψ̂_z + ψ̂_z^2 δψ̂_z^†2 ⇒𝒩̇n̅ = 0. Thus, we conclude that ∂_t n̅ = 𝒟̇n̅ + 𝒩̇n̅ = 0, so total energy is conserved. Next, for χ^(2) nonlinear propagation, the total energy is given by the Manley-Rowe invariant n̅_MR n̅_a + 2n̅_b, where n̅_a ∫ z ϕ̂_z^†ϕ̂_z = ∫ k/2π Φ̂_k^†Φ̂_k n̅_b = ∫ z ψ̂_z^†ψ̂_z = ∫ k/2π Ψ̂_k^†Ψ̂_k, which represents a generalized particle number for the two-envelope model used in Sec. <ref>. Similarly to the case of χ^(3), one can straightforwardly show that for linear evolution under Ĥ_lin in (<ref>), 𝒟̇n̅_a = 𝒟̇n̅_b = 𝒟̇n̅_MR = 0. On the other hand, for nonlinear evolution under Ĥ_d3wm, we can calculate 𝒩̇ϕ̂_z^†ϕ̂_z = ϕ̂_z^†^2 ψ̂_z + ϕ̂_z^2 ψ̂_z^† + ϕ̂_z^†ϕ̂_z^†ψ̂_z + ϕ̂_zϕ̂_zψ_z^†, 𝒩̇ψ̂_z^†ψ̂_z = -1/2( ψ̂_z^†ϕ̂_z^2 + ψ̂_zϕ̂_z^†^2 . . + ψ̂_z^†ϕ̂_z + ψ̂_zδϕ̂_z^†2), 𝒩̇ϕ̂_z^†ϕ̂_z = ϕ̂_zϕ̂_zψ̂_z^† + ψ̂_z^†ϕ̂_z + ϕ̂_z^†ϕ̂_z^†ψ̂_z + ψ̂_zδϕ̂_z^†2, 𝒩̇ψ̂_z^†ψ̂_z = -ϕ̂_z^†ϕ̂_z^†ψ̂_z - ϕ̂_zϕ̂_zψ_z^†, from which we obtain 𝒩̇n̅_MR = 0 (though 𝒩̇n̅_a, 𝒩̇n̅_b≠ 0 in general). Thus, we conclude that ∂_t n̅_MR = 𝒟̇n̅_MR + 𝒩̇n̅_MR = 0, so total energy is conserved. § DISCRETIZATION OF THE FIELD In the main text, for convenience, we treat the quantum fields propagating on a nonlinear waveguide as being continuous, i.e., ψ̂_z is an annihilation operator at each z ∈ℝ, with commutation relations ψ̂_z, ψ̂_z'^† = δ(z-z'). For numerical simulations, it is more convenient to use a finite set of discrete modes instead of the continuum, so that, e.g., the mean of the field ψ̂_z can be approximated by a vector and the covariance ψ̂_zψ̂_z' by a matrix. To define these discrete modes, we introduce a quantization window of length L large enough to contain the pulse of interest. Note that under an appropriate rotating frame as used, e.g., in (<ref>), this quantization window can be taken to copropagate at the group velocity of the carrier. 
We impose periodic boundary conditions on the window, which therefore supports monochromatic waveforms that are periodic with period L, corresponding to discrete wavevectors k_0 + mΔ k (m ∈ℤ), where Δ k ≡ 2π/L. We quantize each of these momentum-space modes by introducing mode annihilation operators Â_m for each m, satisfying []Â_m, Â_n^† = δ_mn. To obtain discrete modes in the spatial domain as well, we also impose a bandwidth limit -M/2 ≤ m < M/2, where, for convenience, we take M to be an even integer; this corresponds to a momentum cutoff MΔ k. We can now use the discrete Fourier transform to define finite spatial modes â_i ≡ 1/√(M)∑_m=-M/2^M/2-1 e^{+2π i mi/M}Â_m, from which []â_i, â_j^† = δ_ij. These modes approximately correspond to spatial bins of size Δ z ≡ L/M, with â_i annihilating a mode centered on z = iΔ z, if we take the quantization window to be the interval [-L/2,L/2). Of course, we also have the inverse relation Â_m = 1/√(M)∑_i=-M/2^M/2-1 e^{-2π i mi/M}â_i. Heuristically, the discrete modes we have defined can be thought of as â_i ∼ψ̂_iΔ z√(Δ z) and Â_m ∼Ψ̂_mΔ k√(Δ k/2π); that is, they are intuitively “bin” modes in the spatial and momentum domains, respectively. Note that a consequence of this interpretation is that the photon numbers in each of these bin modes, i.e., â_i^†â_i or Â_m^†Â_m, are not intrinsic quantities, as they depend arbitrarily on the chosen values of L and M. With these definitions, we can convert continuum field operators and their Gaussian moments to their corresponding discretized quantities. Table <ref> gives a formal way to map from the continuous quantities presented in the main text to discrete quantities that are more suitable for numerical simulation. To illustrate, this procedure produces the following discrete representation of the χ^(3) Hamiltonian Ĥ_χ^(3) = Ĥ_4wm + Ĥ_lin from (<ref>): Ĥ_4wm = ħ/2∑_i=-M/2^M/2-1g/Δ zâ_i^†2â_i^2, Ĥ_lin = ħ∑_m=-M/2^M/2-1Ω(mΔ k) Â_m^†Â_m, which generates the discretized GSSF equations 𝒩â_i = -g/Δ zâ_i^†â_i^2 t, 𝒟Â_m = -Ω(mΔ k) Â_m t. Therefore, the GSSF method for a χ^(3) waveguide can be formulated in terms of the finite dynamical quantities â_i, which is an M-dimensional vector, and â_iâ_j and â_i^†â_j, which are (M× M)-dimensional matrices. § LINEAR LOSSES IN WAVEGUIDE PROPAGATION In the main text, we view the propagation of light through a waveguide as being lossless, in that the dynamics are generated by a Hamiltonian which conserves energy and therefore only incorporates nonlinearity and dispersion. However, all realistic waveguides feature some amount of propagation loss which occurs continuously throughout the evolution of the pulse in the waveguide. Since these loss mechanisms usually arise from distributed and disordered effects such as scattering (from surface roughness, etc.), they are well characterized as a source of decoherence of the quantum state. One approach to modeling such mechanisms is to consider the propagation loss as being analogous to linear dissipation of a generic optical system coupled to a Markovian reservoir, allowing us to use open-quantum-systems theory to treat the effect of this decoherence on our Gaussian state. Specifically, we posit that the evolution of the density matrix ρ̂ of the system is described not by the Schrödinger equation ∂_t ρ̂= -(/ħ)Ĥ_nl, ρ̂ (where Ĥ_nl can be Ĥ_χ(3), Ĥ_χ^(2), etc.), but rather a master equation in Lindblad form. We first consider the case of χ^(3) nonlinear propagation and use the discrete description of the field given in Appendix <ref>.
Then a suitable quantum master equation for modeling multimode linear losses in the waveguide is ∂_t ρ̂= 1/ħĤ_nl, ρ̂ + ∑_m []L̂_m ρ̂L̂_m^† - 1/2[]L̂_m^†L̂_m, ρ̂, where the Lindblad operators L̂_m represent dissipation in each wavespace mode m. If the mode Â_m experiences a field loss rate of κ_m, then we set L̂_m √(2κ_m)Â_m. Since the dissipation part is written only in terms of wavespace modes Â_m, it is clear that loss only affects the Fourier part of the the split-step dynamics in GSSF, generated by 𝒟. While the master equation (<ref>) describes evolution of the state in the Schrödinger picture, our nonlinear Gaussian-state approximation is best understood in the Heisenberg picture. As a result, we turn to an equivalent formulation of (<ref>) in the form of a Heisenberg-Langevin equation of motion. More formally, the operator 𝒟 formally becomes a stochastic differential propagator, generating a quantum stochastic differential equation [In general, under quantum input-output theory, the evolution is governed by a quantum stochastic differential equation (i.e., a Heisenberg-Langevin equation) via x̂ = /ħĤ, x̂ t + 1/2∑_ℓ*L̂_ℓ^†x̂, L̂_ℓ + L̂_ℓ^†, x̂L̂_ℓ t + ∑_ℓ*L̂_ℓ^†, x̂ Ŵ_ℓ + x̂, L̂_ℓ Ŵ_ℓ^†.] 𝒟Â_m = -Ω_m Â_m t - √(2κ_m) Ŵ_m, where Ω_m Ω(mΔ k) - κ_m, and we have introduced input quantum white-noise operators Ŵ_m which obey Ŵ_m(t) = 0, Ŵ_m(t) Ŵ_m'^†(t') = δ_mm'δ(t-t') t, with all other possible products being zero. In the presence of linear loss, (<ref>) represents a generalization of (<ref>) and (<ref>). We therefore need to use (<ref>) to rederive the appropriate equations of motion for Â_m, Â_mÂ_n and Â_m^†Â_n in the Fourier step. As expected, the mean equation is unaffected by the quantum white-noise term by virtue of (<ref>), and it only picks up a field loss: 𝒟Â_m = -Ω_m Â_m t, which generalizes (<ref>) for linear loss. In calculating the equations of motion for the covariances, we note that in general, there is a contribution from (𝒟δÂ_m)(𝒟δÂ_n), as a result of a quantum generalization of Itô's lemma for the stochastic differentials Ŵ_m, which would then require simplification using (<ref>). However, because we are focusing on normal-ordered covariances involving the mode operators, such Itô terms happen to be zero; such terms cannot be neglected, e.g., for quadrature covariances like q̂_mq̂_n, where q̂_m 1/√(2)(Â_m + Â_m^†). In the end, we find that 𝒟Â_mÂ_n = -[]Ω_n + Ω_mÂ_mÂ_n t, 𝒟Â_m^†Â_n = -[]Ω_n - Ω_m^*Â_m^†Â_n t, which generalizes (<ref>) for linear loss. The same calculations can be done for χ^(2) waveguide propagation. § SUPERMODE DECOMPOSITION In many of our examples, we would like to find squeezing supermodes using the Gaussian moments of the fields that we simulate. Here, we describe the construction of the standard covariance matrix Σ written in terms of quadrature, rather than mode, operators, as well as how to use Σ to calculate the squeezing supermodes of the multimode fields. Consider a set of mode operators ĉ_i (1 ≤ i ≤ N) such that ĉ_i, ĉ_j^† = δ_ij. Here, ĉ could consist of, e.g., discretized modes of a continuous field as described in Appendix <ref>, and we note that ĉ_i can also include combinations of fields as well, e.g., ĉ = (â_1, …, â_M, b̂_1, …, b̂_M). We first define q̂_i 1/√(2)[]ĉ_i + ĉ_i^† and p̂_i 1/√(2)[]ĉ_i - ĉ_i^† to be the in-phase (real) and quadrature-phase (imaginary) components of ĉ_i, respectively. 
Then, the standard covariance matrix in quadrature form is defined as Σ_kℓ1/2δẑ_k, δẑ_ℓ where ẑ (q̂_1, …, q̂_N, p̂_1, …, p̂_N), and ẑ_1, ẑ_2ẑ_1 ẑ_2 + ẑ_2 ẑ_1 is the anticommutator. In general, Σ is a 2N × 2N positive-definite matrix on which we can perform a Williamson decomposition Σ = SDS^T where D is a diagonal matrix of the form D = 1/2 + (n̅, n̅) where n̅_i ≥ 0 and S is a symplectic matrix satisfying SΩ S^T = Ω, for Ω[ 0 1_N; -1_N 0 ] the symplectic form. Physically, S represents a set of quantum-limited multimode phase-shifting, mode-mixing, and squeezing operations acting on an N-mode initial thermal state with thermal photon populations n̅_i. We can furthermore perform a Bloch-Messiah (or Euler) decomposition S = O_outΛ O_in where O_out and O_in are orthogonal symplectic matrices and Λ is a diagonal matrix of the form Λ = (-r_1,…,-r_N,+r_1,…,+r_N). (Note that this convention does not list the elements in increasing order.) Physically, the N-mode operation represented by S is being decomposed into a set of (active) single-mode squeezers with squeezing parameters r_i, sandwiched between an input set of N-mode (passive) beamsplitters and phaseshifters represented by O_in and a similar set at the output represented by O_out. Since O_out passively generates multimode squeezing from single-mode squeezing, it is also the matrix that determines the separable supermodes of the Gaussian state. In general, O_out (and O_in) have the general form [ X -Y; Y X ], and U X + Y is a N × N unitary matrix, which defines the supermodes via Ĉ_i ∑_j=1^N U_ijĉ_j. Note that while they are separable, these supermodes are not necessarily uncorrelated; they have a covariance matrix Σ O_out^TΣ O_out = Λ O_in D O_in^TΛ, which is diagonal if D ∝ 1_2N; the off-diagonal elements represent correlations due to multi-(super)mode mixtures of thermal photons. For a pure Gaussian state, D = (1/2), so the first N diagonal elements of Σ give the variance along the squeezed quadrature of each supermode Ĉ_i, respectively, while the last N diagonal elements give the corresponding variance along the antisqueezed quadratures; for Ĉ_i, these variances are 1/2± 2r_i, respectively. § GAUSSIAN THEORY FOR SELF-HETERODYNE DETECTION OF CEO BEAT NOTE In this appendix, we derive expressions for the signal and noise of an f-2f beat note obtained by self-heterodyning a supercontinuum frequency comb consisting of a fundamental-harmonic (FH) and a second-harmonic (SH) envelope, when the state of the field is in a multimode-entangled Gaussian state. In self-heterodyning, we have overlapping frequency components between the FH and SH envelopes which interfere at a photodetector to produce a heterodyne signal. This situation is slightly different from the usual quantum-optical setup for single-mode heterodyne involving a strong local oscillator in a coherent state. Nevertheless, expressions for the signal and noise of the result can be derived using the same photodetection theory. Here, we follow the formalism of Ref. <cit.> to do so. The heterodyne photocharge received after demodulating the photocurrent with an electronic local oscillator at frequency f_h and then integrating for time T_h (assumed larger than the pulse duration), can be expressed as <cit.> *Q_h = ∫_0^T_h v_h(t) G_1(t) t, where v_h(t) cos(2π f_ht) and the first-order correlation function is G_1(t) []D̂^†(t) D̂(t), for some appropriate choice of D̂ such that D̂^†D̂ captures the photon flux at the surface of the detector. 
On the other hand, the noise on that photocharge has a variance given by <cit.> []δ Q_h^2 = ∫_0^T_h v_h^2(t) G_1(t) t + ∫_0^T_h∫_0^T_h v_h(t) v_h(t') G_2(t,t') t t', where the first term represents the shot noise associated to *Q_h and the second term is contributed by the second-order correlation function G_2(t,t') []D̂^†(t) D̂^†(t') D̂(t) D̂(t') - G_1(t)G_1(t'). To develop G_1 and G_2 further, we specialize to a pulsed field generated by a frequency comb with repetition rate f_rep. We assume the field consists of a FH envelope with carrier-envelope offset (CEO) frequency f_ceo < f_rep and an SH envelope with CEO frequency 2f_ceo. The modes of the FH comb, denoted by Â_m, have frequency (m+m_0)f_rep + f_ceo, and the modes of the SH comb, denoted by B̂_q, have frequency (q+2m_0)f_rep + 2f_ceo, where m_0 is a fixed integer denoting the central FH comb line, and we have as usual Â_m, Â_m'^† = δ_mm' and B̂_q, B̂_q'^† = δ_qq'. We also assume that the linewidth of the comb lines are much smaller than both f_ceo and f_rep - f_ceo, so that two spectral modes with optical frequencies separated only by f_ceo are distinguishable, so Â_m, B̂_q^† = 0 for all m,q. Let us move to the rotating frame of mode Â_0 (with the CEO frequency included), so that all mode operators now rotate at m_0 f_rep + f_ceo. In this frame, we can construct a field operator (at the surface of the detector) out of the modes Â_m and B̂_q via d̂(t) = ∑_m Â_m -2π mf_rep t + ∑_q B̂_q -2π(q+m_0)f_rept-2π f_ceo t. However, d̂ does not physically describe a frequency comb and its corresponding pulse train. Rather, the physical scenario described by (<ref>) is a single pulse obeying a periodic boundary; alternatively, we may interpret (<ref>) as a Fourier series decomposition of the first pulse of the train. For example, while the field d̂ has finite energy, a true pulse train would not, even though the flux (evaluated at some reference plane) may be the same. As we will see, the distinction between these two scenarios is important for calculating multi-time correlation functions such as G_2(t,t'). To fix the issue, we note that since the physical pulse train we are considering does not exhibit any interpulse correlations (i.e., each pulse is the same as the next), and the only difference we have to track is the shift in the CEO phase from one pulse to the next. In this case, we can take D̂(t) = √(f_rep)∑_ℓ=-∞^∞Π(f_rep t - ℓ) d̂_ℓ(t), where the rectangle function Π(x) is one if 0 < x < 1 and zero otherwise. By writing d̂_ℓ, we formally mean that we substitute in (<ref>) Â_m ↦Â^(ℓ)_m and B̂_q ↦B̂^(ℓ)_q, which, for different values of the superscript ℓ, are independent modes but distributed identically to our original wavespace modes Â_m and B̂_q. Formally, let Ĉ^(ℓ) be any product of operators from among Â_m^(ℓ), B̂_q^(ℓ) (and their adjoints), so that any expectation value can be written as []Ĉ_1^(ℓ_1)C_2^(ℓ_2)⋯, where ℓ_1 < ℓ_2 < ⋯. Then under our “independent but identically distributed” condition, []Ĉ_1^(ℓ_1)Ĉ_2^(ℓ_2)⋯ = []Ĉ_1[]Ĉ_2⋯ (without superscripts). We also note that we have turned D̂^†D̂ into a flux quantity by normalizing the photons per pulse by the repetition time, which corresponds to assuming a separation between a “fast timescale” on the order of the pulse duration (e.g., ∼100fs), and a “slow timescale” on the order of the repetition time (e.g., ∼1ns). Using this form for D̂(t), we can now calculate that G_1(t) = f_rep∑_l=-∞^∞Π(f_rept - ℓ) []d̂_ℓ^†(t) d̂_ℓ(t) = f_rep∑_ℓ=-∞^∞Π(f_rept - ℓ) g_1(t). 
where, in the first line, we have used Π(x-ℓ)Π(x-r) = Π(x-ℓ)δ_lr, and in the second line used the fact that the expectation value only involves operators from the same pulse index ℓ, which allows us to express the result in terms of the first-order correlation function for a “single pulse”, defined as g_1(t) []d̂^†(t) d̂(t). Using similar arguments, we find that G_2(t,t') = f_rep^2 ∑_ℓ,ℓ'=-∞^∞Π(f_rept - ℓ) Π(f_rept' - ℓ') ×[]d̂_ℓ^†(t) d̂_ℓ'^†(t') d̂_ℓ(t) d̂_ℓ'(t') - G_1(t) G_1(t') = f_rep^2 ∑_ℓ=-∞^∞Π(f_rep t - ℓ) Π(f_rep t' - ℓ) g_2(t,t'), where for the second line we used the fact that the subtraction of G_1(t)G_1(t') eliminates all terms of the sum for which ℓ≠ℓ', allowing us to similarly express the result in terms of the second-order correlation function for a single pulse, defined as g_2(t,t') []d̂^†(t) d̂^†(t') d̂(t) d̂(t') - g_1(t) g_1(t'). It is worth emphasizing the form of (<ref>), in which having two windows tied to the same index ℓ ensures that g_2(t,t') does not contribute to G_2(t,t') when |t-t'| > 1/f_rep. This is essential for enforcing the independence of the modes constituting each individual pulse, despite the fact that we are able to write the result in terms of only the single-pulse modes Â_m and B̂_q due to the quasi-periodicity of the pulse train. At this point, we have finished setting up the model, and we can proceed with the calculation of the heterodyne signal *Q_h and noise []δ Q_h^2. In doing so, we set f_h = f_ceo and take T_h≫ 1/f_ceo > 1/f_rep (but still smaller than the coherence time of each comb line), so that all beat notes with frequency larger than f_ceo wash out and we are left only with the components at the beat note f_ceo, demodulated to DC. We also take T_h to be an integer multiple of the pulse repetition time 1/f_rep, which causes no loss of generality in the limit T_h≫ 1/f_rep. We first calculate *Q_h. Inserting (<ref>) into (<ref>), *Q_h = ∑_ℓ=0^T_hf_rep∫_ℓ^ℓ+1 s cos(ϕ_ceos) g_1(t), where we have changed integration variables to s = tf_rep and defined the CEO phase ϕ_ceo 2π f_ceo/f_rep. The Π(f_rept-ℓ) windows have broken up the integration over t into a sum of integrals over individual pulses. In the limit T_h≫ 1/f_rep, we can evaluate the integrals in (<ref>) at each ℓ, and then neglect all oscillating terms in ℓ that do not scale with T_h f_rep. Furthermore, we can restrict our attention to only those terms in g_1(t) with oscillations at f_ceo, which are ∑_m,q[]Â_m^†B̂_q-2π(q-m+m_0)f_rept-2π f_ceot + c.c. However, we clearly also require m_0 + q - m = 0, which physically means that only interference from FH and SH lines that are next to each other (separated by f_ceo) contribute. For convenience, let us introduce the shorthand q(m) m - m_0. Inserting only the relevant terms of g_1(t) into (<ref>), evaluating the sum of integrals, and applying the limit T_hf_rep≫ 1 we get *Q_h = T_hf_rep∑_m []Â_m^†B̂_q(m). If we now define the steady-state photocurrent to be *I_h*Q_h/T_h, then we exactly get the expression (<ref>) from the main text. The calculation for []δ Q_h^2 is similar. Inserting (<ref>) and (<ref>) into (<ref>), []δ Q_h^2 = ∑_ℓ=0^T_hf_rep∫_ℓ^ℓ+1cos^2(ϕ_ceos) g_1(t) s + ∑_ℓ=0^T_hf_rep∫_ℓ^ℓ+1∫_ℓ^ℓ+1cos(ϕ_ceos)cos(ϕ_ceos') g_2(t,t') s s', where, again, t = s/f_rep and t' = s'/f_rep. This time, the first integral only picks out components of g_1(t) oscillating at DC. These terms are ∑_m *[]Â_m^†Â_m + []B̂_m^†B̂_m. 
The second integral is more involved: the relevant terms of g_2(t,t') are those whose time dependence takes the form e^{-2πi f_ceo(t ± t')}, which consist of the terms ∑_m,m'[⟨Â_m^†Â_m'^†B̂_q(m)B̂_q(m')⟩ - ⟨Â_m^†B̂_q(m)⟩⟨Â_m'^†B̂_q(m')⟩]e^{-2πi f_ceo(t+t')} + c.c. + ∑_m,m'[⟨Â_m^†B̂_q(m')^†Â_m'B̂_q(m)⟩ - ⟨Â_m^†B̂_q(m)⟩⟨Â_m'B̂_q(m')^†⟩]e^{-2πi f_ceo(t-t')} + c.c. We then need to insert all these terms for both g_1(t) and g_2(t,t') into (<ref>), evaluate the sums of integrals, apply the limit T_hf_rep≫ 1, and perform some algebra. The end result is that we can define the variance of the photocurrent to be ⟨δ I_h^2⟩ ≡ ⟨δ Q_h^2⟩/T_h^2, which is given by (<ref>) in the main text. It is interesting to note that the second term depends continuously on ϕ_ceo, while the first term does not (except for some discrete edge cases such as ϕ_ceo = π/2, π, etc., which we neglect). We end this section with a remark on evaluating the fourth-order moments that show up in (<ref>). For a Gaussian state, they can always be simplified into sums over products of second-order moments, using the relationship ⟨x̂ûv̂ŷ⟩ - ⟨x̂ŷ⟩⟨ûv̂⟩ = [⟨x̂⟩⟨û⟩ + ⟨δx̂δû⟩]⟨δv̂δŷ⟩ + ⟨v̂⟩⟨ŷ⟩⟨δx̂δû⟩ + [⟨x̂⟩⟨v̂⟩ + ⟨δx̂δv̂⟩]⟨δûδŷ⟩ + ⟨û⟩⟨ŷ⟩⟨δx̂δv̂⟩.
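Because this factorization reduces every fourth-order moment to means and ordered second-order cumulants, it is straightforward to evaluate numerically. The helper below implements the standard Isserlis/Wick expansion quoted above; the function name and the index-pair convention for storing the ordered cumulants are our own choices for this sketch.

```python
from itertools import combinations

def gaussian_fourth_moment(mu, cov):
    """Fourth-order moment <x1 x2 x3 x4> of (possibly non-commuting) operators in a
    Gaussian state, given the means mu[i] = <xi> and the ordered second-order
    cumulants cov[(i, j)] = <dxi dxj> for i < j (operator order preserved)."""
    total = mu[0] * mu[1] * mu[2] * mu[3]
    # terms with one cumulant and the two remaining means
    for (i, j) in combinations(range(4), 2):
        rest = [k for k in range(4) if k not in (i, j)]
        total += cov[(i, j)] * mu[rest[0]] * mu[rest[1]]
    # fully paired terms: (0,1)(2,3), (0,2)(1,3), (0,3)(1,2)
    for (a, b), (c, d) in [((0, 1), (2, 3)), ((0, 2), (1, 3)), ((0, 3), (1, 2))]:
        total += cov[(a, b)] * cov[(c, d)]
    return total
```

Applied to the operator quadruples appearing in N_1 and N_2, with the means and cumulants taken from a GSSF simulation, this yields the correlated contributions to the beat-note noise.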
§ REFERENCES

[Clerk2010] A. A. Clerk, M. H. Devoret, S. M. Girvin, F. Marquardt, and R. J. Schoelkopf, Rev. Mod. Phys. 82, 1155 (2010).
[Walls1983] D. F. Walls, Nature 306, 141 (1983).
[Hong1987] C. K. Hong, Z. Y. Ou, and L. Mandel, Phys. Rev. Lett. 59, 2044 (1987).
[Gordon1986] J. P. Gordon and H. A. Haus, Opt. Lett. 11, 665 (1986).
[Bao2021] C. Bao et al., Nat. Phys. 17, 462 (2021).
[Giovannetti2011] V. Giovannetti, S. Lloyd, and L. Maccone, Nat. Photon. 5, 222 (2011).
[Pezze2008] L. Pezzé and A. Smerzi, Phys. Rev. Lett. 100, 073601 (2008).
[LIGO2013] The LIGO Scientific Collaboration, Nat. Photon. 7, 613 (2013).
[Ozeki2020] Y. Ozeki, Y. Miyawaki, and Y. Taguchi, J. Opt. Soc. Am. B 37, 3288 (2020).
[Zhong2020] H.-S. Zhong et al., Science (2020).
[Asavarant2019] W. Asavanant et al., Science 366, 373 (2019).
[Zhang2014] Y.-C. Zhang et al., Phys. Rev. A 90, 052325 (2014).
[Arrazola2021] J. M. Arrazola et al., Nature 591, 54 (2021).
[Takeda2019] S. Takeda and A. Furusawa, APL Photonics 4, 060902 (2019).
[Jankowski2021-review] M. Jankowski, J. Mishra, and M. M. Fejer, J. Phys. Photon. 3, 042005 (2021).
[Lu2020] J. Lu, M. Li, C.-L. Zou, A. Al Sayem, and H. X. Tang, Optica 7, 1654 (2020).
[Zhao2022] M. Zhao and K. Fang, Optica 9, 258 (2022).
[Yanagimoto2022-temporal] R. Yanagimoto, E. Ng, M. Jankowski, H. Mabuchi, and R. Hamerly, Optica 9, 1289 (2022).
[Yanagimoto2021_mps] R. Yanagimoto, E. Ng, L. G. Wright, T. Onodera, and H. Mabuchi, Optica 8 (2021).
[Gilchrist1997] A. Gilchrist, C. W. Gardiner, and P. D. Drummond, Phys. Rev. A 55, 3014 (1997).
[Drummond2014] P. D. Drummond and M. Hillery, The Quantum Theory of Nonlinear Optics (Cambridge University Press, 2014).
[Nehra2022] R. Nehra et al., Science 377, 1333 (2022).
[Kashiwazaki2020] T. Kashiwazaki et al., APL Photon. 5, 036104 (2020).
[Vahlbruch2007] H. Vahlbruch, S. Chelkowski, K. Danzmann, and R. Schnabel, New J. Phys. 9, 371 (2007).
[Liu2018] J. Liu et al., Optica 5, 1347 (2018).
[Yanagimoto2021-non-gaussian] R. Yanagimoto et al., Optica 9, 379 (2021).
[Yanagimoto2021-spie] R. Yanagimoto, E. Ng, T. Onodera, and H. Mabuchi, Proc. SPIE 11684, 11684D (2021).
[Weedbrook2012] C. Weedbrook et al., Rev. Mod. Phys. 84, 621 (2012).
[Olivares2012] S. Olivares, Eur. Phys. J. Special Topics 203, 3 (2012).
[Braunstein2005] S. L. Braunstein and P. van Loock, Rev. Mod. Phys. 77, 513 (2005).
[Lugiato1987] L. A. Lugiato and R. Lefever, Phys. Rev. Lett. 58, 2209 (1987).
[Zakharov1972] V. E. Zakharov and A. B. Shabat, Sov. Phys. JETP 34, 62 (1972).
[Meng2021] F. Meng et al., Nat. Commun. 12, 5567 (2021).
[Tlidi2022] M. Tlidi and M. Taki, Adv. Opt. Photonics 14, 87 (2022).
[Kivshar2003] Y. S. Kivshar and G. P. Agrawal, Optical Solitons (Academic Press, 2003).
[Kivshar1998] Y. S. Kivshar and B. Luther-Davies, Phys. Rep. 298, 81 (1998).
[Kippenberg2018] T. J. Kippenberg, A. L. Gaeta, M. Lipson, and M. L. Gorodetsky, Science 361, 567 (2018).
[Grelu2012] P. Grelu and N. Akhmediev, Nat. Photon. 6, 84 (2012).
[Herr2014] T. Herr et al., Nat. Photon. 8, 145 (2014).
[Zhang2017] M. Zhang, C. Wang, R. Cheng, A. Shams-Ansari, and M. Lončar, Optica 4, 1536 (2017).
[Quesada2022] N. Quesada, L. G. Helt, M. Menotti, M. Liscidini, and J. E. Sipe, Adv. Opt. Photon. 14, 291 (2022).
[Hosaka2016] A. Hosaka, T. Kawamori, and F. Kannari, Phys. Rev. A 94, 053833 (2016).
[Haus1990] H. A. Haus and Y. Lai, J. Opt. Soc. Am. B 7, 386 (1990).
[Helt2020] L. G. Helt and N. Quesada, J. Phys. Photonics 2, 035001 (2020).
[Schack1990] R. Schack and A. Schenzle, Phys. Rev. A 41, 3847 (1990).
[Verstraelen2018] W. Verstraelen and M. Wouters, Appl. Sci. 8, 1427 (2018).
[Verstraelen2020] W. Verstraelen, R. Rota, V. Savona, and M. Wouters, Phys. Rev. Research 2, 022037(R) (2020).
[Huang2022] Y.-X. Huang et al., Phys. Rev. A 105, 043707 (2022).
[Navarrete-Benlloch2014] C. Navarrete-Benlloch, E. Roldán, Y. Chang, and T. Shi, Opt. Express 22, 24010 (2014).
[Jankowski2022] M. Jankowski et al., Optica 9, 273 (2022).
[Ledezma2022] L. Ledezma et al., Optica 9, 303 (2022).
[Florez2020] J. Flórez, J. S. Lundeen, and M. V. Chekhova, Opt. Lett. 45, 4264 (2020).
[Xing2022] W. Xing and T. C. Ralph, Phys. Rev. A 107, 023712 (2023).
[Kinsler1993] P. Kinsler, M. Fernée, and P. D. Drummond, Phys. Rev. A 48, 3310 (1993).
[Jankowski2020] M. Jankowski et al., Optica 7, 40 (2020).
[Jankowski2021] M. Jankowski, C. Langrock, B. Desiatov, M. Lončar, and M. M. Fejer, arXiv:2102.12856 [physics.optics].
[Kraemer2018] S. Krämer, D. Plankensteiner, L. Ostermann, and H. Ritsch, Comput. Phys. Commun. 227, 109 (2018).
[Note1] Interestingly, it turns out that in the single-mode case, such higher-order corrections only affect the equation for â, and in fact is unchanged. However, in the general multimode case, higher-order moments enter into the evolution of all non-diagonal moments in general.
[Note2] More generally, for a Gaussian state, ⟨δẑ_1⋯δẑ_n⟩ = ∑_{p ∈ℙ_n}∏_{(i,j) ∈ p}⟨δẑ_iδẑ_j⟩, where ℙ_n denotes the set of all order-preserving pair partitions of {1, …, n}. For example, for n = 4, the elements of ℙ_4 are the 3 pair partitions {(1,2),(3,4)}, {(1,3),(2,4)}, and {(1,4),(2,3)}. Taking these elements in the sum-of-products produces (<ref>).
[Agrawal2019] G. P. Agrawal, Nonlinear Fiber Optics, 6th ed. (Academic Press, 2019).
[Hult2007] J. Hult, J. Light. Technol. 25, 3770 (2007).
[Besard2018] T. Besard, C. Foket, and B. De Sutter, IEEE Trans. Parallel Distrib. Syst. 30, 827 (2019).
[Carter1987] S. J. Carter, P. D. Drummond, M. D. Reid, and R. M. Shelby, Phys. Rev. Lett. 58, 1841 (1987).
[Drummond1987] P. D. Drummond and S. J. Carter, J. Opt. Soc. Am. B 4, 1565 (1987).
[Guidry2022] M. A. Guidry, D. M. Lukin, K. Y. Yang, R. Trivedi, and J. Vučković, Nat. Photon. 16, 52 (2022).
[Guidry2023] M. A. Guidry, D. M. Lukin, K. Y. Yang, and J. Vučković, Optica 10, 694 (2023).
[Wasilewski2006] W. Wasilewski, A. I. Lvovsky, K. Banaszek, and C. Radzewicz, Phys. Rev. A 73, 063819 (2006).
[Lvovsky2007] A. I. Lvovsky, W. Wasilewski, and K. Banaszek, J. Mod. Opt. 54, 721 (2007).
[Gouzien2020] E. Gouzien, S. Tanzilli, V. D'Auria, and G. Patera, Phys. Rev. Lett. 125, 103601 (2020).
[Degenfeld-Schonburg2015] P. Degenfeld-Schonburg, C. Navarrete-Benlloch, and M. J. Hartmann, Phys. Rev. A 91, 053850 (2015).
[Veits1995] O. Veits and M. Fleischhauer, Phys. Rev. A 52, R4344 (1995).
[Dudley2006] J. M. Dudley, G. Genty, and S. Coen, Rev. Mod. Phys. 78, 1135 (2006).
[Jones2000] D. J. Jones et al., Science 288, 635 (2000).
[Helbing2003] F. Helbing, G. Steinmeyer, and U. Keller, IEEE J. Sel. Top. Quantum Electron. 9, 1030 (2003).
[Diddams2000] S. A. Diddams et al., Phys. Rev. Lett. 84, 5102 (2000).
[Holzwarth2000] R. Holzwarth et al., Phys. Rev. Lett. 85, 2264 (2000).
[Corwin2003] K. L. Corwin et al., Phys. Rev. Lett. 90, 113904 (2003).
[Ames2003] J. N. Ames, S. Ghosh, R. S. Windeler, A. L. Gaeta, and S. T. Cundiff, Appl. Phys. B 77, 279 (2003).
[Walschaers2020] M. Walschaers, V. Parigi, and N. Treps, PRX Quantum 1, 020305 (2020).
[Bourassa2021] J. E. Bourassa et al., Quantum 5, 392 (2021).
[Note3] In general, under quantum input-output theory, the evolution is governed by a quantum stochastic differential equation (i.e., a Heisenberg-Langevin equation) via dx̂ = (i/ħ)[Ĥ, x̂] dt + (1/2)∑_ℓ (L̂_ℓ^†[x̂, L̂_ℓ] + [L̂_ℓ^†, x̂]L̂_ℓ) dt + ∑_ℓ ([L̂_ℓ^†, x̂] dŴ_ℓ + [x̂, L̂_ℓ] dŴ_ℓ^†).
[Collett1987] M. Collett, R. Loudon, and C. Gardiner, J. Mod. Opt. 34, 881 (1987).
http://arxiv.org/abs/2307.04652v1
20230710154947
Winding number and circular 4-coloring of signed graphs
[ "Anna Gujgiczer", "Reza Naserasr", "Rohini S", "S Taruni" ]
math.CO
[ "math.CO", "cs.DM" ]
Winding number and circular 4-coloring of signed graphs Anna Gujgiczer, Reza Naserasr, Rohini S, and S Taruni August 12, 2023 ======================================================= Concerning the recent notion of circular chromatic number of signed graphs, for each given integer k we introduce two signed bipartite graphs, each on 2k^2-k+1 vertices, having shortest negative cycle of length 2k and circular chromatic number 4. Each of the constructions can be viewed as a bipartite analogue of the generalized Mycielski graphs on odd cycles, M_ℓ(C_2k+1). In the course of proving our result, we also obtain a simple proof of the fact that M_ℓ(C_2k+1) and some similar quadrangulations of the projective plane have circular chromatic number 4. These proofs have the advantage that they illuminate, in an elementary manner, the strong relation between algebraic topology and graph coloring problems. § INTRODUCTION The problem of building graphs of high girth and high chromatic number is one of the basic questions of graph coloring, and its study has led to many further developments. In particular, the original proof of Erdős for the existence of such graphs has led to the development of probabilistic methods in graph theory. Since then, several constructive methods have been presented, but none are easy to grasp. With a weaker condition of high odd girth instead of high girth, there are several natural classes of graphs. In particular, in the family of the Kneser graphs one can find examples of high odd girth and high chromatic number. The proof of the lower bound for the chromatic number of the Kneser graphs, by L. Lovász <cit.>, was the birthplace of the connection between algebraic topology and graph coloring. Further developing this method, Stiebitz introduced a generalization of the Mycielski construction in <cit.> to build small graphs of high odd girth and high chromatic number. Generalized Mycielski graphs on odd cycles have been studied independently by many authors, and several results on their chromatic number <cit.>, circular chromatic number <cit.>, and various other related parameters <cit.> have been proved. In this work, building on the ideas from several works in the literature, we first present a relatively short proof that the generalized Mycielski graphs on odd cycles have circular chromatic number 4. The proof has the advantage of capturing the connection between algebraic topology and graph coloring with elementary techniques. We then present three similar classes of signed graphs of high negative girth and circular chromatic number 4. The graphs are built similarly to the generalized Mycielski graphs on odd cycles when viewed as quadrangulations of the projective plane, the main difference being that the outer layer induces a Möbius ladder. In Section <ref>, we give the necessary notation and terminology. In Section <ref>, we provide a historical account of what is known. In Section <ref>, we discuss three families of signed graphs, and in Section <ref>, we prove that their circular chromatic number is 4. Finally, we conclude the paper in Section <ref>. § NOTATION We consider simple graphs unless clearly stated otherwise. A signed (simple) graph (G,σ) is a graph G together with an assignment σ of signs to the edges. We denote by (G, -) the signed graph G with all edges negative (and (G, +) accordingly). If G is bipartite, then (G, σ) is called a signed bipartite graph (in some literature, this term is used to refer to a balanced signed graph, that is, a signed graph with no negative cycle).
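Negative cycles, and in particular the length of a shortest one (the negative girth), are the central objects of this paper. As a purely illustrative aside (the function name and the edge-list format are ours, not from the paper), a shortest negative cycle can be found by a breadth-first search in the signed double cover, as in the following Python sketch.

```python
from collections import deque

def negative_girth(vertices, signed_edges):
    # Shortest length of a negative cycle, or None if the signed graph is balanced.
    # Double cover: states (v, s) with s in {+1, -1}; an edge uv of sign sigma
    # joins (u, s) to (v, s * sigma), so reaching (v, -1) from (v, +1) closes a
    # negative closed walk through v; the minimum over all v is attained by a cycle.
    adj = {v: [] for v in vertices}
    for u, v, sigma in signed_edges:
        adj[u].append((v, sigma))
        adj[v].append((u, sigma))
    best = None
    for src in vertices:
        dist = {(src, +1): 0}
        queue = deque([(src, +1)])
        while queue:
            u, s = queue.popleft()
            for v, sigma in adj[u]:
                if (v, s * sigma) not in dist:
                    dist[(v, s * sigma)] = dist[(u, s)] + 1
                    queue.append((v, s * sigma))
        if (src, -1) in dist and (best is None or dist[(src, -1)] < best):
            best = dist[(src, -1)]
    return best
```

Such a check can be used, for instance, to confirm the negative girths claimed for the constructions introduced later.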
The sign of a structure in (G, σ) (such as a cycle, a closed walk, or a path) is the product of the signs of the edges in the said structure, counting multiplicity. Given an integer n, n≥ 3, we denote by C_n the cycle (graph) on n vertices, that is, the 2-regular connected graph on n vertices. Furthermore, we view C_n as a plane graph, that is, the graph together with a planar embedding. For topological use of C_n, one may identify it with the regular polygon on n vertices. Vertices of C_n are normally labeled v_1, v_2, …, v_n. The exact square of C_n, denoted C^#2_n, is the graph on the same set of vertices where two vertices are adjacent if they are at distance (exactly) 2 in C_n. Observe that for odd values of n, C^#2_n is also a cycle of length n. For even values of n, C^#2_n consists of two connected components, each isomorphic to a cycle of length n/2. They are induced on the sets of vertices with odd and even indices and will be denoted, respectively, by C^#2o_n and C^#2e_n. Given a positive real number r, we denote by O_r the (geometric) circle of circumference r, that is, a circle of radius r/(2π). The antipodal of a point x on O_r is the unique point x̄ on O_r which is collinear with x and the center of the circle. Given a real number r, r≥ 2, a circular r-coloring of a signed graph (G, σ) is a mapping ψ of the vertices of G to the points of O_r in such a way that when xy is a negative edge, the distance of ψ(x) from ψ(y) on O_r is at least 1, and when xy is a positive edge, the distance of ψ(x) from the antipodal of ψ(y) is at least 1; equivalently, the distance between ψ(x) and ψ(y) is at most r/2-1. The circular chromatic number of (G, σ), denoted χ_c(G, σ), is the infimum of r such that (G, σ) admits a circular r-coloring. When restricted to signed graphs where all edges are negative, we recover the classic notion of circular coloring of graphs. This extension to signed graphs was first presented in <cit.>, noting that a different but similar parameter under a similar name was introduced in <cit.>. However, compared to <cit.>, the roles of positive and negative edges are exchanged for better suitability with the literature on the structural theory of signed graphs, especially in regard to the minor theory of signed graphs. Among basic results, the following should be noted for the purpose of this work. The infimum in the definition is always attained for finite graphs, even allowing multi-edges and positive loops, but a negative loop cannot be colored with a finite r. For the class of signed bipartite (multi)graphs, we have the trivial upper bound χ_c(G, σ)≤ 4: to see this, map the vertices of one part of G to the north pole of O_4 and the vertices of the other part to the east point. Even with such a strong upper bound, the problem of determining the exact value of the circular chromatic number of a given signed bipartite graph is of high importance and, in general, quite difficult. In particular, as is pointed out in <cit.>, using some basic graph operations, namely indicators, one can transform a graph G into a signed bipartite graph F(G) such that the circular chromatic number of F(G) determines the circular chromatic number of G. A basic example of this sort is the construction S(G), which is obtained from a given graph G by replacing each edge uv of G with a negative 4-cycle ux_uvvy_uv, where x_uv and y_uv are new and distinct vertices. It is then shown in <cit.> that χ_c(S(G)) = 4 - 4/(χ_c(G)+1).
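To make the definition concrete, here is a minimal Python sketch (our own, purely illustrative) that checks whether an assignment of circle points is a circular r-coloring of a signed graph; vertices are mapped to numbers in [0, r), and edge signs are recorded as ±1.

```python
def circ_dist(a, b, r):
    # Distance between two points a, b on the circle O_r of circumference r.
    d = abs(a - b) % r
    return min(d, r - d)

def is_circular_r_coloring(psi, signed_edges, r):
    # psi: dict vertex -> point in [0, r); signed_edges: iterable of (u, v, sign).
    for u, v, sign in signed_edges:
        d = circ_dist(psi[u], psi[v], r)
        if sign < 0 and d < 1:            # negative edge: images at distance >= 1
            return False
        if sign > 0 and d > r / 2 - 1:    # positive edge: images at distance <= r/2 - 1
            return False
    return True
```

For instance, placing one side of a signed bipartite graph at 0 and the other side at 1 on O_4 passes both tests, which is exactly the trivial upper bound χ_c(G, σ) ≤ 4 described above.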
Further connections with some well-known study and theorems, such as the four-color theorem, is discussed in <cit.> and <cit.>. Motivated by these observations and in connection with some other studies, some of which are mentioned in the last section, the question of constructing signed bipartite graphs of high negative girth but circular chromatic number 4 is of high interest. In this work, we present two bipartite analogues of the generalized Mycielski graph on odd cycles as examples of signed bipartite graphs. The proofs also lead to an elementary understanding of the relation between coloring problems of graphs and basic notions of algebraic topology, namely the winding number. Recall that given a closed curve γ on the plane, the winding number of γ, defined rather intuitively, is the number of times γ is winded around the origin in the clockwise direction, noting that: if the origin is not in the part bounded by γ, then the winding number is 0 and that winding in anticlockwise direction is presented by a negative number. Here the closed curves we work with are mappings to O_r with the center of O_r being the center of the plane. They can be thought of as continuous mappings of [0,1] to O_r with the condition that the two endpoints, i.e., 0 and 1 are mapped to the same point. § A HISTORICAL NOTE In 1955 Mycielski introduced the construction <cit.> that is now known as the Mycielski construction. His goal of the construction was to build triangle-free graphs of high chromatic number. In this construction, given a graph G one adds a vertex v' for each vertex v of G, which is joined to all neighbors of v in G and then adds a vertex u which is joined to all vertices v'. It is not difficult to prove that the resulting graph has chromatic number χ(G)+1. Generalization of the construction, where one adds several layers of copy vertices before adding a universal vertex to the last layer, was first considered independently in Habilitation thesis of M. Stiebitz <cit.> and Ph.D. thesis of N. Van Ngoc <cit.>. (The former is written in German, but its result can also be found in <cit.> and the latter is in Hungarian.) Stiebitz applied methods of algebraic topology to prove that if one starts with K_2 and iteratively builds a generalized Mycielski, at each step the chromatic number would increase by 1. This does not hold for every graph, though. For example, the chromatic number of the complement of C_7 is 4, and any generalized Mycielski of it, except the original one, is also of chromatic number 4. It has been shown recently in <cit.> that the result of Stiebitz is equivalent to the Borsuk-Ulam theorem. First English publications of the fact that the generalized Mycielski based on an odd cycle has chromatic number 4 appeared independently in <cit.>. The proof of Payan <cit.> is about the special case of M_k(C_2k+1) as they appear as subgraphs of nonbipartite Cayley graphs on binary groups, but it works the same for any M_ℓ(C_2k+1). This proof has strongly motivated the work presented here. The proof of <cit.> is presented quite differently, but the hidden idea behind the proof is the same. The result of <cit.> is more general. It is shown that if G is not bipartite but admits an embedding on the projective plane where all facial cycles are 4-cycles, then χ (G)=4. That such structures are necessary for 4-chromatic triangle-free projective planar graphs was conjectured in <cit.> and proved in <cit.>. 
The well-known fact that M_ℓ(C_2k+1) quadrangulates the projective plane is evident from our presentation of these graphs in the next section. The circular chromatic number of Mycielski constructions was first studied in <cit.>. That of the generalized Mycielski is studied in <cit.> among others. In particular, that χ_c(M_ℓ(C_2k+1))=4 follows, independently, from the general results of <cit.> and of <cit.>. In the latter, it is shown that if the lower bound of 2k for the chromatic number is proved using topological connectivity, then the same lower bound works for the circular chromatic number as well. § THE CONSTRUCTION The main body of the construction we will work with is an almost quadrangulation of the cylinder, which we define here. Given positive integers ℓ and k, C_ℓ× (2k+1) is the graph whose vertex set is V={v_i,j | 1 ≤ i ≤ℓ, 1≤ j ≤ 2k+1} with the edge set E={ v_i,jv_i+1, j-1, v_i,jv_i+1, j+1 | 1 ≤ i ≤ℓ-1, 1≤ j ≤ 2k+1}. Here, and in the rest of this work, the addition on the indices is taken modulo the maximum value of the said index, which is 2k+1 in this case. We note that, as a graph, C_ℓ× (2k+1) is isomorphic to the categorical product P_ℓ× C_2k+1, but the standard labeling of this product does not fit well with our purpose. A general picture of this graph is depicted in Figure <ref>, where the dashed circles only indicate the layers, but they will play a key role. §.§ M_ℓ(C_2k+1) Given positive integers ℓ and k, the generalized Mycielski graph of the odd cycle C_2k+1, M_ℓ(C_2k+1), is built from C_ℓ× (2k+1) by the following two steps: * Connect v_1,j to v_1,j+k (Figure <ref>, right). * Add a new vertex u and connect it to all vertices v_ℓ,j, j=1,…, 2k+1 (Figure <ref>, left). Observe that the added edges in the first item form an isomorphic copy of C_2k+1. One can easily observe that starting with this cycle, the classic definition of a generalized Mycielski graph results in the same graph. The graph M_1(C_3) is K_4. The graph M_2(C_5) is the well-known Grötzsch graph. Showing that M_2(C_5) is the smallest 4-chromatic triangle-free graph is proposed as an exercise in <cit.>. Furthermore, Chvátal showed in <cit.> that M_2(C_5) is the only 4-chromatic triangle-free graph on 11 vertices. The following is a key property of M_ℓ(C_2k+1). The shortest odd cycle of M_ℓ(C_2k+1) is the minimum of 2k+1 and 2ℓ+1. Since this is a folklore fact, we do not provide a proof, but we note that the main idea to verify it is also presented in the next proposition. §.§ BQ(ℓ,2k-1) Next, given integers ℓ and k satisfying ℓ, k ≥ 2, we define the signed bipartite graph BQ(ℓ,2k-1), also from C_ℓ× (2k-1), as follows. * Edges of C_ℓ× (2k-1) are all negative. * Connect v_1,j to v_2,j+k by a positive edge (Figure <ref>, right). * Add a new vertex u and connect it to each of the vertices v_ℓ,j, j=1,…, 2k-1, with a negative edge (Figure <ref>, left). We view this construction as one of the bipartite analogues of the generalized Mycielski. The second item of the construction, which is presented in Figure <ref> (right), is the main difference from the previously known constructions: while in the construction of M_ℓ(C_2k+1) we add edges between vertices of the first layer, in this new construction we add connections between vertices of the first layer and the second layer. Therefore this operation preserves the bipartition. The underlying graph of the induced subgraph on the first two layers is isomorphic to what is known as the Möbius ladder with 2k-1 steps. We will refer to it as such.
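As a concrete companion to these definitions, the short Python sketch below (names and vertex encodings are ours, for illustration only) generates the edge list of the cylinder and of M_ℓ(C_2k+1) exactly as described above; BQ(ℓ,2k-1) is obtained analogously by declaring all of these edges negative and adding the positive steps between the first two layers.

```python
def cylinder_edges(l, n):
    # C_{l x n}: vertices (i, j) with 1 <= i <= l, 1 <= j <= n; the second index is mod n.
    wrap = lambda j: (j - 1) % n + 1
    edges = []
    for i in range(1, l):
        for j in range(1, n + 1):
            edges.append(((i, j), (i + 1, wrap(j - 1))))   # v_{i,j} v_{i+1,j-1}
            edges.append(((i, j), (i + 1, wrap(j + 1))))   # v_{i,j} v_{i+1,j+1}
    return edges

def generalized_mycielski(l, k):
    # M_l(C_{2k+1}): chords v_{1,j} v_{1,j+k} on the first layer, apex 'u' joined to layer l.
    n = 2 * k + 1
    wrap = lambda j: (j - 1) % n + 1
    edges = cylinder_edges(l, n)
    edges += [((1, j), (1, wrap(j + k))) for j in range(1, n + 1)]
    edges += [("u", (l, j)) for j in range(1, n + 1)]
    return edges
```

With l = 2 and k = 2, this returns the edge list of the 11-vertex, 20-edge Grötzsch graph M_2(C_5) mentioned above.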
The case of BQ(2,3) is (K_3,4, M), depicted in Figure <ref>. It is the signed bipartite graph where each one of the edges of a maximum matching of K_3,4 is assigned a positive sign and all the other edges are assigned a negative sign. The fact that the underlying graph of BQ(ℓ,2k-1) is bipartite is easily observed. The parity of the levels gives a natural bipartition of the graph. We show that, based on the choice of k and ℓ, this signed bipartite graph does not have a short negative cycle. Given integers ℓ and k where ℓ,k≥ 2, the shortest negative cycle of BQ(ℓ,2k-1) is of length min{2ℓ, 2k}. We first present two natural choices for a negative cycle, one of length 2k and another of length 2ℓ. The first is a negative cycle on the first two layers. Take a positive edge and connect its two ends with one of the two paths using only the negative edges that connect the two layers. This would result in a negative cycle of length 2k. The second negative cycle we consider is obtained by taking a positive edge and connecting each of its ends to the vertex u by a shortest path (all edges negative). One of these paths will be of length ℓ and the other will be of length ℓ-1. Together with the first chosen edge itself, they then form a negative cycle of length 2ℓ. It remains to show that the shortest of these two types of cycles gives us the negative girth. To that end, we will first show that a shortest negative cycle can only use one positive edge of BQ(ℓ,2k-1). Towards a contradiction, let C be a negative cycle with more than one positive edge. We aim to present a negative cycle C' whose length is at most |C|-2. We take two positive edges of C that come consecutively in the cyclic order. Assume xy and x'y' are these two edges and that x' is followed by y in the cyclic order of C (that is to say, there is no positive edge in the x'-y path in C). We remove the two positive edges xy and x'y' and the x'-y path connecting them in C, but then we add a copy of this path between x and y'. The result is a closed walk whose sign is the same as that of C, and whose length is |C|-2. But then this closed walk must contain a negative cycle, whose length then is also at most |C|-2, a contradiction. Finally, if C is a cycle that uses exactly one positive edge, say xy, then the x-y path P_xy=C-xy either passes through u, in which case we have at least 2ℓ edges in C, or the natural image of P_xy to the cycle in between the first and second layers also connects x to y. But the shortest such path is of length 2k-1, thus P_xy is of length at least 2k-1, and the negative cycle is of length at least 2k. §.§ BQ(ℓ,2k) The third family of (signed) graphs we consider in this work is built quite similarly to the previous construction. More precisely, given integers ℓ and k satisfying ℓ, k ≥ 2, we define the (signed) graph BQ(ℓ,2k) from C_ℓ× (2k) as follows. * Edges of C_ℓ× (2k) are all negative. * Connect v_1,j to v_1,j+k and v_2,j to v_2,j+k by negative edges (Figure <ref>, right). * Add a new vertex u and connect it to each of the vertices v_ℓ,j, j=1,…, 2k, with a negative edge (Figure <ref>, left). As an example, the (signed) graphs BQ(3,4) and BQ(4,6) are depicted in Figures <ref> and <ref>, respectively. Given integers ℓ and k, where ℓ,k ≥ 2, the shortest negative cycle of BQ(ℓ, 2k) is of length min{2ℓ-1, 2k+1}. A cycle of BQ(ℓ, 2k) which does not contain any step of the Möbius ladder induced by the first two layers is even. That is to say, any odd cycle has at least one step of this Möbius ladder.
A step of the Möbius ladder, together with one of the two paths connecting the end vertices of this step through the first two layers, forms an odd cycle of length 2k+1. Another natural choice for an odd cycle consists of this step together with a shortest path through the universal vertex u connecting the end vertices of this step. This cycle is of length 2ℓ-1. In a similar way as in the proof of Proposition <ref>, one may conclude that one of these two odd cycles of BQ(ℓ,2k) is the shortest. The two constructions BQ(ℓ, 2k-1) and BQ(ℓ, 2k) can be defined uniformly as follows. Starting with an i-star (i=2k-1 or i=2k) on the projective plane, we complete it to a quadrangulation of the planar part, except for the vertices on the outer layer which are at distance ℓ or ℓ-1 from the center of the star, and assign a negative sign to everything so that all facial 4-cycles are positive. We then complete the outer layer to a Möbius ladder, choosing signs for the crossing edges so that all faces are positive 4-cycles but the non-contractible cycles are negative. We view this class of signed graphs as Basic Quadrangulations of the projective plane and thus use the notation BQ(ℓ, i). §.§ BM(ℓ,2k) The last construction we present here, BM(ℓ,2k), is built from C_ℓ× (2k) as follows. Taking all the edges of this graph as negative edges, on the last layer of the cylinder, as in the other cases, we add a (universal) vertex which is joined to all vertices of this layer with negative edges. On the first layer we add a set {u_1, …, u_k} of vertices, and then join each u_i, i=1, …, k, to v_1,i and v_1,i+1 with negative edges and to v_1,i+k and v_1,i+k+1 with positive edges. See Figure <ref> for a depiction. We leave it to the reader to check the following. Given integers ℓ,k ≥ 2, the shortest negative cycle of the signed bipartite graph BM(ℓ,2k) is of length min{2ℓ+2,2k}. In particular, BM(k-1,2k) has 2k^2-k+1 vertices and its shortest negative cycle is of length 2k. § WINDING NUMBER AND COLORING Given a simple closed curve γ on the plane and a continuous mapping φ of γ to O_r, we define the winding number of the pair (γ, φ) to be the winding number of the curve φ(γ), with the center of O_r considered as the center of the plane. Intuitively speaking, the winding number of (γ, φ) tells us how many times the curve γ is wrapped around O_r in the clockwise direction, noting that a negative number reflects an anticlockwise mapping. This value will be denoted by ω(γ, φ). A mapping c of the vertices of the cycle C_n to the points of O_r can be extended to a continuous mapping of C_n to O_r, with the former being viewed as the closed curve or the polygon. There are 2^n natural ways to do this. For each pair v_i,v_i+1 of the vertices of C_n, the pair c(v_i), c(v_i+1) partitions the circle O_r into two parts. The segment of the polygon that represents the edge v_iv_i+1 can be projected onto one of these two parts. We note that c is allowed to map several vertices of C_n to the same point and that even if v_i and v_i+1 are mapped to the same point, in our view, they partition the circle O_r into two parts: a part of length 0 and a part of length r. These 2^n extensions are in one-to-one correspondence with the 2^n possible orientations of C_n: orient the edge v_iv_i+1 in such a way that the mapping follows the clockwise direction of O_r. Given a coloring c of the vertices of the cycle C_n, two extensions of c to a mapping of the polygon to O_r are of special importance.
The first is the extension corresponding to the directed cycle C_n. Here v_iv_i+1 is mapped to the part of the circle where c(v_i+1) follows c(v_i) in the clockwise direction. Let us denote this extension by c^D. A trivial observation here is that the winding number of (C_n, c^D) is never 0. The other natural extension is to choose the shortest of the two parts of the circle determined by c(v_i) and c(v_i+1) and project the line v_iv_i+1 onto it. The orientation corresponding to this extension then depends on whether c(v_i) is the start or the end of this shorter part of the circle with respect to the clockwise orientation. We denote this extension by c^sh and observe that this extension may result in winding number 0 for some choices of c (and r). Given the cycle C_n, a mapping c of its vertices to O_r and an extension φ of c to the polygon, a combinatorial way to compute ω(C_n, φ) is as follows: take an (open) interval I on O_r which does not contain any image of the vertices of C_n. Then in an extension φ of c to a mapping of the polygon to O_r, each edge of C_n either traverses I completely or does not touch any point of it. Now the winding number ω(C_n, φ) is the number of edges that traverse I in the clockwise direction minus the number of edges that traverse it in the anticlockwise direction (and thus independent of the choice of I). Let c be a mapping of the vertices of a cycle C_n to the circle O_r. Consider the continuous mapping (C_n, c^D) and an (open) interval I of O_r which does not contain any point c(v_i). Color the edges of C_n with two colors, say green and orange, as follows: if the image of an edge e under c^D contains I, then color it green, otherwise, color it orange. We are interested in the pairs of consecutive edges v_i-1v_i and v_iv_i+1, which are colored differently. If in such a pair, the first edge is colored green, then in the next pair of this sort (next in the cyclic order of indices), the first edge must be orange and vice versa. Thus, the total number of such pairs is even, that is regardless of the choices of n and c. To use this observation, we will work with certain types of mappings c. We say a mapping c of the vertices of C_n to the points on O_r is far-polar if the followings hold: for each i the pair of the points c(v_i-1) and c(v_i+1) on O_r partitions O_r into two unequal parts and that c(v_i) is on the larger of the two parts. More generally, a mapping ϕ of the vertices of a graph G to the circle O_r is called far-polar if for each vertex x of G there is a diameter D_x which separates ϕ(x) from ϕ(y) for all neighbors y of x. In the following, we present how the condition of c being a far-polar mapping provides a connection between c^D extension of c on C_n and c^sh extension of the mapping c on C_n^#2. Let c be a far-polar mapping of C_n to O_r and let I be an interval of O_r which does not contain any c(v_i). Then in the extension c^sh of a mapping of the one or two cycles in C_n^#2 to O_r, the number of edges v_i-1v_i+1 that does not cross over I is an even number. Consider three consecutive vertices v_i-1, v_i, v_i+1 of the cycle. If, following the c^D extension of C_n, both edges v_i-1v_i and v_iv_i+1 are colored orange, that is to say, in the extension they do not pass through I, then since c is far-polar, c(v_i) must be on the longer of c(v_i-1)c(v_i+1) or c(v_i+1)c(v_i-1) and thus I is on the shorter part. Thus in the extension c^sh of C_n^#2, c(v_i-1)c(v_i+1) passes through I. 
Similarly, if both edges v_i-1v_i and v_iv_i+1 are colored green, then c(v_i-1)c(v_i+1) passes through I. On the other hand, if one of the edges is green and the other orange, then together, they must cover more than half of the O_r. Implying that c(v_i-1)c(v_i+1) does not pass through I. Overall the number of edges of C_n^#2 that do not pass through I in c^sh extension is the number of vertices of C_n incident with both green and orange edges, where the colors are determined by the extension c^D of C_n. The number of such pairs then must be even as it is observed above. We may now observe that, given a real number r, r<4, any circular r-coloring of C_n must be a far-polar coloring. Thus we have the following two consequences depending on the parity of n. Let c be a circular r-coloring of an even cycle C_n. Let c_o (resp. c_e) be its restriction on the vertices with odd (resp. even) indices. Then the winding numbers of (C^#2o, c^sh_o) and (C^#2e, c^sh_e) are of the same parity. That is because after choosing a suitable interval I, by Lemma <ref>, the total number of edges of C^#2 that does not cross over I in the extension c^sh is even. As the total number of edges is also even (that is n), the number of edges of C^#2 that cross over I is also even. However, the winding number of each of (C^#2o, c^sh_o) and (C^#2e, c^sh_e), which is the difference of the number of edges crossing I in the clockwise direction and the number of edges crossing it in the anticlockwise direction, has the same parity as the total number of the edges of the cycle in consideration that cross over I (in the c^sh extension). This proves our claim as the sum of the two winding numbers is an even number. Using this lemma, we can build a cylinder of many layers, as shown in the example of Figure <ref>, with the property that in any circular r-coloring c of the red graph (r<4), all of the dashed grey cycles must have winding numbers of the same parity. Observe that in this construction, the zigzag red cycle between two consecutive layers is an even cycle, and its exact square consists of the two grey cycles presenting the two layers. If we then add structures to the two ends in such a way that one force an odd winding number on one of the grey cycles and the other forces an even winding number on another one of them, then the result would be a graph which admits no circular r-coloring for r<4. A basic method to achieve these conditions is presented next. Given an odd integer n, a positive real number r, and a far-polar mapping c of C_n to O_r, the winding number ω(C^#2_n, c^sh) is an odd number. By Lemma <ref>, the total number of edges of C^#2_n that does not cross over I is even. As n is an odd number, C^#2_n is isomorphic to C_n, and, hence, the number of edges crossing over I is odd. This is the sum of the number of edges crossing over I in the clockwise direction and in the anticlockwise direction. Thus the winding number, which is the difference between these two numbers, is also an odd number. Applying this lemma on circular r-coloring for r<4 we have the following. Given an odd integer n, n=2k+1, a real number r satisfying 2+1/k≤ r <4, and a circular r-coloring c of C_n, the winding number ω(C^#2_n, c^sh) is an odd number. We observe that if r<4 and c is a circular r-coloring of C_n, then it is, in particular, a far-polar mapping of C_n. 
That is because for three consecutive vertices v_i-1, v_i, and v_i+1, having partitioned O_r to two parts based on c(v_i-1) and c(v_i+1), the part that contains c(v_i) must be of length at least 2. As r<4, this must be the larger part. Then the statement follows from the previous lemma. Let G be the star K_1,n with u being the central vertex and A being the independent set of order n. Let c be a circular r-coloring of G with r<4. Then for any cycle C built on A, the winding number of (C, c^sh) is 0. This is observed by taking a small interval I sufficiently close to c(u) and noting that first of all, a vertex of C cannot be mapped to c(u); secondly, since r<4, for any pair x and y of vertices in A in the partition of O_r to two parts by c(x) and c(y), the part containing c(u) is of length at least 2 and thus it is the larger of the two, meaning in the shortest extension, c(x)c(y) will never cross over I. We may now give a new proof of the following theorem. For any positive integers ℓ and k, we have χ_c(M_ℓ(C_2k+1))=4. It is enough to observe that M_ℓ(C_2k+1) is obtained from the l× (2k+1) cylindrical grid of Figure <ref> by adding diagonal edges to the bottom layer (that is connecting pairs at a distance k of the grey cycle) and adding a universal vertex to the top layer (as mentioned in the previous section). As any circular r-coloring with r<4 is also far-polar, any such a coloring would imply an odd winding number for the layers in c^sh extension from one end and an even winding number for the layers from the other end. So a proper mapping to O_r where r<4 is impossible. On the other hand, one can easily color M_ℓ(C_2k+1) with 4 colors, which gives the upper bound 4 on the circular chromatic number as well. Next, we show that BQ(ℓ, 2k-1) shares the same property. We will note later that Theorem <ref> follows from the next theorem. For given positive integers ℓ and k, satisfying l,k ≥ 2, we have χ_c(BQ(ℓ, 2k-1))=4. Towards a contradiction, let c be a circular r-coloring of BQ(ℓ, 2k-1) with r<4. We will have a contradiction if we show that the cycle C' formed on v__1,1v__1,2⋯ v__1,2k-1 in this cyclic order has an odd winding number under the mapping c^sh (restricted on the vertices of this cycle). We emphasize that edges of C' are not in BQ(ℓ, 2k-1). To this end we first consider another cycle, C^⋆, (also not part of our graph) by considering the following sequence of vertices of the first layer of BQ(ℓ, 2k-1): v__1,1v__1,k+1v__1,2v__1,k+2⋯ v__1,k. Note that in this cycle v__1,j is followed by v__1,j+k where the addition is taken 2k-1. We may also note that this is the diagonally drawn cycle on the first layer of Figure <ref> (right). Our claim is that the mapping c, viewed as a mapping of the vertices of C^⋆ to O_r, is a far-polar mapping. Toward proving the claim, we consider c(v__1,j), c(v__1,j+k), and c(v__1,j+1). The first observation is that since v__2,j is adjacent to both v__1,j and v__1,j+1 with negative edges, the points c(v__1,j) and c(v__1,j+1) of O_r partition O_r in such a way that the part containing c(v__2,j) is at least 2. As r<4, it follows that c(v__2,j) is on the larger part of O_r when it is partitioned by c(v__1,j) and c(v__1,j+1). It remains to show that c(v__1,j+k) is also on the same part. If not, that is if c(v__1,j+k) is on the shorter side of O_r, then one of the arcs c(v__1,j+k)c(v__2,j) and c(v__2,j)c(v__1,j+k) contains the shorter side of c(v__1,j)c(v__2,j) and the other contains the shorter side of c(v__1,j+1)c(v__2,j). 
As each of these shorter arcs is of length at least one, we conclude that the distance of c(v_1,j+k) from c(v_2,j) is at least one. However, since c is a circular r-coloring where r<4 and v_1,j+kv_2,j is a positive edge, they should be at distance at most r/2-1<1, a contradiction. Finally, observing that C' is the exact square of C^⋆, and by Lemma <ref>, we conclude that the winding number of C' is odd. To prove that χ_c(BQ(ℓ,2k))=4, we need a few more lemmas. Let c be a far-polar mapping of C_4 to O_r. Then the winding number w(C_4, c^D) is 2. The points c(v_1) and c(v_3) partition O_r into two unequal parts. Since c is a far-polar coloring, we know that c(v_2) and c(v_4) both should be on the larger of these two parts. Without loss of generality, we may assume that the images of the vertices are in the following cyclic order: c(v_1), c(v_3), c(v_2), c(v_4). Let I be an interval of O_r that does not contain any c(v_i). As the winding number is independent of the choice of I, we can choose I to be in c(v_3)c(v_2). Then, following the orientation of C_4 and the clockwise direction of O_r, the arcs c(v_1)c(v_2) and c(v_3)c(v_4) contain the interval I, while the arcs c(v_2)c(v_3) and c(v_4)c(v_1) do not intersect it. Let c be a far-polar mapping of C_2k to O_r and let the edges of C_2k be e_1, e_2, …, e_2k. The number of edges colored green in the extension c^D of c has the same parity as the number of odd (or even) indexed vertices being incident to both green and orange edges. Let J be the set of vertices of C_2k incident to both green and orange edges. Let J_o and J_e be the partition of J into odd and even indexed vertices (the natural bipartition of C_2k). Consider a maximal green path in C_2k. The two ends of each such path are in J. Moreover, if the length of the path is even, then both ends of the path belong to the same subset J_o or J_e of J. Thus each green path of even length contributes 0 to one of J_o or J_e and 2 to the other. If the length of the path is odd, then one of its ends is in J_o and the other is in J_e, thus contributing 1 to each of these two sets. The claim then follows as the odd-length green paths determine the parity of the total number of green edges. Let c be a far-polar coloring of the cycle C_4k. The number of edges colored green in the extension c^D of c and the winding number w(C_4k^#2e, c^sh) (or similarly w(C_4k^#2o, c^sh)) are of the same parity. For each edge v_i-1v_i+1 of C_4k^#2e there is an odd indexed vertex v_i of C_4k corresponding to it. (And similarly, for each edge v_i-1v_i+1 of C_4k^#2o there is an even indexed vertex v_i of C_4k.) As we have observed before, an edge v_i-1v_i+1 in C^#2 does not cross over the interval I in the c^sh extension if and only if the edges incident to v_i are colored differently (i.e., one of v_i-1v_i and v_iv_i+1 is green and the other is orange). So the number of non-crossing edges of C_4k^#2e in the c^sh extension is just the number of odd indexed vertices incident to both green and orange edges in C_4k. From the previous lemma, we know that this number has the same parity as the total number of green edges in the cycle C_4k. As C_4k^#2e is a cycle on 2k vertices, its total number of edges is an even number, so the total number of edges which cross I also has this parity, and so does the difference between the number of edges crossing I in the clockwise direction and in the anticlockwise direction. This completes the proof. We use M_2k to denote the Möbius ladder with 2k steps, that is, the graph built on C_4k by adding an edge between each pair of vertices at distance 2k.
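The parity arguments above ultimately reduce to evaluating winding numbers of the c^D and c^sh extensions, which is easy to do numerically. The following Python sketch is ours and purely illustrative; points on O_r are represented by real numbers modulo r with the clockwise direction taken as increasing, and ties at exactly r/2 in the c^sh extension are not handled.

```python
def winding_number(points, r, extension="sh"):
    # points: images c(v_1), ..., c(v_n) on O_r, listed in the cyclic order of the cycle.
    # "D": every edge follows the clockwise arc; "sh": every edge follows the shorter arc.
    total = 0.0
    n = len(points)
    for i in range(n):
        a, b = points[i] % r, points[(i + 1) % n] % r
        cw = (b - a) % r                       # clockwise arc length from a to b
        total += cw if extension == "D" or cw < r - cw else cw - r
    return round(total / r)

# A circular 2.5-coloring of C_5 (r = 2 + 1/k with k = 2, hence far-polar):
r = 2.5
c = [0.0, 1.0, 2.0, 0.5, 1.5]                  # c(v_1), ..., c(v_5)
square = [c[(2 * i) % 5] for i in range(5)]    # the vertices in the order of C_5^#2
print(winding_number(square, r))               # gives -1: odd, as the parity lemmas predict
```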
In the next lemma, we show that the Möbius ladder M_2k can replace the role of the odd cycle in Lemma <ref>. It can then be used similarly to build families of graphs with circular chromatic number at least 4. For any far-polar mapping c of M_2k to O_r, the winding number w(C_4k^#2e, c^sh) (or similarly w(C_4k^#2o, c^sh)) is odd. By Lemma <ref>, it is enough to prove that the number of green edges of C_4k in the extension c^D of c is odd. We use the notation C_1,2,3,⋯ t for the oriented cycle with vertices v_1,v_2, ⋯ v_t and directed edges v_iv_i+1, indices taken modulo t. We will view M_2k as a union of 2k 4-cycles; see Figure <ref> for reference. Consider all oriented 4-cycles formed by two consecutive steps of the ladder, C_1 : C_1,2,(2k+2),(2k+1), C_2 : C_2,3,(2k+3),(2k+2), ⋯ C_2k : C_2k,(2k+1),1,4k. By Lemma <ref> we know that each C_i has two green edges in the c^D extension. Therefore, in total, the sum of the numbers of their green edges is an even number as well. To prove our claim, we present a different counting of this number. Consider the oriented 4k-cycle C_1,2,3,⋯ 4k; half of its edges (from v_1v_2 to v_2kv_2k+1) agree in orientation with the one in the corresponding C_i, but the other half are oriented in the opposite direction. So if we want to get back the same orientation as in the C_i's, we should switch 2k edges. As changing the orientation of a green edge makes it orange and vice versa, we switch the parity of the number of green edges an even number of times. Now we have to consider the steps of the ladder as well. Except for the edge between v_1 and v_2k+1, every other step v_iv_2k+i is oriented as v_2k+iv_i in C_i and as v_iv_2k+i in C_i-1 (for 1<i ≤ 2k). So they contribute exactly one green edge (in one of their orientations) to the total sum. The edge between v_1 and v_2k+1 is oriented as v_2k+1v_1 in both C_1 and C_2k, contributing 0 or 2 to the total sum. Therefore the contribution of the steps is odd in total. So, in summary, starting with the oriented C_4k, changing the orientation of an even number of its edges, and then adding the steps of the ladder, we should get back the same total number of green edges as we had in the C_i's. Since that is an even number, the oriented C_4k must have an odd number of green edges. We can now state our theorem for BQ(ℓ,2k). For any positive integers ℓ and k, we have χ_c(BQ(ℓ,2k))=4. As in the previous cases, we can consider BQ(ℓ,2k) as a graph obtained from the ℓ× 2k cylindrical grid by adding a universal vertex on the first layer and completing the last two layers into a Möbius ladder. Any circular r-coloring with r < 4 would be a far-polar coloring of this graph, which, by Lemma <ref>, would imply an odd winding number for each of the last two layers in the c^sh extension; but by Observation <ref> the first layer has winding number 0, while by Lemma <ref> all layers have the same parity of winding number, a contradiction. Finally, we use this to prove that BM(ℓ, 2k) also has the same circular chromatic number. For any positive integers ℓ and k, we have χ_c(BM(ℓ,2k))=4. As BM(ℓ, 2k) is a signed bipartite graph, 4 is an upper bound on its circular chromatic number. To prove that it is also a lower bound, we consider BQ(ℓ+2,2k) and first switch at 2k vertices of the Möbius ladder built on the first two layers. These are the vertices labelled v_1,1, v_2,1, v_1,2, v_2,2, …, v_1,k, v_2,k. After this switching, all diagonal edges of the Möbius ladder are positive.
We consider a homomorphic image of this signed graph by identifying the two ends of each diagonal edge of the Möbius ladder. That is, v_1,1 is identified with v_1,k+1, v_2,1 is identified with v_2,k+1, and so on. Then, we can identify each v_1,i with v_3,i+k as well (i ∈{1,2, … k}). It can then be verified that the image is the signed graph obtained from BM(ℓ, 2k) by adding a positive loop to each of the k vertices u_j and to the vertices of the next layer. However, a positive loop does not change the circular chromatic number of a signed graph. Thus we have χ_c(BM(ℓ,2k))≥χ_c(BQ(ℓ+2,2k))=4. We note that by identifying the two ends of each positive edge in BQ(ℓ, 2k+1) we get a copy of M_ℓ(C_2k+1) together with some positive loops. Thus, in a similar fashion, one can view Theorem <ref> as a corollary of Theorem <ref>. § CONCLUDING REMARKS The special subclass of M_k(C_2k+1), on 2k^2+k+1 vertices, is conjectured in <cit.> to have the smallest number of vertices among 4-chromatic graphs of odd-girth 2k+1. In <cit.>, this is verified to be the case with the added assumption that every pair of odd cycles shares a vertex. For the general case, a lower bound of (k-1)^2 for the number of vertices of a 4-critical graph of odd girth 2k+1 is given in <cit.>, modifying the method of <cit.>. A natural bipartite analogue of this question is to find the smallest number of vertices of a signed bipartite graph of negative girth 2k whose circular chromatic number is 4. Here we gave two families of such graphs, where the graphs of negative girth 2k have 2k^2-k+1 vertices. The starting point of this work has been a joint work of the second author with Lan Anh Pham and Zhouningxin Wang on the study of C_-4-critical signed graphs (for a definition, see <cit.>). In an unpublished work, they have shown that a C_-4-critical signed graph of negative girth at least 2k must have at least k^2 vertices. Based on the fact that χ_c(C_-4)=8/3, our result in this work implies that BQ(k, 2k-1) is a signed bipartite graph of negative girth 2k which does not map to C_-4. Thus BQ(k, 2k-1) contains a C_-4-critical signed graph. As BQ(k, 2k-1) has 2k^2-k+1 vertices, this implies that the smallest number of vertices of a C_-4-critical signed graph of negative girth 2k is somewhere between k^2 and 2k^2-k+1. Acknowledgment. This work is supported by the following grants and projects: 1. ANR-France project HOSIGRA (ANR-17-CE40-0022). 2. Indo-French Center of Applied Mathematics, project AGRAHO “Applications of graph homomorphisms” (MA/IFCAM/18/39). 3. Math-AmSud project PLANNING. 4. National Research, Development and Innovation Office (NKFIH) grant K–120706 of NKFIH Hungary. 5. WLI grant (SB22231494MAIITM008570) of IIT Madras, India. The second author would also like to thank Lan Anh Pham and Zhouningxin Wang for earlier discussions on this subject.
http://arxiv.org/abs/2307.04030v1
20230708184619
Adaptive Force-Based Control of Dynamic Legged Locomotion over Uneven Terrain
[ "Mohsen Sombolestan", "Quan Nguyen" ]
cs.RO
[ "cs.RO" ]
Adaptive Force-Based Control of Dynamic Legged Locomotion over Uneven Terrain Mohsen Sombolestan and Quan Nguyen M. Sombolestan and Q. Nguyen are with the Department of Aerospace and Mechanical Engineering, University of Southern California, Los Angeles, CA 90089, email: [email protected], [email protected]. ========================================================================================================================================================================================================================================= Agile-legged robots have proven to be highly effective in navigating and performing tasks in complex and challenging environments, including disaster zones and industrial settings. However, these applications normally require the capability of carrying heavy loads while maintaining dynamic motion. Therefore, this paper presents a novel methodology for incorporating adaptive control into a force-based control system. Recent advancements in the control of quadruped robots show that force control can effectively realize dynamic locomotion over rough terrain. By integrating adaptive control into the force-based controller, our proposed approach can maintain the advantages of the baseline framework while adapting to significant model uncertainties and unknown terrain impact models. Experimental validation was successfully conducted on the Unitree A1 robot. With our approach, the robot can carry heavy loads (up to 50% of its weight) while performing dynamic gaits such as fast trotting and bounding across uneven terrains. Adaptive control, Model predictive control (MPC), Quadruped robots, Unknown impact model. § INTRODUCTION Legged robots have numerous potential uses, from search and rescue operations to autonomous construction. To perform these tasks effectively, it is important for the robot to have an accurate understanding of the environment it will be operating in. However, due to the complexity of the robot and the environment, the model of the robot itself might contain a significant level of uncertainty and affect the robot's stability, particularly when performing agile movements. To overcome these challenges, there is a need for the development of a control framework that can effectively compensate for these uncertainties in real-time. The utilization of convex model predictive control (MPC) with the single rigid body (SRB) model in legged robots <cit.> has greatly enhanced the real-time implementation of diverse walking gaits. Unlike the balance controller based on quadratic programming <cit.>, MPC offers the capability to perform agile motions like jumping <cit.> and high-speed bounding <cit.> for quadruped robots. Additionally, MPC exhibits robustness in traversing rough and uneven terrains. However, it is important to note that MPC assumes perfect knowledge of the dynamic model. To enhance trajectory tracking in the presence of unknown and changing disturbances, researchers have explored the combination of MPC with adaptive control techniques <cit.>. Additionally, parameter estimation techniques have been employed to further improve the robustness of the control system <cit.>. These approaches aim to adapt the controller and estimate system parameters to effectively compensate for uncertainties and disturbances, leading to improved trajectory tracking performance. It is worth noting that all of these studies were conducted using a position-based controller model. 
In this work, we tackle the legged robot locomotion issue in real-world scenarios with a significant level of uncertainty. The uncertainty can come from both the robot model and the environment. Since our proposed method is based on a force controller, it retains the advantage of robustness to uneven terrain. Thanks to MPC as our baseline controller, our framework can be extended to different locomotion gaits and trajectories without adjusting the controller parameters. Additionally, by incorporating the adaptive controller, our control system can handle significant model uncertainty. As a result, our approach enables legged robots to move across different terrains with unknown impact models. §.§ Related Works §.§.§ Offline Learning The offline learner can either leverage a model-based control approach or learn the control system from scratch. Using a model-based method, researchers mainly target learning the dynamic to improve the controller performance <cit.>. One example of this approach is the integration of deep learning with MPC, in which the proposed model tries to learn the cost or dynamic terms of an MPC <cit.>. This hybrid method shows considerable improvement for the aerial robot <cit.> when learning the dynamic model from experimental data. The major limitation of this method is that it is restricted to the dynamic model learned during the training phase. However, the dynamic model is prone to frequent changes in real-world scenarios due to environmental uncertainties and external disturbances. To overcome the limitations of previous approaches, there has been growing interest in utilizing reinforcement learning (RL) to train models from scratch. The key advantage of RL models is their ability to adapt swiftly to changes in real-world environments due to being trained in diverse environments with varying properties. In the case of quadruped robots, an RL model can directly predict appropriate joint torques for traversing different types of terrain, as demonstrated by Chen et al. <cit.>. Additionally, Bellegarda et al. <cit.> enable quadrupeds to run quickly while carrying unknown loads by training the model to learn foot positions. However, these methods heavily rely on domain randomization during training to generalize well to challenging environments. Yang et al. <cit.> also propose an end-to-end RL method that utilizes proprioceptive states and visual feedback to predict environmental changes. §.§.§ Online Learning To address inaccuracies in model-based controllers, researchers have explored an alternative approach using online learning, particularly supervised learning methods <cit.>. In this approach, the focus is on learning disturbances online <cit.>, and in some cases, researchers also aim to learn the dynamics of the system itself <cit.>. Furthermore, this approach has been successfully applied for online calibration of kinematic parameters in legged robots <cit.>. In addition to that, in a recent study, a Lipschitz network method has been developed to bridge the model-reality gap in real-time <cit.>. The online learning method shares a close relationship with adaptive control, and numerous studies have explored the combination of these two approaches <cit.>. This combination aims to leverage the advantages of both methods, allowing for dynamic adaptation and continuous learning from real-time data to improve control system performance. Perhaps closest to our work in terms of online adaption is the learning method presented in <cit.> for legged robots. 
The authors correct the model behind the controller using a supervised learner while the robot is walking in an unknown environment. The data is collected during the robot's operation to learn a linear residual model which can compensate for system errors. However, in the transition from simulation to experiment, the acceleration estimators make noisy data required for training the model. As a result, the method is only applied to estimate the linear terms since the angular terms data proved to be too noisy to be helpful in the model. §.§.§ Adaptive Control The goal of adaptive control is to tune the controller's variables online during deployment <cit.>. Adaptive control has been applied for manipulation tasks to robotic arms <cit.>, mobile robots <cit.>, and quadruped robots <cit.>. The conventional Model Reference Adaptive Control (MRAC) architecture was originally designed for controlling linear systems in the presence of parametric uncertainties <cit.>. However, it lacks the ability to characterize the input/output performance of the system during the transient phase. To address this limitation and improve the transient performance of adaptive controllers, the L_1 adaptive control offers several advantages over traditional MRAC, such as decoupling adaptation and robustness within a control framework <cit.>. In addition, by incorporating a low-pass filter in adaptation law, the L_1 adaptive control can provide stability <cit.> and transient performance <cit.>. Therefore, the L_1 adaptive control technique guarantees robustness with fast adaptation <cit.>, an essential criterion in dynamic robotics applications. Recently, by integrating L_1 adaptive controller and Bayesian learner, researchers leverage the fast adaption performance of the L_1 adaptive controllers and introduce a safe simultaneous control and learning framework <cit.>. For legged robots, the adaptive controller has also been employed to find the value and location of the center of mass <cit.>. Our work on L_1 adaptive control for bipedal robots <cit.> considers a Control Lyapunov Function (CLF)-based controller as a closed-loop nonlinear reference model for the L_1 adaptive controller. It was validated for the robot's walking <cit.> and running <cit.>. However, the control framework in this prior work is based on Hybrid Zero Dynamics <cit.>, which uses joint position control to track the desired trajectory from optimization for each robot joint. Moreover, in <cit.>, an adaptive control based on a CLF is designed for quadrupeds to interact with unknown objects. Then, they combined the criteria derived by adaptive control as a constraint in an MPC framework. However, adding more inequality constraints to MPC makes the controller more complex in terms of computation. In our approach, we compute a residual vector for compensating dynamic uncertainty, which makes the controller more time-efficient. Additionally, by employing our method, the robot is able to adapt to terrains with unknown impact models. §.§ Contributions A preliminary version of this research previously appeared in <cit.>; however, this paper presents several novel contributions to the prior work. This work incorporates the L_1 adaptive controller into the model predictive control (MPC). The proposed control system leverages MPC due to its robustness to uneven terrain, contact constraint, and generalization to different locomotion gaits. Moreover, by integrating adaptive control into MPC, the proposed model can compensate for significant model uncertainty. 
In the previous work <cit.>, the robot could only perform quasi-static walking; in this work, the robot can perform dynamic motions thanks to MPC. Finally, we present new hardware experiments to demonstrate the effectiveness of the proposed adaptive MPC (as illustrated in fig: first fig). The main contributions of the paper are as follows:
* We introduce a novel control system that integrates L_1 adaptive control into a force-based control system, designed to address the challenges posed by model uncertainty in real-world applications.
* Thanks to MPC, our approach offers greater versatility as it can be adapted to a wide range of locomotion gaits and trajectories. Moreover, our method can handle terrain uncertainty, allowing the robot to navigate rough terrains, such as grass and gravel, as well as high-sloped terrain.
* By integrating adaptive control into MPC, quadruped robots can carry an unknown heavy load (up to 50% of the robot's weight) across challenging terrains while executing dynamic gaits such as fast trotting and bounding. This is a significant improvement over our previous work, which only allowed the robot to perform quasi-static walking.
* Using MPC for both the reference model and the real model in the adaptive controller makes the control system computationally expensive, leading to potential delays in computation. To ensure real-time performance, we have developed an update-frequency scheme for the control system that allocates processing resources to each control component in an optimized manner.
* Our proposed approach enables the control system to adapt to terrains with unknown impact models, such as soft terrain. Traversing soft terrain is a challenging task for quadruped robots. Using our method, the A1 robot can walk on double-foam terrain in different directions. In comparison, the robot cannot maintain its balance with the baseline controller, resulting in a collapse.
The remainder of the paper is organized as follows. sec: background presents the baseline control architecture for quadruped robots and provides background on force-based controllers. In sec: control overview, we will briefly present an overview of our control approach. Then, our proposed adaptive force-based controller using the balance controller and MPC will be elaborated in sec: adaptive control and sec: adaptive MPC, respectively. Furthermore, the numerical and experimental validations are shown in sec: Results. Finally, sec: conclusion provides concluding remarks. § PRELIMINARIES In this section, we present the background on the control architecture of quadruped robots and describe each control component. According to <cit.>, the robot's control system consists of several modules, including a high-level controller, a low-level controller, state estimation, and a gait scheduler, as presented in fig: ControlOverview. A reference trajectory is generated for the high-level control from user input and state estimation. The gait scheduler defines the gait timing and sequence to switch between each leg's swing and stance phases. The high-level controller commands the position of the swing legs and the optimal ground reaction forces for the stance legs based on the user commands and gait timing. As the baseline for the stance leg controller, we will use two common approaches: 1) a quadratic program (QP) based balancing controller <cit.> and 2) model predictive control (MPC) <cit.>.
The low-level leg control converts the commands generated by the high-level control into joint torques for each motor. These modules of the control architecture will be described briefly in the following subsections. More details can be found in <cit.>. §.§ Gait Scheduler The A1's gait is defined by a finite state machine using a leg-independent phase variable to schedule contact and swing phases for each leg <cit.>. The gait scheduler utilizes independent boolean variables to define the scheduled contact states s_ϕ∈{1 = contact, 0 = swing} and to switch each leg between the swing and stance phases. Based on the contact schedule, the controller will execute either position control during swing or force control during stance for each leg. In our previous work <cit.>, we focused on load-carrying tasks, where the load is unknown to the robot and the control system. Having more legs on the ground during walking also means that the robot can produce a larger total ground reaction force to support the heavy load. Therefore, we used a quasi-static walking gait to maximize the number of legs on the ground during walking (i.e., 3 stance legs and 1 swing leg throughout the gait). However, in this paper, our framework is not limited to any specific gait. Similar to the baseline MPC control approach <cit.>, our approach can work for different gaits by only changing the gait definition in the gait scheduler. §.§ Desired Trajectory The desired trajectory is generated based on the robot's velocity command. The robot operator commands the xy-velocity and yaw rate; the xy-position and yaw are then determined by integrating the corresponding velocities. The z-position is kept at a constant value of 0.3 m, and the remaining states (roll, roll rate, pitch, pitch rate, and z-velocity) are always zero. §.§ Single Rigid Body (SRB) Model of Robot Due to the complexity of the legged robot, a simplified rigid-body model is used to represent the system's dynamics. This model enables us to calculate the ground reaction forces (GRFs) in real time. A few assumptions are made to obtain the simplified robot dynamics <cit.>: Assumption 1: The robot has low-inertia legs, so their effect is negligible. Assumption 2: For small values of roll (ϕ) and pitch (θ), the rotation matrix R, which transforms from body to world coordinates, can be approximated by the rotation matrix corresponding to the yaw angle (ψ): R≅R_z(ψ) = [[ cos(ψ) -sin(ψ) 0; sin(ψ) cos(ψ) 0; 0 0 1 ]] Therefore, by defining the robot's orientation as a vector of Z-Y-X Euler angles Θ = [ϕ, θ, ψ]^T, the rate of change of the robot's orientation can be approximated as <cit.>: Θ̇≅R_z(ψ) ω_b where ω_b is the robot's angular velocity in the world frame. Assumption 3: For small angular velocity, the following approximation can be made: d/dt(I_G ω_b) = I_G ω̇_b + ω_b × (I_G ω_b) ≈I_G ω̇_b where I_G∈ℝ^3 × 3 is the moment of inertia in the world frame.
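As a quick numerical illustration of how mild Assumption 2 is in the operating range considered here, the following minimal sketch compares the full Z-Y-X rotation matrix with its yaw-only approximation for small roll and pitch. The angle values and tolerances are arbitrary placeholders, not taken from the robot.

import numpy as np

def rot_z(psi):
    c, s = np.cos(psi), np.sin(psi)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_zyx(phi, theta, psi):
    # Full rotation matrix for Z-Y-X Euler angles (yaw, pitch, roll).
    cx, sx = np.cos(phi), np.sin(phi)
    cy, sy = np.cos(theta), np.sin(theta)
    R_x = np.array([[1.0, 0.0, 0.0], [0.0, cx, -sx], [0.0, sx, cx]])
    R_y = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    return rot_z(psi) @ R_y @ R_x

# Illustrative small roll and pitch (rad) and an arbitrary yaw.
phi, theta, psi = 0.05, 0.08, 1.2
R_full = rot_zyx(phi, theta, psi)
R_approx = rot_z(psi)

# The approximation error is small compared with the matrix norm itself.
print(np.linalg.norm(R_full - R_approx, ord='fro'),
      np.linalg.norm(R_full, ord='fro'))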
Based on the above assumptions, the state representation of the system is as follows <cit.>: [[ ṗ_c; Θ̇; p̈_c; ω̇_b ]] = [[ 0_3 0_3 1_3 0_3; 0_3 0_3 0_3 R_z(ψ); 0_3 0_3 0_3 0_3; 0_3 0_3 0_3 0_3 ]]_D∈ℝ^12 × 12 [[ p_c; Θ; ṗ_c; ω_b ]]_X∈ℝ^12 + [[ 0_6 × 12; M^-1A ]]_H∈ℝ^12 × 12 F + [[ 0_6 × 1; G ]] with M = [[ m 1_3 0_3; 0_3 I_G ]] ∈ℝ^6 × 6 A = [[ 1_3 … 1_3; [p_1 - p_c] × … [p_4 - p_c] × ]] ∈ℝ^6 × 12 G = [[ g; 0_3 × 1 ]] ∈ℝ^6 where m is the robot's mass, g∈ℝ^3 is the gravity vector, p_c∈ℝ^3 is the position of the center of mass (COM), p_i∈ℝ^3 (i ∈{1,2,3,4}) are the positions of the feet, p̈_c∈ℝ^3 is the body's linear acceleration, ω̇_b∈ℝ^3 is the angular acceleration, and F = [F_1^T, F_2^T, F_3^T, F_4^T]^T ∈ℝ^12 are the ground reaction forces acting on each of the robot's four feet. The term [p_i - p_c]× is the skew-symmetric matrix such that [p_i - p_c]× F_i = (p_i - p_c) × F_i. Note that p_i and F_i are expressed in the world frame. Therefore, the state representation of the system can be rewritten in the compact form: Ẋ = DX + HF + [[ 0_6 × 1; G ]] §.§ Balance Controller One of the baseline control approaches for calculating GRFs for quadruped robots is the balance controller presented in <cit.>, based on a quadratic program (QP) solver. Based on the assumptions presented in sec: simplified robot dynamic, the approximated dynamic model between the body acceleration and GRFs is as follows: [[ 1_3 … 1_3; [p_1 - p_c] × … [p_4 - p_c] × ]]_A∈ℝ^6 × 12 F = [[ m (p̈_c +g); I_G ω̇_b ]]_b∈ℝ^6 and the vector b in (<ref>) can be rewritten as: b = M ([[ p̈_c; ω̇_b ]] + G). Since the model (<ref>) is linear, the controller can naturally be formulated as the following QP problem <cit.>, which can be solved in real time at 1 kHz: F^* = argmin_F∈ℝ^12 (AF - b_d)^T S (AF - b_d) + γ_1 ‖F‖^2 + γ_2 ‖F - F_prev^*‖^2, s.t. d≤CF≤d̅, F_swing^z=0, where b_d is the desired dynamics. The idea of designing b_d will be elaborated in sec: closed_loop. The cost function in (<ref>) includes terms that address three goals: (1) driving the COM position and orientation to the desired trajectories; (2) minimizing the force commands; and (3) minimizing the change of the current solution F^* with respect to the solution from the previous time step, F^*_prev. The priority of each goal in the cost function is defined by the weight parameters S∈ℝ^6 × 6, γ_1, and γ_2, respectively. The constraints in the QP formulation enforce friction constraints, input saturation, and contact constraints. The constraint d≤CF≤d̅ ensures that the optimized forces lie inside the friction pyramid and the normal forces stay within a feasible range. More details can be found in <cit.>. Besides the friction constraint, we will enforce the force constraints for the swing legs, F_swing=0. The swing legs are then kept in their posed position until they switch to the stance phase. More details on swing leg control are provided in sec: swing leg. §.§ SRB-based Convex MPC The calculation of GRFs in quadruped robots is often approached through Model Predictive Control (MPC) <cit.>. This method determines the optimal sequence of inputs over a finite-time horizon, taking into account any constraints within the dynamic model. Every time MPC is executed in the control system, only the first computed control input from the MPC cycle is applied. The inputs determined over the finite-time horizon are only used for the optimization problem and are not directly applied in the control system. To put the dynamic equation in a convenient state-space form, gravity is added to the state.
The system can then be represented as: Ẋ^c = D^c X^c + H^c F where X^c = [[ p_c; Θ; ṗ_c; ω_b; ||g|| ]] ∈ℝ^13 D^c = [[ 0_3 0_3 1_3 0_3 0_3 × 1; 0_3 0_3 0_3 R_z(ψ) 0_3 × 1; 0_3 0_3 0_3 0_3 g/||g||; 0_3 0_3 0_3 0_3 0_3 × 1; 0_1 × 3 0_1 × 3 0_1 × 3 0_1 × 3 0 ]] ∈ℝ^13 × 13 H^c = [[ 0_6 × 12; M^-1A; 0_1 × 12 ]] ∈ℝ^13 × 12 We consider a linear MPC problem with horizon length k as follows: min_F_i ∑_i=0^k-1 e_i+1^T Q_i e_i+1 + F_i^T R_i F_i, s.t. X^c_i+1 = D_t,i X^c_i + H_t,i F_i, d≤CF_i≤d̅, where F_i are the computed ground reaction forces at time step i, Q_i and R_i are diagonal positive semi-definite matrices, and D_t,i and H_t,i are discrete-time system dynamics matrices. The term e_i+1 is the system state error at time step i+1, defined as e = [e_p, ė_p]^T ∈ℝ^12, with e_p = [[ p_c-p_c,d; log(R_d R^T) ]]∈ℝ^6, ė_p = [[ ṗ_c-ṗ_c,d; ω_b - ω_b,d ]]∈ℝ^6, where p_c,d∈ℝ^3 is the desired position of the COM, ṗ_c,d∈ℝ^3 is the desired linear velocity of the body, and ω_b,d∈ℝ^3 is the desired angular velocity of the body. The desired and actual body orientations are described using rotation matrices R_d∈ℝ^3 × 3 and R∈ℝ^3 × 3, respectively. The orientation error is obtained using the exponential map representation of rotations <cit.>, where log(.):ℝ^3 × 3→ℝ^3 is a mapping from a rotation matrix to the associated rotation vector <cit.>. The constraint d≤CF_i ≤d̅ is equivalent to the constraint in equation (<ref>) at time step i. §.§ Swing Leg Control For the swing legs, the final footstep location for each leg is calculated from the corresponding hip location using a linear combination of the Raibert heuristic <cit.> and a feedback term from the capture point formulation <cit.>. The final footstep locations (p_f,i) are projected on an assumed ground plane and are calculated by: p_f,i = p_h,i + T_c_ϕ/2ṗ_c,d + √(z_0/g)(ṗ_c - ṗ_c,d) where T_c_ϕ is the scheduled stance time, z_0 is the locomotion height, and p_h,i∈ℝ^3 is the position of the corresponding hip i. A Bézier curve defines the desired swing trajectory (including the desired position p_d,i and velocity v_d,i) for each swing leg, starting from the initial lift-off position p_0,i and ending at the final touch-down location p_f,i. §.§ Low-level Control The low-level leg control generates joint torque commands from the high-level commands. For low-level force control, the controller transforms the force vector into the hip frame using the rotation matrix R. Then, joint torques are calculated as follows: τ_stance, i = -J(q_i)^TR^TF_i where J(q_i)∈ℝ^3 × 3 is the leg Jacobian matrix and q_i are the joint angles of the i-th leg. To track the desired swing trajectory for each foot, a PD controller with a feedforward term is used to compute joint torques <cit.>: τ_swing, i = J(q_i)^T[K_p,p(p_d,i - p_i)+K_d,p(v_d,i-v_i)] where p_d,i and v_d,i are the desired foot position and velocity, p_i and v_i are the actual foot position and velocity in the robot's frame, and K_p,p∈ℝ^3 × 3 and K_d,p∈ℝ^3 × 3 are diagonal matrices of proportional and derivative gains. § OVERVIEW OF THE PROPOSED APPROACH This section will present an overview of our novel control architecture to incorporate adaptive control into the force control framework. While our approach is not limited to any specific adaptive control technique, we use L_1 adaptive control <cit.> because it guarantees fast adaptation and smooth control signals. Note that our proposed control system is designed for the stance leg control part in the control architecture of the quadruped robot (see fig: ControlOverview).
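Before introducing the adaptive components, it is useful to recap the baseline force controller in compact form. The following minimal sketch assembles the continuous-time SRB matrices from the preliminaries and solves an unconstrained version of the horizon problem as a least-squares problem. It is only an illustration under simplifying assumptions: the forward-Euler discretization (instead of a zero-order hold), the omission of the friction-cone constraints, and all numerical values (weights, time step, horizon, inertia, foot positions) are placeholders, not the implementation or parameters used on the robot.

import numpy as np

def srb_matrices(psi, mass, I_world, foot_pos, com_pos):
    """Continuous-time SRB matrices D^c (13x13) and H^c (13x12)."""
    c, s = np.cos(psi), np.sin(psi)
    Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    Dc = np.zeros((13, 13))
    Dc[0:3, 6:9] = np.eye(3)        # p_c_dot = linear velocity
    Dc[3:6, 9:12] = Rz              # Theta_dot ~= Rz(psi) * omega_b
    Dc[8, 12] = -1.0                # gravity acts on the z-acceleration row
    M = np.block([[mass * np.eye(3), np.zeros((3, 3))],
                  [np.zeros((3, 3)), I_world]])
    A = np.zeros((6, 12))
    for i, p in enumerate(foot_pos):
        r = p - com_pos
        skew = np.array([[0.0, -r[2], r[1]],
                         [r[2], 0.0, -r[0]],
                         [-r[1], r[0], 0.0]])
        A[0:3, 3*i:3*i+3] = np.eye(3)
        A[3:6, 3*i:3*i+3] = skew
    Hc = np.zeros((13, 12))
    Hc[6:12, :] = np.linalg.solve(M, A)   # M^-1 A
    return Dc, Hc

def grf_mpc_unconstrained(Dc, Hc, x0, x_ref, k=10, dt=0.03, r_scale=1e-5):
    """Condensed, unconstrained GRF optimization over a k-step horizon."""
    n, m = 13, 12
    Ad = np.eye(n) + Dc * dt            # forward-Euler discretization
    Bd = Hc * dt
    Q = np.diag([50, 50, 100, 30, 30, 30, 1, 1, 1, 1, 1, 1, 0])
    R = r_scale * np.eye(m)
    # Stack predictions: x_i = Ad^i x0 + sum_j Ad^(i-1-j) Bd F_j
    Sx = np.vstack([np.linalg.matrix_power(Ad, i + 1) for i in range(k)])
    Su = np.zeros((k * n, k * m))
    for i in range(k):
        for j in range(i + 1):
            Su[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(Ad, i - j) @ Bd
    Qbar = np.kron(np.eye(k), Q)
    Rbar = np.kron(np.eye(k), R)
    Hqp = Su.T @ Qbar @ Su + Rbar
    gqp = Su.T @ Qbar @ (Sx @ x0 - np.tile(x_ref, k))
    F = np.linalg.solve(Hqp, -gqp)
    return F[:m]                        # receding horizon: apply only F_0

# Illustrative usage with placeholder numbers (not the A1's actual parameters).
feet = [np.array([0.18, 0.13, 0.0]), np.array([0.18, -0.13, 0.0]),
        np.array([-0.18, 0.13, 0.0]), np.array([-0.18, -0.13, 0.0])]
Dc, Hc = srb_matrices(psi=0.0, mass=12.0, I_world=np.diag([0.07, 0.26, 0.24]),
                      foot_pos=feet, com_pos=np.array([0.0, 0.0, 0.28]))
x0 = np.zeros(13); x0[2] = 0.28; x0[12] = 9.81      # last entry is ||g||
x_ref = np.zeros(13); x_ref[2] = 0.30; x_ref[12] = 9.81
print(grf_mpc_unconstrained(Dc, Hc, x0, x_ref))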
Our prior work <cit.> introduced an adaptive controller based on Hybrid Zero Dynamics (HZD) <cit.> for bipedal robots. HZD is a common control approach for bipedal robots since it can handle the hybrid and underactuated dynamics associated with this kind of robot. In this paper, however, our approach combines adaptive control with a force control system that calculates ground reaction forces (GRFs) to achieve highly dynamic locomotion for quadrupeds <cit.>. The use of force control in legged robot systems has several key benefits, including increased robustness in the presence of challenging terrains <cit.> and the ability to accommodate a wide range of dynamic movements <cit.>, such as various types of locomotion gaits. By combining force control with adaptive control strategies that compensate for model uncertainty, it is possible to obtain an enhanced control system that retains these advantages. The overview of our proposed adaptive force-based control system is presented in fig: main adaptive structure. By incorporating an L_1 adaptive controller, we aim to design a combined controller. The force-based controller calculates the optimal GRFs for following the desired trajectory. The adaptive controller calculates the residual parameters for compensating the nonlinear model uncertainty θ in the system dynamics. Therefore, the goal is to adjust the adaptive control signal u_a as well as the adaptation law so that the model uncertainty is estimated correctly (θ̂) and the real model follows the reference model. For the reference model, we employ a linear model similar to the one described in (<ref>), and we update the reference model in real time using an ODE solver. Moreover, the uncertainty estimate θ̂ typically has high-frequency content due to the fast estimation in the adaptation law. Thus, we employ a low-pass filter to obtain smooth control signals. We use the same swing leg control to appropriately synchronize the reference and real models. This means that we also use the real model's foot positions for the reference model. In the following sections, we will elaborate on integrating two different force-based controllers, used as baseline controllers, into the adaptive control. First, in sec: adaptive control, we will describe the proposed method using a QP-based balancing controller, as presented in fig: ControlDiagram_QP. Then, in sec: adaptive MPC, we will show how to incorporate MPC into the adaptive controller in detail, as illustrated in fig: ControlDiagram_MPC. § ADAPTIVE FORCE-BASED CONTROL USING THE BALANCE CONTROLLER In this section, we use the balance controller, previously demonstrated in <cit.>, as the force-based controller. In sec: adaptive MPC, we will present our control framework for integrating the L_1 adaptive control into MPC. §.§ Closed-loop Dynamics The L_1 adaptive control is fundamentally designed for trajectory tracking; however, the goal of the balance controller is to compute optimal GRFs. Hence, to integrate the balance controller presented in sec: balance controller into L_1 adaptive control, we need to relate the linear model described in (<ref>) to the closed-loop dynamics. Let us consider the system state error (e) according to equation (<ref>) as the state variable. Therefore, the closed-loop error dynamics in state-space form can be represented as follows: ė = D_l e + Bu, where D_l = [ [ 0_6 1_6; 0_6 0_6 ]]∈ℝ^12 × 12, B = [ [ 0_6; 1_6 ]] ∈ℝ^12 × 6 and u∈ℝ^6 is the control input function.
By employing a PD control law, we have u = [ -K_P -K_D ]e, where K_P ∈ℝ^6 × 6 and K_D ∈ℝ^6 × 6 are diagonal positive definite matrices. According to the definitions of the matrices D_l and B, equation (<ref>) yields: ë_p = [[ p̈_c - p̈_c,d; ω̇_b - ω̇_b,d ]] = u, where ë_p is the derivative of ė_p presented in (<ref>), and p̈_c,d and ω̇_b,d are the desired COM linear acceleration and the desired angular acceleration, respectively. Since the desired trajectory is obtained from the velocity command, both desired accelerations p̈_c,d and ω̇_b,d are zero vectors. Then from (<ref>) and (<ref>), the desired dynamics can be given by: b_d = M (u + G), where M and G are defined in (<ref>). By substituting (<ref>) into the QP problem (<ref>), we can obtain the optimal GRFs as the input for the low-level leg controller. The objective of the QP formulation in equation (<ref>) is to find a solution that ensures the actual dynamics AF match the desired dynamics b_d. In general, the QP-based balance controller is capable of achieving the desired control input function outlined in equation (<ref>), thus keeping the error e within a certain range. However, if the desired dynamics vector b_d violates any of the inequality constraints, such as force limits or friction constraints, the controller may yield an optimal solution F^* that does not completely align with the desired dynamics. With this solution, the resulting dynamics b_d^* and input u^* can be written as: b_d^* = AF^*, u^* = M^-1 b_d^* - G. In the appendix, we show that u^* remains within a bounded range. Note that the optimal ground reaction force F^* serves as the control input for the robot and the variable u^* acts as the input for the closed-loop dynamics. The closed-loop structure for the robot is depicted in fig: ControlDiagram_QP (the green dashed line). §.§ Effects of Uncertainty on the Dynamics If we consider uncertainty in the dynamic equation (<ref>) and assume that the matrices D and H are not accurate, then we need to express the dynamics in terms of the nominal matrices D̅, H̅. The model uncertainty mostly comes from inaccurate values for mass, inertia, and foot position with respect to the center of mass. In addition, different terrains (e.g., rough or soft terrain) impact the robot differently, and this impact is unknown in practical situations. Therefore, terrain uncertainty should also be considered in the dynamic model. In this section, we derive our control equations based solely on the model uncertainty. In sec: terrain, we will elaborate on how our proposed control system can also handle terrain uncertainty. There is another parameter involved in the dynamic equation, namely the yaw angle. This angle is obtained through the state estimation, and we assume that the state estimation has minimal uncertainty. According to the definition of the matrices D and H in (<ref>), the inaccurate values of the dynamic parameters mentioned above are reflected in the H matrix. Therefore, the dynamic equation in the presence of uncertainty can be represented as: Ẋ = DX+ (H̅+H̃) F + [[ 0_6 × 1; G ]] where H̃ represents the uncertainty in the matrix H. It is worth noting that according to the definition of H in equation (<ref>), the first six rows of H consist of zeros.
Thus, we can rephrase the dynamic equation (<ref>) as follows: Ẋ = DX + H̅F +BG + Bθ where θ∈ℝ^6 is the vector of uncertainty for the six corresponding equations and is defined as: θ := B^T H̃F With reference to the state representation given by equation (<ref>), the vector θ can be interpreted as a time-varying disturbance affecting the linear and angular body accelerations. The uncertainty vector θ depends on both time t and F. Since F is obtained through the QP problem (<ref>), it is a function of b_d. Furthermore, b_d is a function of u according to (<ref>). Considering that u is determined by the PD control (<ref>), we can conclude that θ is a function of both the tracking error e and time t. As a result, for any given time t, it is always possible to find α(t)∈ℝ^6 and β(t)∈ℝ^6 satisfying <cit.>: θ(e,t)=α(t)||e||+β(t). §.§ Designing an Adaptive Controller to Compensate for the Uncertainty By incorporating the L_1 adaptive controller, we want to design a combined controller u=u_1+u_2, where u_1 is the control input that follows the desired trajectory for the nominal model as presented in (<ref>) and u_2 compensates for the nonlinear model uncertainty θ. Therefore, the goal is to adjust the control signal u_2 so that the real model can follow the reference model. For the reference model, we employ a linear model similar to (<ref>), in which the nominal matrix M̅ is used instead of M. The diagram of our proposed force-based adaptive control based on a balance controller is presented in fig: ControlDiagram_QP. Rewriting the state-space representation (<ref>) with the combined controller u=u_1+u_2 gives: ė=D_l e+Bu_1 + B (u_2+θ). Note that the uncertainty vector θ in equations (<ref>) and (<ref>) is not the same, since the state vector of equation (<ref>) is X while the state vector of equation (<ref>) is the system error e. The state representation for the reference model can be expressed as follows: ê̇=D_l ê+Bû_1+B (u_2+θ̂), where θ̂=α̂||e||+β̂, and û_1 is defined as: û_1 = [ -K_P -K_D ]ê. To compensate for the estimated uncertainty θ̂, we could simply choose u_2=-θ̂ to obtain ê̇=D_l ê+Bû_1. However, θ̂ typically has high-frequency content due to the fast estimation in the adaptation law. Therefore, we employ a low-pass filter to obtain smooth control signals: u_2=-C(s)θ̂, where C(s) is a second-order low-pass filter with unit DC gain: C(s) = ω_n^2/(s^2 + 2 ζω_n s+ ω_n^2). According to (<ref>), b_d for the real model in the presence of uncertainty takes the following form: b_d = M̅ (u_1 + u_2 + G). Correspondingly, b̂_d for the reference model is: b̂_d = M̅ (û_1 + u_2 + θ̂ + G). The QP solver outlined in equation (<ref>) allows us to obtain the optimal GRFs for the real model. Similarly, the optimal GRFs F̂ for the reference model can be obtained as follows: F̂^* = argmin_F̂∈ℝ^12 (ÂF̂ - b̂_d)^T S (ÂF̂ - b̂_d) + γ_1 ‖F̂‖^2 + γ_2 ‖F̂ - F̂_prev^*‖^2, s.t. d≤CF̂≤d̅, F̂_swing^z=0. Defining the difference between the reference model and the real model as ẽ=ê-e, we then have ẽ̇=D_l ẽ+Bũ_1+B (α̃||e||+β̃), where ũ_1=û_1-u_1, α̃=α̂-α, and β̃=β̂-β. As a result, we estimate θ indirectly through α and β, i.e., through the values α̂ and β̂ computed by the following adaptation laws based on projection operators <cit.>: α̂̇=ΓProj(α̂,y_α), β̂̇=ΓProj(β̂,y_β) where Γ∈ℝ^6 × 6 is a symmetric positive definite matrix.
The projection functions y_α∈ℝ^6 and y_β∈ℝ^6 are: y_α =-B^T Pẽ||e||, y_β =-B^T Pẽ, where P∈ℝ^12 × 12 is a positive definite matrix that is defined according to the stability criteria using the Lyapunov equation. Moreover, the stability proof of the system is provided in the appendix, and a compact discrete-time sketch of this estimate-and-filter loop is given in the next section, before it is combined with MPC. § ADAPTIVE FORCE-BASED CONTROL USING MPC Model predictive control (MPC) has been widely used across various fields, from finance to robotics. One of MPC's main advantages is its ability to handle complex systems with multiple inputs and outputs while considering hard control constraints <cit.>. MPC has also been applied to quadruped robots, providing stable locomotion <cit.>. Thanks to its prediction of the dynamics, MPC can achieve different dynamic locomotion gaits within the same control framework. However, MPC's limitations become evident when dealing with significant uncertainty in the dynamic model. For instance, in the case of a quadruped robot carrying an unknown heavy load, MPC fails to track the desired state trajectory, resulting in unstable behavior and deviation from the desired trajectory, especially with dynamic gaits like bounding. Furthermore, traversing soft terrain, where the impact model is unknown, presents a significant challenge. Our proposed approach can tackle this challenge effectively, and we will discuss the details of how it handles the unknown terrain impact model in sec: terrain. In the previous section sec: adaptive control, we presented an adaptive force-based control framework based on the balance controller. The balance controller relies on a quadratic program (QP) solver, which is simple to put into practice and well-suited for motions that are slow and safe, like standing and quasi-static walking. Additionally, the balance controller is an instantaneous control technique, meaning it does not predict the robot's future movement. As a result, the balance controller proves to be ineffective in fast-paced, highly dynamic scenarios. On the other hand, MPC has shown great potential in handling agile motions, even when it comes to underactuated gaits such as bounding. In this section, we will present a novel control architecture to integrate adaptive control into the MPC framework. With this proposed framework, we can achieve fast and robust locomotion in the presence of uncertainties. This framework can also be extended to accommodate various dynamic gaits, such as trotting and bounding, in legged robots. As we discussed in a previous section, our approach is not restricted to a specific type of adaptive control, but we have chosen to utilize L_1 adaptive control, which has demonstrated advantages over other adaptive control techniques. The first step in integrating L_1 adaptive control and MPC is to understand the importance of a reference model and the challenges in synchronizing the real and reference models. We then present our proposed adaptive MPC, which combines conventional MPC <cit.> with adaptive control. Finally, we address the challenge of real-time computation when two MPCs run in our control system. We will elaborate on how to adjust the update frequency of each control component in an optimized manner to allocate enough computational resources to the critical control parts and achieve real-time computation. §.§ Reference Model Our method aims to design a combined controller based on MPC and L_1 adaptive control such that the real model follows the reference model.
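The projection-based estimation and the low-pass filtering introduced in the previous section carry over unchanged to the MPC setting. As a compact bridge, the following minimal sketch implements one discrete-time update of the adaptation laws and of the second-order filter. The adaptation gain, the projection bound, the filter parameters, and the simple norm-ball projection standing in for the Proj operator are illustrative placeholders, not the operator or values used on the robot.

import numpy as np

def project(x, bound):
    # Simple norm-ball projection standing in for the smooth Proj operator.
    n = np.linalg.norm(x)
    return x if n <= bound else x * (bound / n)

class L1Adaptation:
    """Discrete-time sketch of the adaptation law and low-pass filter."""
    def __init__(self, dt, gamma=1000.0, bound=50.0, omega_n=30.0, zeta=1.0):
        self.dt, self.gamma, self.bound = dt, gamma, bound
        self.omega_n, self.zeta = omega_n, zeta
        self.alpha_hat = np.zeros(6)
        self.beta_hat = np.zeros(6)
        self.u_filt = np.zeros(6)      # filtered compensation signal
        self.u_filt_dot = np.zeros(6)

    def update(self, e, e_tilde, P, B):
        # Adaptation laws: alpha_hat_dot = Gamma * Proj(alpha_hat, y_alpha), etc.
        y_alpha = -B.T @ P @ e_tilde * np.linalg.norm(e)
        y_beta = -B.T @ P @ e_tilde
        self.alpha_hat = project(
            self.alpha_hat + self.dt * self.gamma * y_alpha, self.bound)
        self.beta_hat = project(
            self.beta_hat + self.dt * self.gamma * y_beta, self.bound)
        theta_hat = self.alpha_hat * np.linalg.norm(e) + self.beta_hat
        # Second-order low-pass filter with unit DC gain applied to -theta_hat.
        u_ddot = (self.omega_n**2 * (-theta_hat - self.u_filt)
                  - 2.0 * self.zeta * self.omega_n * self.u_filt_dot)
        self.u_filt_dot += self.dt * u_ddot
        self.u_filt += self.dt * self.u_filt_dot
        return self.u_filt, theta_hat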
In accordance with our previous discussion in sec: L1_adaptive, the combined controller incorporates a control signal u_2 to account for model uncertainty, as indicated in equation (<ref>). In this section, the auxiliary control signal for this purpose is u_a ∈ℝ^6; thus, the uncertain dynamic equation (<ref>) can be rewritten as follows: Ẋ = DX + H̅F + BG + B (u_a + θ). The reference model is similar to the quasi-linear model described in (<ref>), in which the nominal matrix H̅ is used instead of H. The proposed adaptive MPC diagram is presented in fig: ControlDiagram_MPC. We consider a reference model for L_1 adaptive control that arises from MPC. The MPC method is computationally expensive, but it cannot be replaced by simpler control methods, such as the balance controller, when the robot performs dynamic gaits such as bounding. The reason is that in the bounding gait, only the two feet on either the front or the rear side touch the ground at each time step, making it challenging to accurately control the height and pitch angle. The MPC approach balances the error in the height and pitch angle and, based on the predicted dynamics of the system in the future, computes the optimal ground reaction forces. As seen in fig: bounding snapshot, the center of mass (COM) height oscillates around the desired value. Thus, the underactuated nature of certain gaits like bounding necessitates the use of MPC as the control system for the reference model. When implementing MPC for a reference model, one challenge is ensuring that the reference model is synchronized with the real model. This is particularly important when the robot performs a gait with a periodic behavior, such as bounding (see fig: bounding snapshot). In order to correctly compare the real model with the reference model, both should have the same gait schedule. Additionally, the adaptive MPC proposed for legs in the stance phase is independent of the swing leg control. However, the foot position is crucial for calculating the moment of the ground reaction force around the center of mass. Therefore, to maintain consistency between the real and reference models, it is important to ensure that the real robot's foot positions are fed into the reference model, as shown in fig: ControlDiagram_MPC. The reference model can be expressed as follows: X̂̇ = DX̂ + H̅F̂ + BG + B(u_a+θ̂), where θ̂=α̂||e||+β̂. In this case, similar to sec: adaptive control, we use the same second-order low-pass filter as in (<ref>). Therefore, the auxiliary control signal is: u_a=-C(s)θ̂. By defining the difference between the real model and the reference model as X̃=X̂-X, we then have: X̃̇=DX̃+H̅F̃+B(α̃||e||+β̃), where F̃=F̂-F, α̃=α̂-α, β̃=β̂-β. Since the desired trajectory for both the real model and the reference model is the same (X_d = X̂_d), the difference between the real model and the reference model can be rewritten as: X̃ = (X̂ - X̂_d) - (X - X_d) = ê - e = ẽ. Therefore, equation (<ref>) is equivalent to: ẽ̇=Dẽ+H̅F̃+B(α̃||e||+β̃). The adaptation laws and projection functions for computing α̂ and β̂ are the same as in equations (<ref>) and (<ref>), respectively. Moreover, the stability of the control system can be proven using the same logic provided in the appendix. §.§ Adaptive MPC After computing the auxiliary control signal u_a using the adaptive controller presented in the previous subsection, we integrate u_a with the conventional MPC for legged locomotion <cit.> and propose our adaptive MPC framework.
We treat the auxiliary control signal u_a as a residual vector in the system's equation to compensate for dynamic uncertainty. Therefore, u_a is appended to the state vector, and equation (<ref>) can be written as follows: η̇ = D^eη + H̅^eF + B^eθ with the following extended matrices: η = [[ X^c; u_a ]] ∈ℝ^19 D^e = [[ D^c [[ 0_6 × 6; 1_6 × 6; 0_1 × 6 ]]; 0_6 × 13 0_6 × 6 ]] ∈ℝ^19 × 19 H̅^e = [[ H̅^c; 0_6 × 12 ]] ∈ℝ^19 × 12 B^e = [[ B; 0_7 × 6 ]] ∈ℝ^19 × 6 where H̅^c is the nominal value of H^c. The definitions of X^c, D^c, and H^c can be found in (<ref>). Although u_a is considered a part of the state vector in (<ref>), it is just a residual vector for compensating dynamic uncertainty. Therefore, u_a is treated as constant in the state-space equation and over the prediction horizon. To this end, the rows associated with u_a in the matrices D^e and H̅^e are set to zero, which means u̇_a = 0. Note that the value of u_a is updated according to the adaptation law, but it is held constant over the prediction horizon. The state representation in (<ref>) is also convenient for discretization methods such as zero-order hold <cit.> for MPC. Therefore, our adaptive MPC can be designed according to (<ref>) and based on the following discrete-time dynamics: η_i+1 = D^e_t,iη_i + H̅^e_t,iF_i §.§ Real-time Computation The main challenge in executing our proposed adaptive MPC framework is ensuring that the required computation is fast enough to be performed in real time for hardware experiments. If the controller is unable to perform updates at a high frequency, it could result in the robot collapsing during dynamic motion. The control system comprises two MPCs, each with 13 to 19 states predicted over a ten-step horizon. To ensure the robot's balance and allocate sufficient computational resources to each control component, we have devised a scheme, as depicted in fig: ControlDiagram_MPC, to update each control component in an optimized manner. The robot's sensory data is updated in real time at a frequency of 1 kHz. Thus, the reference model should be updated at the same frequency to compare the reference model states (X̂) and the real model states (X) correctly. The yellow dashed line in fig: ControlDiagram_MPC indicates the update frequency for the reference model. We use the odeint package from the Boost C++ libraries <cit.> to solve the ODE associated with the dynamic equation of the reference model. One of the critical components in our proposed framework is the adaptive MPC, which is responsible for computing the ground reaction forces for the robot, as shown in fig: ControlDiagram_MPC. Through our experimentation, we have determined that for robust locomotion with dynamic gaits, the optimal update frequency for the adaptive MPC should be 300 Hz. In contrast, the reference MPC, which plays a supporting role in the control system, is less sensitive and runs at a slower rate of 30 Hz. In addition, there is a two-millisecond offset between the execution of the adaptive MPC and the reference MPC to ensure that sufficient computational resources are allocated to each component. This means that the two MPC frameworks do not run simultaneously in our control system. § ADAPTATION TO UNKNOWN IMPACT MODEL The dynamic formulation presented in sec: adaptive control and sec: adaptive MPC considers the presence of model uncertainty in real-world situations. It is assumed that the terrain is hard enough for the robot to receive the desired forces as ground reaction forces at its feet.
However, this assumption may not hold if the robot walks on soft or elastic terrain with an unknown impact model, which may not generate the desired force needed for stable locomotion. Some previous studies have included terrain knowledge and contact models in their balancing controllers to address the soft terrain challenge, mainly using a spring-damper model to characterize the soft terrain <cit.>. Some control frameworks for adapting to soft terrain in real time have also been developed using iterative learning <cit.> and whole-body control <cit.>, without prior knowledge about the terrain. This section demonstrates that the proposed methods in sec: adaptive control and sec: adaptive MPC can also handle unknown terrain impact models, allowing the robot to maintain stability while walking on soft terrains. Assume the force F computed by the MPC in (<ref>) cannot be realized perfectly when walking on soft terrain. Therefore, equation (<ref>) can be rewritten as follows: Ẋ = DX + H̅ (F_a + F̃_a) + BG + Bθ where F_a is the actual ground reaction force exerted on the robot and F̃_a is the difference between the desired and actual ground reaction forces. Given that F̃_a depends on the tracking error e and time, the uncertainty vector arising from the ground reaction force can be incorporated into θ. Therefore, we can reformulate equation (<ref>) as follows: Ẋ = DX + H̅F_a + BG + B (θ + θ_F), where the uncertainty vector θ_F is defined as: θ_F := B^T H̅F̃_a Equation (<ref>) has the same form as equation (<ref>), with the actual ground reaction force in place of the desired one. Therefore, all the formulations for implementing the adaptive controllers remain valid in situations with an unknown impact model. § RESULTS In this section, we validate our control approach in simulation and hardware experiments on a Unitree A1 robot. All computation for the hardware experiments runs on a single PC (Intel i7-6500U, 2.5 GHz, 64-bit). For simulation, the control system is implemented in ROS Noetic with the Gazebo 11 simulator, which provides a high-fidelity simulation of the A1 robot. A video showcasing the results accompanies this paper[<https://youtu.be/QmwyysdTk1k>]. We set the control parameters for MPC, the adaptation law, and the low-pass filter as presented in Table <ref>. We use one set of parameters for all the experiments with different locomotion gaits, indicating that our approach is easily generalizable. The following subsections present experimental results under model and environment uncertainty (see fig: terrain experiment). In each experiment, the robot starts by using a balance controller to stand up and then switches to the MPC framework for walking or running. §.§ Comparative Analysis In order to evaluate the performance of our proposed adaptive MPC method, we conduct a comparative experiment with the conventional MPC method presented in <cit.>. The objective is to understand the advantages of integrating the adaptive controller into MPC for quadrupedal locomotion. §.§.§ Walking with significant model uncertainty The experiment involves the robot walking and rotating in different directions, using both adaptive and non-adaptive controllers while carrying an unknown load. The results of the experiment show that the adaptive controller provides robust locomotion, with low tracking error, even when carrying an unknown 5 kg load.
On the other hand, the non-adaptive controller results in a considerable error in the COM height, and the robot eventually collapses under the weight of just a 3 kg load. The comparative results for the adaptive and non-adaptive controllers are shown in fig: comparison exp. §.§.§ Walking on soft terrain To evaluate the capability of our proposed control method in handling unknown impact models, we conducted an experiment in which the robot walks on double-layered foam, which represents soft terrain. The performance of both the adaptive and non-adaptive controllers was evaluated and compared. The results are depicted in fig: soft terrain exp, which shows the robot's roll angle. The figure clearly illustrates that the adaptive controller was able to maintain the robot's balance on the soft terrain, while the non-adaptive controller was unable to do so, leading to the collapse of the robot. §.§ Running with Multiple Gaits To demonstrate the advantage of our proposed approach for dynamic gaits, we conducted experiments with the robot running while carrying an unknown load. These experiments were carried out for both the trotting and bounding gaits, with an unknown load of 5 kg and 3 kg, respectively. The results of these experiments are shown in <ref>. It can be seen from the figure that the tracking of the center-of-mass height during the bounding gait is less stable than during the trotting gait, which is due to the inherently underactuated nature of the bounding gait. §.§ Time-varying Load To demonstrate the effectiveness of our proposed adaptive force control in adapting to model uncertainty, we conducted simulations where the robot carries a time-varying load of up to 92% of its weight during walking. As shown in fig: time_varying result, our approach enables the robot to adapt to time-varying uncertainty. In the simulation, the robot starts with an unknown 5 kg load. While the robot's velocity increases, it is subjected to a varying external force in the z-direction that rises to 60 N, which, combined with the initial load, corresponds to an unknown load of roughly 11 kg. These results indicate that our proposed approach effectively handles high levels of model uncertainty. §.§ Terrain Uncertainty To demonstrate the capability of our proposed method to handle terrain uncertainty, we tested the robot navigating various terrains while carrying an unknown 5 kg load. To this end, we conducted walking experiments on multiple rough terrains as well as high-sloped terrain. §.§.§ Rough terrain We tested the robot navigating various rough terrains such as grass and gravel. The robot walks and rotates in multiple directions while carrying an unknown 5 kg load. Some snapshots of the robot walking on diverse rough terrain are presented in fig: terrain experiment. Our approach is based on a force controller and retains the robustness features of the baseline framework, allowing the robot to handle the rough terrain effectively. §.§.§ Sloped terrain To enable the robot to climb sloped terrain without vision, we adjust its orientation to make its body parallel to the walking surface. This is done by using the footstep locations to estimate the slope of the ground. For each leg i, we measure the foot position p_i = (p_x,i, p_y,i, p_z,i) and build the vectors of feet x-positions (p_x), y-positions (p_y), and z-positions (p_z).
Thus, we can model the walking surface as a plane: z(x,y) = a_0 + a_1 x + a_2 y, where the coefficients (a_0, a_1, and a_2) are obtained by solving a least-squares problem using the p_x, p_y, and p_z data (see <cit.> for more details). Note that the desired roll and pitch angles for the robot are modified on the slope according to the following: roll = arctan (a_2) , pitch = arctan(a_1). As a result, the reference model's desired pitch and roll angles must be adjusted to the non-zero values determined as described above. It is important to note that the reference model utilizes the actual foot positions of the robot, so no changes to the reference model's footstep planning are needed when the robot climbs a slope. § CONCLUSION In conclusion, a novel control system has been presented that incorporates adaptive control into force control for legged robots walking under significant uncertainties. We have demonstrated our proposed approach's effectiveness using numerical and experimental validations. The experiments show the successful implementation of the proposed adaptive force control on quadruped robots, allowing them to walk and run while carrying an unknown heavy load on their trunk. The results are remarkable, with the robot being able to carry a load of up to 5 kg (50% of its weight) while keeping the tracking error within a small range and maintaining stability in all directions. The experiments demonstrate that the proposed adaptive force control system can not only adapt to model uncertainty but also leverage the benefits of force control in navigating rough and soft terrain. On the other hand, the baseline non-adaptive controller fails to track the desired trajectory and causes the robot to collapse under uncertainty. § ACKNOWLEDGMENTS The authors would like to thank Yiyu Chen at the Dynamic Robotics and Control Lab (DRCL) for his help in conducting the hardware experiments. §.§ Linear Quadratic Lyapunov Theory According to Lyapunov theory <cit.>, the PD control described in (<ref>) will asymptotically stabilize the system if A_m = [ 0_6 1_6; -K_P -K_D ]∈ℝ^12 × 12 is Hurwitz. We choose a control Lyapunov function candidate V(e) = e^TPe, where P∈ℝ^12 × 12 is the solution of the Lyapunov equation A_m^T P + PA_m = -Q_L and Q_L∈ℝ^12 × 12 is any symmetric positive-definite matrix. We then have: V̇(e,u) + λ V(e) = e^T (D_l^T P + PD_l) e + λ V(e) +2 e^T PBu ≤ 0, where λ = λ_min(Q_L)/λ_max(P) > 0. As a result, the state variable e and the control input u always remain bounded: ‖e‖≤δ_η, ‖u‖≤δ_u. However, the control signal u^* in (<ref>), which we construct by solving the QP problem (<ref>), is not always the same as u. Based on the friction constraints present in equation (<ref>), the value of F^* is always bounded. Moreover, according to their definitions, the matrices A and M and the vector G are also bounded. Thus, it follows that: ‖u^*‖ ≤δ_u^*. Therefore, the difference between u and u^* can be defined as: Δ = u^* - u which is also bounded according to (<ref>) and (<ref>): ‖Δ‖≤δ_Δ. By substituting u^* in (<ref>), we have: V̇(e,u^*) + λ V(e) ≤ 2 e^T PBΔ≤ϵ_V, where ϵ_V = 2 ‖P‖δ_ηδ_Δ. §.§ Stability Analysis Theorem: Consider the system dynamics with uncertainty described by (<ref>), and a reference model described by (<ref>).
Assume the use of an L_1 adaptive controller with the optimal closed-loop control signal given by (<ref>), the adaptive control signal given by (<ref>), and the adaptation laws given by (<ref>). Then, under the aforementioned L_1 adaptive controller, the tracking error between the real model and the reference model, denoted ẽ, as well as the errors between the real and estimated uncertainties, denoted α̃ and β̃, are bounded. Proof: Let us consider the following control Lyapunov candidate function: Ṽ=ẽ^TPẽ+α̃^TΓ^-1α̃+β̃^TΓ^-1β̃. Its time derivative is Ṽ̇=ẽ̇^TPẽ+ẽ^TPẽ̇ + α̃̇^TΓ^-1α̃+α̃^TΓ^-1α̃̇ + β̃̇^TΓ^-1β̃+β̃^TΓ^-1β̃̇, in which we have ẽ̇^TPẽ+ẽ^TPẽ̇ = (D_lẽ+Bũ_1)^TPẽ + ẽ^TP(D_lẽ+Bũ_1) + α̃^TB^T||e||Pẽ+ẽ^TPBα̃||e|| +β̃^TB^TPẽ+ẽ^TPBβ̃. Because ẽ=ê-e satisfies the condition imposed by (<ref>), it implies that: (D_lẽ+Bũ_1)^TPẽ + ẽ^TP(D_lẽ+Bũ_1) ≤ -λẽ^TPẽ + ϵ_Ṽ, where ϵ_Ṽ = 2 ‖P‖δ_ẽδ_Δ̃. Furthermore, with the property of the projection operator <cit.>, we have the following: (α̂-α)^T(Proj(α̂,y_α)-y_α)≤ 0, (β̂-β)^T(Proj(β̂,y_β)-y_β)≤ 0. From (<ref>) and (<ref>), it follows that α̃^TΓ^-1α̃̇≤α̃^Ty_α-α̃^TΓ^-1α̇, β̃^TΓ^-1β̃̇≤β̃^Ty_β-β̃^TΓ^-1β̇. We now substitute (<ref>), (<ref>), and (<ref>) into (<ref>), which results in Ṽ̇ ≤ -λẽ^TPẽ + ϵ_Ṽ + α̃^T(y_α+B^TPẽ||e||)-α̃^TΓ^-1α̇ + (y_α^T+ẽ^TPB||e||)α̃-α̇^TΓ^-1α̃ + β̃^T(y_β+B^TPẽ)-β̃^TΓ^-1β̇ + (y_β^T+ẽ^TPB)β̃-β̇^TΓ^-1β̃ So, by using the chosen projection functions (<ref>), we conclude that: Ṽ̇+λṼ≤ϵ_Ṽ + λα̃^TΓ^-1α̃+ λβ̃^TΓ^-1β̃ -α̃^TΓ^-1α̇ -α̇^TΓ^-1α̃ -β̃^TΓ^-1β̇ -β̇^TΓ^-1β̃. We assume that the uncertainties α, β, and their time derivatives are bounded. Furthermore, the projection operators (<ref>) will also keep α̃ and β̃ bounded (see <cit.> for a detailed proof of these properties). We define these bounds as follows: ||α̃|| ≤α̃_b, ||β̃|| ≤β̃_b, ||α̇|| ≤α̇_b, ||β̇|| ≤β̇_b. Combining this with (<ref>), we have Ṽ̇+λṼ≤λδ_Ṽ, where δ_Ṽ=2||Γ||^-1(α̃_b^2+β̃_b^2+1/λα̃_bα̇_b+1/λβ̃_bβ̇_b) + 1/λϵ_Ṽ. Thus, if Ṽ≥δ_Ṽ, then Ṽ̇≤ 0. As a result, we always have Ṽ≤δ_Ṽ. In other words, by choosing the adaptation gain Γ sufficiently large and P relatively small, we can confine the control Lyapunov function (<ref>) to an arbitrarily small neighborhood δ_Ṽ of the origin. According to (<ref>) and (<ref>), achieving a small value for P depends on choosing proper values for K_P, K_D, and Q_L. Therefore, the values of the PD gains affect the stability of the whole system. Finally, the tracking error between the dynamic model (<ref>) and the reference model (<ref>), ẽ, and the errors between the real and estimated uncertainties, α̃ and β̃, are bounded as follows: ||ẽ|| ≤√(δ_Ṽ/||P||), ||α̃|| ≤√(||Γ||δ_Ṽ), ||β̃|| ≤√(||Γ||δ_Ṽ). Mohsen Sombolestan received his B.Sc. degree in mechanical engineering in 2017 from Sharif University of Technology, Tehran, Iran, and his M.Sc. degree in mechanical engineering in 2020 from Isfahan University of Technology, Isfahan, Iran. He is working toward a Ph.D. in mechanical engineering at the University of Southern California, Los Angeles, CA, USA. His research interests include control system design in robotic applications, especially legged robots, focusing on adaptive control and reinforcement learning. Quan Nguyen is an assistant professor of Aerospace and Mechanical Engineering at the University of Southern California (USC). Before joining USC, he was a Postdoctoral Associate in the Biomimetic Robotics Lab at the Massachusetts Institute of Technology (MIT).
He received his Ph.D. from Carnegie Mellon University (CMU) in 2017 with the Best Dissertation Award. His research interests span different control and optimization approaches for highly dynamic robotics, including nonlinear control, trajectory optimization, real-time optimization-based control, and robust and adaptive control. His work on the MIT Cheetah 3 robot leaping onto a desk was featured widely in many major media channels, including CNN, BBC, NBC, ABC, etc. Nguyen won the Best Presentation of the Session award at the 2016 American Control Conference (ACC) and was a Best System Paper finalist at the 2017 Robotics: Science & Systems Conference (RSS).
http://arxiv.org/abs/2307.06283v1
20230712162821
Tackling Computational Heterogeneity in FL: A Few Theoretical Insights
[ "Adnan Ben Mansour", "Gaia Carenini", "Alexandre Duplessis" ]
cs.LG
[ "cs.LG", "cs.DC" ]
Tackling Computational Heterogeneity in FL: A Few Theoretical Insights Adnan Ben Mansour [email protected] be-ys Research Argonay, 74370, France Gaia Carenini [email protected] ENS - PSL University Paris, 75005, France Alexandre Duplessis [email protected] ENS - PSL University Paris, 75005, France The future of machine learning lies in moving data collection along with training to the edge. Federated Learning, for short FL, has been recently proposed to achieve this goal. The principle of this approach is to aggregate models learned over a large number of distributed clients, i.e., resource-constrained mobile devices that collect data from their environment, to obtain a new more general model. The latter is subsequently redistributed to clients for further training. A key feature that distinguishes federated learning from data-center-based distributed training is the inherent heterogeneity. In this work, we introduce and analyse a novel aggregation framework that allows for formalizing and tackling computational heterogeneity in federated optimization, in terms of both heterogeneous data and local updates. The proposed aggregation algorithms are extensively analyzed from both a theoretical and an experimental perspective. Federated Learning, Model Aggregation, Heterogeneity § INTRODUCTION Until recently, machine learning models were extensively trained in centralized data center settings using powerful computing nodes, fast inter-node communication links, and large centrally-available training datasets. However, with the proliferation of mobile devices that collectively gather a massive amount of relevant data every day, centralization is not always practical [<cit.>]. Therefore, the future of machine learning lies in moving both data collection and model training to the edge to take advantage of the computational power available there, and to minimize the communication cost. Furthermore, in many fields such as medical information processing, public policy, and the design of products or services, the collected datasets are privacy-sensitive. This creates a need to reduce human exposure to data to avoid confidentiality violations due to human failure. This may preclude logging into a data center and performing training there using conventional approaches. In fact, conventional machine learning requires feeding training data into a learning algorithm and revealing information indirectly to the developers. When several data sources are involved, a merging procedure for creating a single dataset is also required, and merging in a privacy-preserving way is still an important open problem [<cit.>]. Recently, <cit.> proposed a distributed data-mining technique for edge devices called Federated Learning (FL), which decouples model training from the need for direct access to the raw data. Formally, FL is a protocol that operates according to Algorithm <ref>, cf. <cit.> for an overview. The framework involves a group of devices called clients and a server that coordinates the learning process. Each client has a local training dataset that is never uploaded to the server.
The goal is to train a global model by aggregating the results of the local training. Parameters fixed by the centralized part of the global learning system include: N clients, the ratio of clients C selected at each round, the set of clients I_t selected at round t, the number of communication rounds T, and the number of local epochs E. A model for a client i at a given instant t is completely defined by its weights w_i^t. At the end of each epoch t ∈{ 0, …, TE -1 }, w_t+1^i denotes the weights of client i ∈ I. For each communication round t ∈{ 0, E, …, (T-1)E }, w_t is the global model held by the server at time t, and w_TE is the final weight vector. In the following, we will use the notations given in Table <ref>. Algorithm <ref> describes the training procedure for FL. The framework involves a fixed set of I = {1,…,N} clients, each with a local dataset. Before every communication round t ∈{0, E, …, (T-1)E }, the server sends the current global model state to the clients and requests them to perform local computations based on the global state and their local dataset, and to send back an update. At the end of each round, the server updates the weights of the model by aggregating the clients' updates, and the process repeats. For the client selection procedure (Create-Client-Set), local training procedure (Client-Update), and aggregation of the local updates (Aggregation), several possibilities exist. For some results concerning client selection, see [<cit.>]. Regarding local updates, available methods range from simple variants of SGD, such as mini-batch SGD [<cit.>], to more sophisticated approaches, such as PAGE [<cit.>]; other results are included in [<cit.>]. We will describe in greater detail the existing routines for aggregation, the central topic of this work. In 2017, the seminal work of <cit.> proposed a plain coordinate-wise mean averaging of model weights; later, <cit.> proposed an extension that takes the invariance of network weights under permutation into account. The same year, <cit.> proposed an auto-tuned communication-efficient secure aggregation. More recently, <cit.> extended the coordinate-wise mean averaging approach, substituting it with a term that amplifies the contribution of the most informative terms over less informative ones. Then, <cit.> adjusted this to enforce closeness of local and global updates. Last year, <cit.> introduced an aggregation that allows clients to select which values of the global model are sent to them. Despite methodological advances, there is neither theoretical nor practical evidence for the right criterion for choosing a particular aggregation strategy. §.§ The Challenges from Computational Heterogeneity in FL As we have seen above, several emerging FL algorithms have been proposed. Due to the high cost of real deployment, existing studies in FL usually involve simulations [<cit.>] and have no data to describe how devices participate in FL [<cit.>]. The direct consequence of this approach is that these studies build on excessively ideal assumptions, for instance that all the devices are constantly available for training and equipped with the same resources, e.g., the same CPU and RAM capacity [<cit.>]. However, these assumptions can be inadequate for FL deployment in practice. FL, in fact, requires a large number of devices to collaboratively accomplish a learning task, which poses a great challenge, namely heterogeneity [<cit.>], that impacts FL both in terms of accuracy and training time.
We can divide heterogeneity into two main macro-classes: system heterogeneity and statistical heterogeneity. In federated settings, system heterogeneity refers to the significant variability in system characteristics across the network, as devices may differ in terms of hardware, network connectivity, and battery power. These system characteristics make issues such as stragglers significantly more prevalent than in typical data center environments. Several solutions to handle system heterogeneity have been proposed, e.g., asynchronous communication, see [<cit.>], active device sampling, see [<cit.>], and fault tolerance, see [<cit.>]. Statistical heterogeneity deals instead with the challenges that arise when training federated models from data that is not identically distributed across devices, both in terms of modeling the data and in terms of analyzing the convergence behavior of associated training procedures. There exists a large body of literature in machine learning that has modeled statistical heterogeneity via methods such as meta-learning and multi-task learning [<cit.>]. Although heterogeneity is associated with several potential problems such as free-riding [<cit.>], theoretical guarantees for the convergence of heterogeneous federated learning have recently been established [<cit.>], and approaches to overcome these challenges have been formalized, e.g., thanks to the introduction of Personalized Federated Learning (PFL) [<cit.>] and of heterogeneous ensemble knowledge transfer [<cit.>]. Several methods have been proposed to attack the heterogeneity arising from specific sources such as data, see [<cit.>], and partial and biased client participation, see [<cit.>]. In what follows, we will discuss how to tackle heterogeneous local-update performance on edge clients, propose new aggregation methods, test them experimentally, and provide insights on their convergence properties, their stability, and client participation during training. § TACKLING PERFORMANCE-HETEROGENEITY IN FL: THE THEORETICAL SIDE We study theoretically how the heterogeneous performances of clients can be exploited in aggregation methods (under reasonable assumptions). The analysis presented is fairly general and allows extracting information about the trade-off between accuracy and efficacy. This analysis can be seen as a remarkable follow-up of <cit.>, the first work presenting a convergence analysis of federated learning with biased client selection that is cognizant of the training progress at each client, and the work in which it was discovered that biasing the client selection towards clients with higher local losses increases the rate of convergence (compared to unbiased client selection). §.§ Framework of Analysis & Preliminaries Throughout the analysis, we assume that all the clients are involved in each local and global iteration, i.e., C=1. We denote with F_i the loss function of the i-th client, and with F the weighted average of the F_i under the distribution P:={p_i | i∈ I}. We restrict our analysis to the case in which the Client-Update procedure is mini-batch SGD with learning rate decay η_t and mini-batches ζ_t^i of cardinality b. In particular: g_i(w_t^i):=1/b∑_ζ∈ζ_t^i∇ F_i(w_t^i,ζ) At each iteration, the weights of the model are updated as follows: w_t+1^i := w_t^i - η_t g_i(w_t^i) if E ∤ t, and w_t+1^i := ∑_j∈ I α_t^j (w_t^j - η_t g_j(w_t^j)) =: w_t+1 if E | t,
where α_t^j is the aggregation coefficient referred to client j at communication round t, and where, for each t, the following constraint holds: ∑_j ∈ Iα_t^j = 1 In our mathematical analysis, we introduce a few assumptions: Assumption 1 (L-smoothness) F_1,…,F_N satisfy: ∀ v,w, F_i(v)≤ F_i(w)+v-w∇ F_i(w)+L/2v-w_2^2 Assumption 2 (μ-convexity) F_1,…,F_N satisfy: ∀ v,w, F_i(v)≥ F_i(w)+ v-w∇ F_i(w) +μ/2v-w_2^2 Assumption 3 The variance of the stochastic gradient descent is bounded, more formally, the following condition is satisfied: ∀ i∈ I, Eg_i(w_i)-∇ F_i(w_i)^2≤σ^2 Assumption 4 The stochastic gradient's expected squared norm is uniformly bounded, in mathematical terms: ∀ i∈ I, Eg_i(w_i)^2≤ G^2 What follows is closely related to what was previously done in <cit.>, the novelty arise from the fact that: (a) instead of analyzing the selection of clients, we examine the attribution of the weights to them, and (b) we extensively study the expression of the learning error from which we derive principled aggregation strategies. To facilitate the convergence analysis, we define the quantity w_t (for which t≠ 0 E) as: w_t+1:=w_t - η_t∑_i∈ Iα_t^i g_i(w_t^i) where α_t^i=p_i. Let w^⋆ be the global optimum of F and w^⋆_i the global optimum of F_i. We define F^⋆ as F(w^⋆), F^⋆_i as F(w^⋆_i) and heterogeneity as: Γ:=F^⋆-∑_i∈ I p_i F_i^⋆ We list below a couple of results useful in proving the main theorem. Let f be a L-smooth function with a unique global minimum at w^⋆. Then : ∀ w,||∇ f(w)||^2≤ 2L(f(w)-f(w^⋆)) With the same notations as above and defining E[.] as the total expectation over all random sources, the expected average discrepancy between w_t and w_t^i is bounded: E[∑_i∈ Iα_t^i w_t-w_t^i^2] ≤ 16η_t^2 E^2 G^2 Before presenting the main results, we define the weighting skew ρ[We observe that ρ(t,w) is not defined when F(w)=∑_i∈ I p_i F_i^⋆. This condition will be always assumed below.] as: ρ(t,w):= ∑_i∈ Iα_t^i (F_i(w)-F_i^⋆)/F(w)-∑_i∈ I p_i F_i^⋆ and introduce these notations: ρ := min_t = 0 E ρ(t,w_t), and ρ := t = 0 Emax ρ(t,w^⋆). §.§ Main Theorem and Consequences In the framework outlined, we state an extension of the main theorem presented in <cit.>, that is adapted to our extended goal. The proofs are available in the appendix <ref>. Under assumptions (1 - 4), the following holds: E[w_t+1-w^⋆^2] ≤(1-η_t μ(1+3/8ρ)) E[w_t-w^⋆^2] + η_t^2 (32 E^2 G^2 + 6 ρ L Γ+ σ^2) +2 η_t Γ(ρ-ρ) From Theorem <ref>, we can directly deduce Corollary <ref> below. Assuming η_t = 1/μ (t+γ) and γ = 4L/μ, the following bound holds: E[F(w_T)] - F^⋆≤1/T+γ𝒱(ρ, ρ) + ℰ(ρ, ρ) where: 𝒱(ρ, ρ) = 4L(32τ^2 G^2 + σ^2)/3μ^2 ρ + 8L^2 Γ/μ^2 + Lγw^0-w^⋆^2/2 ℰ(ρ, ρ) = 8LΓ/3μ(ρ/ρ-1) Corollary <ref> implies that: 𝔼[F(w_T)-F^⋆] = O(1/T) The mathematical expressions 𝒱 and ℰ are estimators for the speed of convergence, and the learning error, respectively. A complex multi-objective optimization problem arises when trying to maximize the speed while minimizing the error. We decouple these two quantities and optimize them separately without underestimating the existing trade-off among them. This procedure allows to outline the global trends, but it does not imply the universal optimality of the strategies defined below. 
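To make the trade-off just described concrete, the following toy computation evaluates the two terms of Corollary <ref> for invented constants. It is purely illustrative: all numerical values are hypothetical, rho_min and rho_max stand for the two weighting-skew quantities defined above (our reading of the notation), and τ is taken equal to the number of local epochs E.

```python
# Purely illustrative evaluation of E[F(w_T)] - F* <= V/(T+gamma) + E_err from Corollary 1,
# with invented constants; only meant to visualize the speed (V) vs. error (E_err) trade-off.
mu, L, G, sigma, Gamma = 1.0, 4.0, 1.0, 1.0, 0.5     # hypothetical problem constants
tau = 5                                              # assumed equal to the number of local epochs E
gamma = 4 * L / mu
w0_dist2 = 1.0                                       # hypothetical ||w^0 - w*||^2

def corollary_bound(T, rho_min, rho_max):
    V = (4 * L * (32 * tau**2 * G**2 + sigma**2)) / (3 * mu**2 * rho_min) \
        + 8 * L**2 * Gamma / mu**2 + L * gamma * w0_dist2 / 2
    E_err = 8 * L * Gamma / (3 * mu) * (rho_max / rho_min - 1)
    return V / (T + gamma), E_err

for rho_min, rho_max in [(1.0, 1.0), (4.0, 8.0)]:    # balanced vs. skewed weighting
    speed_term, error_term = corollary_bound(T=200, rho_min=rho_min, rho_max=rho_max)
    print(f"rho_min={rho_min}, rho_max={rho_max}: speed term {speed_term:.3f}, error term {error_term:.3f}")
```

With balanced weighting the error term vanishes, while a more skewed weighting shrinks the speed term at the price of a nonzero error, which is exactly the tension discussed above.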
Since 8L^2 Γ/μ^2 + Lγw^0-w^⋆^2/2 is a constant depending only on the data and the initial guess, and ρ may be arbitrary large, we can deduce from Corollary <ref> the existence of a minimal value for the convergence speed, given by: 𝒱_min := 8L^2 Γ/μ^2 + Lγw^0-w^⋆^2/2 In this framework, we can analyze all the possible scenarios, starting from the one in which Γ = 0, that can be appointed as error-free case and corresponds to an IID-dataset. §.§ Error-free Framework Under the assumption that Γ = 0, the main theorem can be leveraged as follows: E[w_t+1-w^⋆^2] ≤(1-η_t μ(1+3/8ρ)) E[w_t-w^⋆^2] + η_t^2 (32 E^2 G^2 + σ^2) and applying Corollary <ref>, we derive the following inequality: E[F(w_T)]-F^⋆≤1/T+γ[4L(32τ^2 G^2 + σ^2)/3μ^2 ρ + Lγw^0-w^⋆^2/2] Despite its simplicity, this setting is interesting since the error term vanishes, and therefore we can deduce a truly optimal algorithm given by the maximization of ρ: [ρ is well defined as long as F(w_t)≠ F(w^⋆) for all t, which is a reasonable assumption.], achieved when: α_t^i = {[ 1/|J_t| i∈ J_t; ; 0 ]. where J_t= i∈ I (F_i(w_t)-F_i^⋆). §.§ General Framework In the general case, both 𝒱 and ℰ depend on the choice of the α_i^t. As already noticed before, this raises a multi-objective problem that doesn't allow for a joint optimization of terms 𝒱 and ℰ. Consequently, we provide an approximated optimization that builds upon the existing trade-off between the convergence speed and the accuracy[ It is important to notice that the bounds for 𝒱 and ℰ are not tight. Consequently we cannot guarantee the unconditional optimality of the strategies proposed.]. We observe that optimizing the convergence speed, while "forgetting" about the error, amounts to maximize ρ, exactly as done in the error-free case. Instead, minimizing ℰ(ρ, ρ) neglecting 𝒱, amounts to minimize ρ/ρ-1. This is achieved when α_t^i = p_i, which gives ℰ=0. Now, knowing that α_t^i=p_i ensures obtaining optimal accuracy, we assume α_t^i=κ_t^i p_i. The following notation is used: π_t= i∈ Imin κ_t^i, Π_t = i∈ Imax κ_t^i, π = tmin π_t, and Π = tmax Π^t Without loss of generality, we assume without that ∀ t, π_t > 0. If it were not the case, we would have assigned to the α_t^i equal to zero an infinitesimal value, and increment the other α_t^i substantially. Under these assumptions, we have that ρ/ρ≤Π/π, 1/ρ≤1/π and therefore: E[F(w_T)]-F^⋆≤1/T+γ[C + λ_1/π] + λ_2 Π - π/π where C, λ_1 and λ_2 are constants. Since, Πmin p_i ≤max κ_t^i p_i ≤ 1 -(N-1) min κ_t^i p_i≤ 1 - (N-1)πmin p_i we can infer that Π≤1-(N-1)πmin p_i/min  p_i and ℰ≤1/πmin p_i-N, from which, we obtain: E[F(w_T)]-F^⋆≤1/T+γ[C + λ_1/π] + λ_2 (1/πmin p_i-N) This last inequality has an intrinsic interest; in fact, it allows to state that the new speed and error bounds depend exclusively on π and to ensure a bound on the error term (once set a properly chosen minimal value of the α_t^i). §.§ Derived Aggregation Strategies The theoretical results discussed above provides several important insights for the design of aggregation algorithms. The first algorithm presented is the generalized FedAvg, that corresponds to take α_t^i = p_i for any t and i∈ I. This strategy is inspired by <cit.> and it boils down to consider the weighted average (upon p_i) of the local models as global model. As observed above, this approach is optimal in terms of accuracy (since ℰ=0) and its convergence speed can be bounded as below: 𝒱 = 𝒱_min + 4L-32τ^2 G^2 + σ^2/3μ^2 The second algorithm proposed is called FedMax and it is defined as follows. 
For any t: α_t^i = {[ 1/|J_t| if i∈ J_t; 0 otherwise ]. where: J_t = argmax_i∈ I (F_i(w_t)-F_i^⋆) Note that in practice two distinct clients virtually never attain the same value, i.e., |J_t|=1. This strategy is our original algorithmic contribution, and it consists of taking as the global model the local model of the client with the worst performance at the end of the previous communication round. This approach partially leverages the differences among the values of the loss functions of the different clients and, as observed above, this strategy gives an optimal bound on the convergence speed. To improve the performance in real-world applications and to avoid over-training on outliers, we introduce a couple of variants of the previous algorithm, namely FedMax(k) and FedSoftMax. FedMax(k), instead of taking the client with the highest loss, considers the first k clients when sorted in decreasing order of F_i(w_t)-F_i^⋆. This strategy boils down to the client selection strategy Power-of-Choice, introduced in <cit.>. In FedSoftMax, for any t and i∈ I, we take α_t^i proportional to p_i exp (T^-1 (F_i(w_t) - F^⋆_i)), re-normalized so that the coefficients sum to one, i.e., a softened version of the original routine; here T denotes a temperature parameter. This method is introduced to reinforce the stability of FedMax, but it also has the theoretical advantage of ensuring nonzero values of the α_t^i. Note that, for this method, we can obtain an upper bound on the error by applying inequality <ref>. § TACKLING PERFORMANCE-HETEROGENEITY IN FL: THE PRACTICAL SIDE One of the greatest difficulties in developing new algorithms in ML is to combine theoretical guarantees with practical requirements. With the aim of providing algorithms suitable for exploitation in applications, we conduct an experimental analysis with a twofold purpose: to establish the performance of the proposed strategies and to identify their potential weaknesses and strengths. §.§ Experimental Framework We describe below the full experimental framework involved in the study of the strategies described above. The design of the experimental apparatus is minimal; in fact, the goal is to focus maximally on the effects of the aggregation procedure. Synthetic Data We generate two distinct synthetic datasets, corresponding to the IID and to the non-IID framework. For the first, we sort the data according to labels, choose the cardinality of the different local datasets, and distribute the items so as to preserve an identical label distribution over the clients. For the second, we instead sort the dataset by label and divide it into multiple contiguous shards according to the criteria given in <cit.>. Then, we distribute these shards among our clients, generating an unbalanced partition. We do not require clients to have the same number of samples; however, each client receives at least one shard. We also implemented a "hand-picked" splitting system to enable better control of the distribution of classes among clients. Both methods were tested and gave similar results for all experiments. Model The model[Much better performance could be achieved using more complex models developed throughout the literature. In this work, the performance of the network on the task is secondary, and therefore we opt for the simplest model used in practice.] used is fairly basic: a CNN with two 3× 3 convolution layers (the first with 32 channels, the second with 64, each followed by 2× 2 max pooling), a fully connected layer with 1600 units and ReLU activation, and a final softmax output layer.
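A possible PyTorch realization of the model just described is sketched below. It is an illustration rather than the exact implementation: the number of output classes (10, matching the MNIST and Fashion-MNIST tasks used later) and the reading of the 1600-unit fully connected layer are our assumptions; for 28×28 inputs the flattened feature map has exactly 64·5·5 = 1600 entries.

```python
import torch.nn as nn

class SimpleCNN(nn.Module):
    # Sketch of the basic CNN described above; output size and the interpretation
    # of the 1600-unit fully connected layer are assumptions of this sketch.
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3),   # 28x28 -> 26x26
            nn.ReLU(),
            nn.MaxPool2d(2),                   # -> 13x13
            nn.Conv2d(32, 64, kernel_size=3),  # -> 11x11
            nn.ReLU(),
            nn.MaxPool2d(2),                   # -> 5x5, i.e. 64*5*5 = 1600 features
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 5 * 5, 1600),
            nn.ReLU(),
            nn.Linear(1600, num_classes),      # softmax applied via the cross-entropy loss
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```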
The local learning algorithm is mini-batch SGD with a batch size fixed at 64. Parameters The parameters involved in the experimental analysis are summarized in Table <ref>. Task description The task is image classification on the MNIST [<cit.>] and Fashion-MNIST [<cit.>] datasets, both in the IID and in the non-IID framework. Evaluation To evaluate the performance of the proposed strategies, we focus our attention on two kinds of measures: the accuracy reached after a fixed number of communication rounds, and the index R_90, which corresponds to the number of communication rounds required to reach 90% accuracy. We furthermore keep track of the accuracy and the loss of the global model at each communication round. Resources All the strategies are implemented in PyTorch and trained on an Intel Xeon E5-2670 (2.60 GHz, 8 cores, 64 GB of RAM). §.§ Experimental Analysis We focus on results related to FedMax, the main method introduced. Comparative Analysis of the Strategies Proposed We have tested the proposed methods extensively in the IID, non-IID, and extremely non-IID frameworks. We observe that, in the last two cases, it is sufficient to focus on the first 50 communication rounds in order to encounter a significant discrepancy among the methods. While the difference between the final accuracy obtained through FedSoftMax and FedAvg is rather low in every framework (see Figures <ref> and <ref>), a large gap is evident in how quickly the learning system achieves 90% accuracy in the non-IID and very non-IID (TNIID) cases (see Figure <ref> and Table <ref>). Therefore, the experiments give a clear confirmation of the theory and suggest that the upper bound provided by the main theorem is quite tight. FedSoftMax has a higher convergence speed compared to FedAvg. The discrepancy increases with the bias of the data with respect to the closest IID distribution. Moreover, FedSoftMax produces a rather small bias that is directly proportional to the distance of the data distribution among the clients from the IID one. To better understand the optimality of FedSoftMax, we investigated how the performance changes when the temperature parameter T is varied. The experimental results show that, if we restrict the temperatures considered to the range between 5 and 30, a higher temperature entails a higher convergence speed (see Figure <ref>). Weaknesses and Strengths of the Strategies Proposed One potential weakness that emerged from the theory is that the method potentially converges to a different value of the optimum. We were therefore interested in studying whether a significant difference could be observed experimentally and whether this might preclude the use of the method. To this end, we studied the evolution of the α_i. The result is extremely positive: not only is the imbalance produced minimal, but the α_i almost always converge to the p_i at a rate of 1/t (see Figure <ref>). All this entails the following remark: FedSoftMax is a natural smooth interpolation between FedAvg and FedMax(k), taking advantage of the higher convergence speed of FedMax(k) in the initial phase and of the stability and correctness of FedAvg in the rest of the learning. In fact, FedMax(k) methods, while speeding up the process at the beginning, give poor results when it comes to the final accuracy.
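For concreteness, the server-side coefficient computations behind this interpolation can be sketched as follows. This is an illustration, not the exact implementation: local_losses stands for the gaps F_i(w_t) − F_i^⋆ (or simply the local losses F_i(w_t) when the local optima are unknown), and p for the weights p_i.

```python
import numpy as np

def fedavg_alphas(p):
    # FedAvg: alpha_i = p_i.
    p = np.asarray(p, dtype=float)
    return p / p.sum()

def fedmax_k_alphas(local_losses, k=1):
    # FedMax(k): uniform weight on the k clients with the largest loss gap.
    losses = np.asarray(local_losses, dtype=float)
    top = np.argsort(-losses)[:k]
    alphas = np.zeros_like(losses)
    alphas[top] = 1.0 / k
    return alphas

def fedsoftmax_alphas(local_losses, p, temperature=10.0):
    # FedSoftMax: alpha_i proportional to p_i * exp(loss_gap_i / temperature), re-normalized.
    losses = np.asarray(local_losses, dtype=float)
    p = np.asarray(p, dtype=float)
    weights = p * np.exp(losses / temperature)
    return weights / weights.sum()

# Large temperature -> close to FedAvg; small temperature -> concentrated on the worst clients.
losses, p = [0.9, 0.4, 0.1, 0.2], [0.25, 0.25, 0.25, 0.25]
print(fedsoftmax_alphas(losses, p, temperature=100.0))  # nearly the p_i
print(fedsoftmax_alphas(losses, p, temperature=0.05))   # nearly one-hot on the worst client
```

In this form the interpolation is immediate: as the temperature grows the coefficients tend to the p_i (FedAvg), while as it shrinks they concentrate on the clients with the largest loss gap (FedMax(k)).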
This can actually be well visualized by analyzing the top client losses during a FedSoftMax run (see Figure <ref>). We see that only a small group of clients is used throughout the process, and while this is profitable for speed in the first rounds, it has a huge drawback in the following rounds, since we only use a small amount of data that (by non-IIDness) is not representative of the whole dataset. Therein lies the power of FedSoftMax, which makes it possible to exploit both the speed-up ability of FedMax and, toward the end, the data of all clients, as in FedAvg. Finally, we measured the stability of FedSoftMax compared to FedAvg. For this purpose we use a lag-one autocorrelation measure based on the normalized standard deviation of the variations over one round. The results show a somewhat more pronounced tendency toward instability for FedSoftMax, which nevertheless appears to be reasonably stable (see Table <ref>). §.§ Discussion & Final Remarks We have extended the insightful analysis already carried out by <cit.>, and examined further the joint evolution of the two weighting-skew quantities ρ and ρ, obtaining simpler bounds. Taking advantage of these theoretical insights, we have proposed a family of aggregation strategies, among which FedSoftMax is the most relevant one. Here, we complement our previous work by investigating the latter empirically, with the goal of identifying weaknesses and of quantifying its strengths for potential exploitation in practice. The experimental results fully confirm the theory and also suggest that the bias introduced by mismatched weighting of the data distribution does not affect the quality of the final results. Moreover, this method seems to naturally converge to FedAvg while leveraging the biases introduced in the first communication rounds. §.§ Further directions of research Several aspects that emerged may be the object of further analysis. We report them as associated research questions. From a theoretical point of view, we propose several possible directions to investigate: Proposed Research Question 1 Is it possible to obtain expressive bounds while weakening at least one of the four assumptions introduced? We believe interesting results could be obtained by weakening Assumption 3. Proposed Research Question 2 Can we substitute the learning algorithm used throughout the analysis, i.e., mini-batch SGD, with others? We believe that interesting results may be obtained even with fairly natural algorithms such as GD. Proposed Research Question 3 The experiments have shown that the α_i coefficients converge to the p_i in a framework where the datasets are not too non-IID. We might thus be interested both in proving this claim under supplementary hypotheses and in seeing its consequences when it comes to adapting the main theorem. Moreover, we observe that in the case of a very non-IID dataset the α_i do not converge to the p_i, but they still converge to some fixed limits; it would be interesting to study these limits and their potential correlation with the F_i^⋆ or other client-dependent parameters. Proposed Research Question 4 Figure <ref> showed the correlation between the parameter T of the FedSoftMax method and the gain in speed. Further experiments not shown here indicate that increasing T^-1 increases the speed up to a certain limit, i.e., the accuracy curves tend to converge to a "maximal-speed" curve.
Not only could we empirically study the properties of this limit curve, but we could also try to give theoretical evidence for this observation. Proposed Research Question 5 Can we extend our results to the non-convex setting? We suggest to start introducing some simplifying conditions such as the ones associated to the Polyak-Łojasiewicz inequality. From a practical point of view, it could be interesting to investigate if there is a practical advantage induced by the speed-up given by FedSoftMax. §.§ Acknowledgements This work was granted access to HPC resources of MesoPSL financed by the Region Ile de France and the project Equip@Meso (reference ANR-10-EQPX-29-01) of the programme Investissements d'Avenir supervised by the Agence Nationale pour la Recherche. § PROOF OF THE MAIN THEOREM & ITS COROLLARY Proof (Main Theorem) The first step of the proof consists in rewriting the argument of the expectation at the of Inequality <ref>. In particular, we apply the definition of w_t+1, add and subtract the same quantity, and eventually develop the square. This leads to the following expression. w_t+1-w^⋆^2 = w_t - w^⋆^2 - 2 η_t w_t - w^⋆∑_i∈I α_t^i ∇F_i(w_t^i) + 2 η_t w_t - w^⋆- η_t ∑_i∈I α_t^i ∇F_i(w_t^i)∑_i∈I α_t^i ∇F_i(w_t^i) - ∑_i∈I α_t^i g_i(w_t^i) + η_t ∑_i∈I α_t^i ∇F_i(w_t^i)^2 + η_t^2 ∑_i∈I α_t^i ∇F_i(w_t^i) - ∑_i∈I α_t^i g_i(w_t^i)^2 For seek of simplicity, we denote the addends at the , as follows: =w_t-w^⋆^2 + A_1 +A_2 + A_3 + A_4 The rest of the proof consists in bounding, sequentially, each of these addends. Concerning term A_1, we apply Cauchy-Schwartz inequality and by exploiting the convexity of the functions involved. Then, we derive the consequences of Assumption 1 and take advantage from the fact that ∇ F_i(w_i^⋆)=0. This leads to the following upper bound for A_1: ≤∑_i∈I α_t^i w_t - w_t^i^2 + 2L η_t^2 ∑_i∈I α_t^i (F_i(w_t^i)-F_i^⋆) - 2 η_t ∑_i∈I α_t^i w_t^i - w^⋆∇F_i(w_t^i) then, by using Assumption 2, we can rewrite the last term of the previous inequality and applying Lemma 1, we obtain the final upper bound for A_1. These steps are reported below: A_1 ≤- 2 η_t ∑_i∈I α_t^i (F_i(w_t^i)-F_i(w^⋆) + μ/2 w_t^i-w^⋆^2) ≤η_t^2 E^2 G^2 - η_t μ∑_i∈I α_t^iw_t^i-w^⋆^2 + 2L η_t^2 ∑_i∈I α_t^i (F_i(w_t^i)-F_i^⋆) - 2 η_t ∑_i∈I α_t^i (F_i(w_t^i)-F_i(w^⋆)) For bounding term A_2, we observe that, for the unbiasedness of the gradient estimator, E(A_2)=0. The bound for A_3 requires exclusively the application of Assumption 1: A_3 = η_t ∑_i∈I α_t^i ∇F_i(w_t^i)^2 ≤2Lη_t^2 ∑_i∈I α_t^i (F_i(w_t^i)-F_i^⋆) Then, we obtain the bound for A_4 by applying Jensen's inequality, exploiting the linearity of the expected value and using Assumption 3. A_4 ≤η_t^2 ∑_i∈I α_t^i σ^2 ≤η_t^2 σ^2 This sequence of bounds allows us to write the following expression: 𝔼[w_t+1-w^⋆^2] ≤ (1-η_t μ) 𝔼[w_t-w^⋆^2] + 16η_t^2 E^2 G^2 + η_t^2 σ^2 + 4 L η_t^2 𝔼[∑_i∈I α_t^i (F_i(w_t^i)-F_i^⋆)] - 2 η_t 𝔼[∑_i∈I α_t^i (F_i(w_t^i)-F_i(w^⋆))] Renaming the latter terms, we have that: = (1-η_t μ) 𝔼[w_t-w^⋆^2] + 16η_t^2 E^2 G^2 + η_t^2 σ^2 + A_5 Bounding A_5, it's slightly more complicated, but the sequence of operations required is fairly similar to the one done above. The final upper bound is the following: A_5 ≤η_t^2 (16 E^2 G^2 + 6 ρ L Γ) - 3/8η_t μρ 𝔼[w_t-w^⋆^2] + 2 η_t Γ(ρ-ρ) The proof is completed as follows: w_t+1-w^⋆^2 ≤(1-η_t μ) 𝔼[w_t-w^⋆^2] + 16η_t^2 E^2 G^2 + η_t^2 σ^2 + A_5 ≤(1-η_t μ(1+3/8ρ)) 𝔼[w_t-w^⋆^2] + η_t^2 (32 E^2 G^2 + 6 ρ L Γ+ σ^2) + 2 η_t Γ(ρ-ρ) Proof (Corollary) The proof is fairly simple and brief. 
We start by rewriting the main Theorem as follows: Δ_t+1 ≤(1-η_tμ B)Δ_t + η_t^2 C + η_t D where: B = (1+3/8ρ), C = 32 E^2 G^2 + 6 ρ L Γ + σ^2, and D = 2 Γ (ρ-ρ). Let ψ be the maximum of γ‖w_0-w^⋆‖^2 and (β^2 C+D β(t+γ))/(βμ B-1), where β>1/μ B and γ>0. The proof proceeds by induction; from this argument, we derive that: ∀ t, Δ_t≤ψ/(t+γ). Then, by the L-smoothness of F, we obtain the following upper bound, which concludes the proof: 𝔼[F(w_t)]-F^⋆≤ (L/2) Δ_t ≤ (L/2)·ψ/(γ+t).
http://arxiv.org/abs/2307.05571v1
20230710071220
Average of Central L-values for GL(2)$\times$GL(1), Hybrid Subconvexity, and Simultaneous Nonvanishing
[ "Liyang Yang" ]
math.NT
[ "math.NT" ]
Average of Central L-values for GL(2)×GL(1), Hybrid Subconvexity, and Simultaneous Nonvanishing

Liyang Yang, August 12, 2023

We employ a regularized relative trace formula to establish a second moment estimate for twisted L-functions across all aspects over a number field. Our results yield hybrid subconvex bounds for both Hecke L-functions and twisted L-functions, comparable to the Weyl bound in suitable ranges. Moreover, we present an application of our results to address the simultaneous nonvanishing problem.

§ INTRODUCTION Central L-values of modular forms play important roles in number theory and arithmetic geometry. The relative trace formula, introduced in <cit.>, has emerged as a powerful analytic tool for studying the average behavior of central L-values for holomorphic cusp forms. Building upon this, <cit.> extended the analysis to include Hilbert modular forms over totally real fields. In this article, we employ a regularized relative trace formula to investigate central values of general automorphic L-functions for GL(2)×GL(1) over a number field. Our approach yields several new results, including a second moment estimate that encompasses all aspects and incorporates stability concepts from <cit.>, hybrid-type subconvexity bounds for both Hecke L-functions and twisted L-functions that can rival the strength of the Weyl bound in the appropriate range, and an improved bound on simultaneous nonvanishing in the level aspect. §.§ Hybrid Second Moment Involving Stability Our first result is the following bound towards the second moment of twisted L-functions. Let F be a number field with ring of adeles 𝔸_F. Let χ be a Hecke character of 𝔸_F^×/F^× with arithmetic conductor Q=C_(χ). Let 𝔐 be an integral ideal of norm M. For v|∞, let c_v, C_v, T_v>0. Set T=∏_v|∞T_v. Let Π_∞=⊗_v|∞Π_v be an irreducible admissible generic representation of GL(2)/F_∞. Let 𝒜_0(Π_∞,𝔐;χ_∞,ω) be the set of cuspidal automorphic representations π=⊗_vπ_v of GL(2)/F with central character ω such that π_=⊗_v<∞π_v has arithmetic conductor dividing 𝔐, and π_v⊗χ_v≃Π_v has uniform parameter growth of size (T_v;c_v,C_v), for all v|∞, cf. §<ref>. Then ∑_π∈𝒜_0(Π_∞,𝔐;χ_∞,ω)|L(1/2,π×χ)|^2≪ (TMQ)^ε(TM+T^1/2Q·1_M≪ Q^2(M,Q)), where the implied constants depend on ε, F, c_v, and C_v, v|∞. Theorem <ref> is an analog and extension of the results in <cit.> from Hilbert modular forms on an anisotropic quaternion algebra to cuspidal automorphic representations of GL(2) over general number fields. The estimate (<ref>) incorporates the explicit dependence on the spectral parameter T by utilizing Nelson's test function at the archimedean places. Notably, there are no restrictions on the arithmetic conductors, allowing M and Q to be arbitrary. The condition 1_M≪ Q^2(M,Q) in (<ref>) captures the stability of regular orbital integrals, akin to the treatment in <cit.>, although the specific regular orbital integrals under consideration differ significantly. For F=ℚ, with Π_∞ being a holomorphic discrete series of SL(2), and χ a Dirichlet character, Theorem <ref> implies the following. Let k≥ 2 and N≥ 1. Let χ be a primitive Dirichlet character modulo q.
Then ∑_f∈ℱ_k^(N)|L(1/2,f×χ)|^2≪ (kNq)^ε(kN+k^1/2q·1_N≪ q^2(N,q)), where the implied constant depends only on ε. Here ℱ_k^new(N) is an orthogonal basis of normalized new forms that are holomorphic Hecke eigenforms with weight k and level N, and have trivial nebentypus. Note that (<ref>) improves <cit.> by explicating the dependence on k, and allowing for arbitrary values of N and q. §.§ Hybrid Weyl Subconvex Bounds Dropping all but one terms on the left hand side of (<ref>) we then obtain the following hybrid bound for twisted L-functions. Let F be a number field with ring of adeles 𝔸_F. Let π be either a unitary cuspidal automorphic representation of GL(2)/F or a unitary Eisenstein series. Let χ be a Hecke character of 𝔸_F^×/F^×. Suppose that π_v⊗χ_v has uniform parameter growth of size (T_v;c_v,C_v), for all v|∞, cf. §<ref>. Then L(1/2,π×χ)≪ C(π⊗χ)^ε[T^1/2C_(π)^1/2+T^1/4C_(χ)^1/2], where the implied constant depends on ε, F, c_v, and C_v, v|∞. In particular, L(1/2,π×χ)≪_π_∞,χ_∞,F,ε C_(π×χ)^1/6+ε if (C_(π),C_(χ))=1 and C_(π)^1-ε≪ C_(χ)≪ C_(π)^1+ε. When considering a CM extension E/F, where π corresponds to a Hilbert modular form over F and σ_Ω represents the theta series associated with an ideal class group character Ω of E, a hybrid variant of (<ref>) for L(1/2, π×σ_Ω) has been established in <cit.> through the utilization of a relative trace formula on a quaternion algebra. This relative trace formula, together with a selection of local test function, has further been employed in <cit.> to derive a hybrid subconvexity outcome in a similar fashion. In the case of GL(2)×GL(1) over F=ℚ, the Weyl bound L(1/2,π×χ)≪ C_(χ)^1/3+ε was established by <cit.> for a fixed cusp form π of PGL(2) and a quadratic Dirichlet character χ. This result was further generalized by <cit.>, where the Weyl bound L(1/2,π×χ)≪ C_(π×χ)^1/6+ε is proven under the conditions χ^2≠ 1, π has a level dividing C_(χ), and π has a central character χ^2. In particular, C_(π) is not coprime to C_(χ). Consequently, (<ref>) addresses a complementary case to <cit.>. By taking ω=η^2 for some Hecke character η and π=η⊞η, we obtain the following bound for Hecke L-functions. Let F be a number field with ring of adeles 𝔸_F. Let η and χ be Hecke character of 𝔸_F^×/F^× with coprime arithmetic conductors. Then L(1/2,ηχ)≪min{C_(η)^1/2+ε+C_(χ)^1/4+ε,C_(η)^1/4+ε+C_(χ)^1/2+ε}, where the implied constant depends on F, ε, η_∞, and χ_∞. In particular, L(1/2,χ)≪_F,χ_∞,εC_(χ)^1/6+ε if χ=χ_1χ_2 with (C_(χ_1),C_(χ_2))=1 and C_(χ_1)^2-ε≪ C_(χ_2)≪ C_(χ_1)^2+ε. §.§ Applications to Simultaneous Nonvanishing Corollary <ref> serves as a versatile alternative to multiple third moment estimates in certain applications. It replaces Young's third moment bound (cf. <cit.>) in <cit.> and provides a substantial improvement to the level aspect simultaneous nonvanishing result (cf. <cit.>), replacing Petrow-Young's third moment estimate <cit.> with the use of Corollary <ref>. Let k∈{2,3,4,5,7}. Let N≥ 2 be a prime. Denote by ℱ_2k^new(N) an orthogonal basis of normalized new forms that are holomorphic Hecke eigenforms with weight 2k and level N, and have trivial nebentypus. Let f∈ℱ_2k^new(N). Then there exists a nontrivial primitive quadratic character χ such that #{g∈ℱ_2k^(N): L(1/2,f×χ)L(1/2,g×χ)≠ 0}≫_εN^1-ε, where the implied constant depends on ε. The lower bound N^1-ε in Corollary <ref> significantly improves the main result in <cit.>, where the lower bound achieved was N^1/2-ε. 
From (<ref>) we obtain subconvex bounds for L(1/2,f×χ) in the range q^δk^δ-1≪ N≪ q^2-δk^-δ, δ>0. This generalizes <cit.>. §.§ Discussion of the Proofs Let A=(GL(1), 1), G=GL(2), and G=PGL(2). Let f be a nice function on G(𝔸_F). Denote by (g_1,g_2)=∑_γ∈G(F)f(g_1^-1γ g_2), g_1, g_2∈ G(𝔸_F) the associated kernel function, which also admits a spectral expansion. By substituting these expansions of (x,y) into the integral ∫_A(F)\ A(𝔸_F)∫_A(F)\ A(𝔸_F)(x,y)χ(x)χ(y)d^×xd^×y, we obtain a formal equality between two divergent expressions. To regularize it, we establish an identity between two holomorphic functions on ℂ^2 in the form of J_^,(f,s,χ)=J_^,(f,s,χ), s∈ℂ^2, where evaluating this identity at 𝐬=(0,0) provides a regularization of (<ref>). §.§.§ The spectral side: a lower bound We will prove a lower bound J_^,(f,0,χ)≫ T^-1/2-ε(MQ)^-ε∑_π∈𝒜_0(Π_∞,𝔐;χ_∞,ω)|L(1/2,π×χ)|^2. A more comprehensive version that includes the continuous spectrum is given by Theorem <ref> in §<ref>. §.§.§ The geometric side: an upper bound According to types of orbital integrals, we decompose the geometric side into three integrals J_^,(f,0,χ)=J^_,(f,χ)+J^_,(f,χ)+J^,2_,(f,0,χ). * The terms J^_,(f,χ) and J^_,(f,χ) correspond to irregular orbital integrals, exhibiting an asymptotic magnitude of T^1/2+o(1)M^1+o(1). * The term J^,2_,(f,0,χ) represents the contribution from regular orbital integrals, which constitutes the main focus of this paper. We establish that it is bounded by ≪ T^εM^εQ^1+ε·1_M≪ Q^2(M,Q). Based on the above estimates, we obtain an upper bound for the geometric side J_^,(f,0,χ)≪ T^1/2+εM^1+ε+T^εM^εQ^1+ε·1_M≪ Q^2(M,Q), Using equations (<ref>), (<ref>), and (<ref>), we establish Theorem <ref> for the case where π is cuspidal. §.§.§ Some Remarks The approach utilized in this work exhibits similarities to that of <cit.>, albeit with notable distinctions in the treatment of test functions at ramified places. In <cit.>, the focus is primarily on the case of joint ramification, where Q| M, resulting in relatively simpler regular orbital integrals that can be further improved through nontrivial bounds on specific character sums. However, in the case of totally disjoint ramification, where (M,Q)=1, the regular orbital integrals do not exhibit any oscillatory behavior, and the trivial bound becomes optimal. This paper addresses the most general situation, allowing M and Q to take arbitrary values. Another difference from the aforementioned work is that we evaluate the expressions at s=(0,0) (instead of some s_0=(s_0,s_0) with s_0>0) in order to compute the second moment over the family. This necessitates careful consideration of singularity matching when computing the main term J^_,(f,χ)+J^_,(f,χ). By employing a straightforward `trivial' estimate of the regular orbital integrals, we establish convexity in the χ-aspect and achieve strong hybrid subconvexity. This represents one of the key advantages of the relative trace formula. The robust nature of this approach holds promise for deriving bounds for higher rank Rankin-Selberg L-functions in the level aspect. In future work, we intend to extend the techniques presented in this paper to higher ranks, building upon the general regularized relative trace formula introduced in <cit.>. §.§ Outline of the Paper §.§.§ The Regularized Relative Trace Formula In §<ref>, we introduce the notations that will be consistently used throughout the paper, along with setting up the local and global data. 
Additionally, we define the test functions that will play a crucial role in the relative trace formula. Moving to §<ref>, we derive the regularized relative trace formula summarized in Theorem <ref> and Corollary <ref> in §<ref>. §.§.§ The Spectral Side In §<ref>, we explore the spectral side J_^,(f,0,χ). Its meromorphic continuation is obtained in §<ref>. By combining this with the local estimates developed in §<ref>–§<ref>, we establish a lower bound for the spectral side (cf. Theorem <ref>) in terms of the second moment of central L-values. §.§.§ The Geometric Side In §<ref>–§<ref> we handle the geometric side J_^,(f,0,χ)=J^_,(f,χ)+J^_,(f,χ)+J^,2_,(f,0,χ). * The small cell orbital integral J^_,(f,χ), one of the main terms, is addressed in Proposition <ref> in §<ref>, utilizing local estimates from §<ref>–§<ref>. * The dual orbital integral J_,^(f,χ) is bounded by Proposition <ref> in §<ref>. This integral is considered `dual' to J_,^(f,χ) through Poisson summation and contributes as the other main term. * The regular orbital integrals J^,2_,(f,0,χ) present the most challenging aspect of the geometric side J_^,(f,0,χ). Their behaviors are outlined in Theorem <ref> in §<ref>. §.§.§ Proof of Main Results With the aforementioned preparations, we are able to prove the main results in §<ref>. In §<ref>–§<ref> we put estimates from the spectral and geometric side all together, obtaining Theorem <ref>, which yields Theorem <ref>. §.§ Notation §.§.§ Number Fields and Measures Let F be a number field with ring of integers 𝒪_F. Let N_F be the absolute norm. Let 𝔒_F be the different of F. Let 𝔸_F be the adele group of F. Let Σ_F be the set of places of F. Denote by Σ_F, (resp. Σ_F,∞) the set of nonarchimedean (resp. archimedean) places. For v∈Σ_F, we denote by F_v the corresponding local field and 𝒪_v its ring of integers. For a nonarchimedean place v, let 𝔭_v be the maximal prime ideal in 𝒪_v. Given an integral ideal ℐ, we say v|ℐ if ℐ⊆𝔭_v. Fix a uniformizer ϖ_v∈𝔭_v. Denote by e_v(·) the evaluation relative to ϖ_v normalized as e_v(ϖ_v)=1. Let q_v be the cardinality of 𝒪_v/𝔭_v. We use v|∞ to indicate an archimedean place v and write v<∞ if v is nonarchimedean. Let |·|_v be the norm in F_v. Put |·|_∞=∏_v|∞|·|_v and |·|_=∏_v<∞|·|_v. Let |·|_𝔸_F=|·|_∞⊗|·|_. We will simply write |·| for |·|_𝔸_F in calculation over 𝔸_F^× or its quotient by F^×. Let ψ_ℚ be the additive character on ℚ\𝔸_ℚ such that ψ_ℚ(t_∞)=exp(2π it_∞), for t_∞∈ℝ↪𝔸_ℚ. Let ψ_F=ψ_ℚ∘_F, where _F is the trace map. Then ψ_F(t)=∏_v∈Σ_Fψ_v(t_v) for t=(t_v)_v∈𝔸_F. For v∈Σ_F, let dt_v be the additive Haar measure on F_v, self-dual relative to ψ_v. Then dt=∏_v∈Σ_Fdt_v is the standard Tamagawa measure on 𝔸_F. Let d^×t_v=ζ_F_v(1)dt_v/|t_v|_v, where ζ_F_v(·) is the local Dedekind zeta factor. In particular, (𝒪_v^×,d^×t_v)=(𝒪_v,dt_v)=N_F_v(𝔇_F_v)^-1/2 for all finite place v. Moreover, (F\𝔸_F; dt_v)=1 and (F\𝔸_F^(1),d^×t)=s=1 ζ_F(s), where 𝔸_F^(1) is the subgroup of ideles 𝔸_F^× with norm 1, and ζ_F(s)=∏_v<∞ζ_F_v(s) is the finite Dedekind zeta function. Denote by F^×\𝔸_F^(1) the Pontryagin dual of F^×\𝔸_F^(1). Note that at a ramified place v|𝔇_F_v, the conductor of ψ_v is precisely the inverse different 𝔒_F_v^-1. Write 𝔒_F_v^-1=ϖ_v^-d_v𝒪_v for some integer d_v≥ 1. Set ψ=⊗_v∈Σ_Fψ_v, where ψ_v is the additive character of F\𝔸_F defined by * at v|𝔇_v, ψ_v(x):=ψ_v(ϖ_v^-d_vx), where x∈ F_v; * at v|∞ or v∤𝔇_v, ψ_v(x):=ψ_v(x), where x∈ F_v. Then ψ is unramified everywhere. Let D=N_F/ℚ(𝔇_F) be the absolute discriminant. 
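For orientation, we record what these normalizations amount to in the simplest case F=ℚ (a purely illustrative specialization, not needed in the sequel). Here 𝔇_ℚ=ℤ, D=1, and no finite place is ramified, so each local character ψ_p is already unramified and the modified character defined above coincides with ψ_p. Moreover vol(ℤ_p,dt_p)=vol(ℤ_p^×,d^×t_p)=1 for every prime p, vol(ℚ\𝔸_ℚ,dt)=1, and vol(ℚ^×\𝔸_ℚ^(1),d^×t)=Res_s=1 ζ_ℚ(s)=1, where ζ_ℚ(s)=∏_p(1-p^-s)^-1 is the Riemann zeta function.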
§.§.§ Reductive Groups For an algebraic group H over F, we will denote by [H]:=H(F)\ H(𝔸_F). We equip measures on H(𝔸_F) as follows: for each unipotent group U of H, we equip U(𝔸_F) with the Haar measure such that, U(F) being equipped with the counting measure and the measure of [U] is 1. We equip the maximal compact subgroup K of H(𝔸_F) with the Haar measure such that K has total mass 1. When H is split, we also equip the maximal split torus of H with Tamagawa measure induced from that of 𝔸_F^×. In this paper we set A=(GL(1),1), and G=GL(2). Let B be the group of upper triangular matrices in G. Let G=Z\ G and B_0=Z\ B, where Z is the center of G. Let T_B be the diagonal subgroup of B. Then A≃ Z\ T_B. Let N be the unipotent radical of B. Let K=⊗_vK_v be a maximal compact subgroup of G(𝔸_F), where K_v=U_2(ℂ) is v is complex, K_v=O_2(ℝ) if v is real, and K_v=G(𝒪_v) if v<∞. For v∈Σ_F,, m∈ℤ_≥ 0, define K_v[m]:={[ a b; c d ]∈ G(𝒪_v): c∈𝔭_v^m}. §.§.§ Automorphic Data Let s=(s_1, s_2)∈ℂ^2. Let ω∈F^×\𝔸_F^(1). Denote by 𝒜_0([G],ω) the set of cuspidal representations on G(𝔸_F) with central character ω. For η_1, η_2∈F^×\𝔸_F^(1), let (η_1⊗η_2) be the unitary parabolic induction from B(𝔸_F) to G(𝔸_F) associated with η_1⊗η_2, and let η_1⊞η_2 be Langlands sum. Let Φ∈𝒮(𝔸_F) with Fourier transform Φ and let ω”=|·|^inα be a unitary character of 𝔸_F^×. Define an Eisenstein series E(s,x;Φ,ω”)=∑_δ∈ B_0(F)\G'(F)∫_𝔸_F^×Φ(zηδ x)| zx|^sω”(z)d^×z on [G']. Then E(s,x;Φ,ω”) converges absolutely in (s)>1 and admits a meromorphic continuation to ℂ, given by E(s,x;Φ,ω”)=E_+(s,x;Φ,ω”)+E_+^∧(s,x;Φ,ω”)+E_(s,x;Φ,ω”), where E_(s,x;Φ,ω”):=-Φ(0)| x|^s/s+iα+Φ(0)| x|^s-1/s-1+iα E_+(s,x;Φ,ω”):=∑_δ∈ B_0(F)\G'(F)∫_|z|≥ 1Φ(zηδ x)| zx|^sω”(z)d^×z, E_+^∧(s,x;Φ,ω”):=∑_δ∈ B_0(F)\G'(F)∫_|z|≥ 1Φ(zηδ x)| zx|^1-sω”^-1(z)d^×z. Moreover, E_+(s,x;Φ,ω”) and E_+^∧(s,x;Φ,ω”) converges absolutely for all s. §.§.§ Other Conventions For a function h on G(𝔸_F), we define h^* by assigning h^*(g)=h(g^-1), g∈ G(𝔸_F). Let F_1(s), F_2(s) be two meromorphic functions. Write F_1(s)∼ F_2(s) if there exists an entire function E(s) such that F_1(s)=E(s)F_2(s). Denote by α≍β for α, β∈ℝ if there are absolute constants c and C such that cβ≤α≤ Cβ. Throughout the paper, we adhere to the ε-convention, wherein ε denotes a positive number that can be chosen arbitrarily small, though it may vary between different instances. Acknowledgements I am grateful to Dinakar Ramakrishnan for his helpful discussions. I would also like to extend my thanks to Caltech for their warm hospitality during my visit, where this paper was written. § CHOICE OF THE TEST FUNCTION The notations introduced in this section will be extensively utilized throughout the remainder of this paper. §.§ Intrinsic Data Let F be a number field. Let χ=⊗_vχ_v and ω=⊗_vω_v be primitive unitary Hecke characters of F^×\𝔸_F^×. Let 𝔐 be an integral ideal of norm |𝔐|:=N_F(𝔐). §.§.§ Analytic Conductor of Hecke Characters Let C(χ):=⊗_v∈Σ_FC_v(χ) be the analytic conductor of χ, where each local conductor C_v(χ) is defined as follows. * For F_v≃ℝ, χ_v=^n_v'|·|^iκ_v, n_v'∈{0,1}, we define C_v(χ)=1+|n_v'+iκ_v/2|. * For F_v≃ℂ, and χ_v(a)=(a/|a|)^n_v'|a|^2iκ_v, a∈ F_v^×, we define C_v(χ):=(1+|iκ_v+|n_v'|/2|)^2. * For v<∞, let n_v the exponent of χ_v, namely, r_χ_v is the smallest nonnegative integer such that χ_v is trivial over 1+ϖ_v^r_χ_v𝒪_v^× but not over 1+ϖ_v^r_χ_v-1𝒪_v^×. Let C_v(χ)=q_v^r_χ_v. Denote by C_∞(χ):=⊗_v|∞C_v(χ) and C_(χ):=⊗_v<∞C_v(χ). 
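To fix ideas, consider F=ℚ and a primitive Dirichlet character χ modulo q, viewed as a Hecke character of ℚ^×\𝔸_ℚ^× (a purely illustrative example, consistent with the definitions above). At a prime p the exponent r_χ_p equals the exact power of p dividing q, so C_(χ)=∏_p p^r_χ_p=q, while at the real place χ_∞=sgn^n' with n'∈{0,1} and κ_∞=0, so C_∞(χ) is bounded by an absolute constant. In particular the arithmetic conductor is Q=C_(χ)=q, as in Corollary <ref>, and C(χ)≍ q.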
§.§.§ Analytic Conductor of Automorphic Representations of GL(2)/F Let π=⊗_vπ_v be an automorphic representation of G(𝔸_F) with central character ω_π=ω=⊗_vω_v. Let C(π):=⊗_vC_v(π) be the analytic conductor of π, where each local conductor C_v(π) is defined as follows. * Let v<∞. We denote by r_π_v≥ 0 the exponent of π_v, which is the least integer such that π_v has a vector that is K_v[r_π_v]-invariant (as defined in (<ref>)). The local conductor of π_v is defined as C_v(π):=q_v^r_π_v. * For v|∞, the local L-function of π_v can be expressed as a product of shifted Gamma factors, given by L_v(s,π_v)=Γ_v(s+β_1,v)Γ_v(s+β_2,v), where β_1,v, β_2,v∈ℂ, and Γ_v represents the Gamma function over F_v. Let C_v(π):=[(1+|β_1,v|)(1+|β_2,v|)]^[F_v:ℝ]. Let C_(π)=∏_v<∞C_v(π) be the arithmetic conductor of π and let C_∞(π)=∏_v|∞C_v(π) be the archimedean conductor of π. §.§.§ Uniform Parameter Growth Let Π_∞=⊗_v|∞Π_v be an irreducible admissible generic representation of GL(2)/F_∞. For v|∞, let L_v(s,Π_v)=Γ_v(s+γ_1,v)Γ_v(s+γ_2,v) be the associated L-factor of Π_v. For v|∞, we say that Π_v has uniform parameter growth of size (T_v;c_v,C_v) for some constants c_v and C_v, and parameters T_v, if c_vT_v≤ |γ_j,v|≤ C_vT_v. §.§.§ Ramification Parameters For v∈Σ_F,, let e_v(·) be the normalized evaluation of F_v such that e_v(ϖ_v)=1. Following the notation in §<ref>, let r_χ_v (resp. r_ω_v) be the exponent of χ_v (resp. ω_v). We set m_v:=e_v(𝔐) and n_v:=r_χ_v. Let Σ_^+:={v∈Σ_F_: m_v≥ n_v≥ 1}, and Σ_^-:={v∈Σ_F_: m_v< n_v, n_v≥ 1}. Let K_v[m_v] and K_v[n_v] be defined by (<ref>). Denote by 𝔔=∏_v<∞𝔭_v^n_v. For simplicity we write Q=C_(χ), M=|𝔐|:=N_F(𝔐), and M'=C(ω_). Suppose that Q>1. Note that M'| M. §.§.§ The Family of Automorphic Forms Let c_v and C_v be positive constants for each v|∞, and let T_v>0. In this paper, we will vary T_v as needed, while keeping c_v and C_v fixed. Let T=∏_v|∞ T_v. For v|∞, let Π_v be an irreducible admissible generic representation of GL(2)/F_v, which uniform parameter growth of size (T_v;c_v,C_v), cf. §<ref>. * Let 𝒜_0(Π_∞,𝔐;χ_∞,ω) be the set of cuspidal automorphic representations π=⊗_vπ_v of GL(2)/F such that * π has central character ω, * for all v<∞, π_v has a K_v[m_v]-invariant vector, i.e., r_π_v≤ e_v(𝔐). * π_v⊗χ_v≃Π_v at each v|∞. Note that Weyl law yields #𝒜_0(Π_∞,𝔐;χ_∞,ω)=(T|𝔐|)^1+o(1). * Let 𝒳_0(Π_∞,𝔐;χ_∞,ω) be the set of Hecke characters η=⊗_vη_v∈F^×\𝔸_F^(1) such that * for all v<∞, the representation η_v⊞ω_vη_v has a K_v[m_v]-invariant vector, i.e., r_η_v+r_ω_vη_v≤ m_v, * η_vχ_v⊞ω_vη_vχ_v≃Π_v at each v|∞. By <cit.> there exists some d'∈ [10^-1exp(-3√(log T^2MQ^2)),exp(-3√(log T^2MQ^2))], which may be determined by π and χ, such that for all s with |s-1/2|=d', |L(1/2,π×χ)|≪exp(log^3/4C(π×χ))· |L(s,π×χ)|. Here the implied constant depends only on F. §.§.§ Other Notations For a function h on G(𝔸_F) or G(F_v), v∈Σ_F, define h^*(g)=h(g^-1) and (h*h^*)(g)=∫ h(gg'^-1)h^*(g')dg'=∫ h(gg')h(g')dg'. §.§ Construction of Test Functions We construct a test function f on G(𝔸_F) using the following procedure: * For the archimedean places (cf. §<ref>), we rely on Nelson's work <cit.> (cf. §1.5.2 and §14 on p.80) and follow the approach described in <cit.>, §1.10. Additional information can be found in <cit.>, Part 2. * For the finite places, we employ the test function constructed in <cit.>, which involves a double average over unipotent translations weighted by characters (cf. §<ref>). §.§.§ Construction of f_∞ Let v|∞. 
Recall that Π_v has uniform parameter growth of size (T_v;c_v,C_v) (cf. Definition <ref> in §<ref>). Then Π_v has uniform parameter growth of size (T_v;c_v/2,2C_v), where s_0 is the parameter defined by (<ref>) in §<ref>. Let 𝔤 (resp. 𝔤') be the Lie algebras of G(F_v) (resp. A(F_v)), with imaginal dual 𝔤̂ (resp. 𝔤̂'). One can choose an element τ∈𝔤̂ with the restriction τ'=τ|_A∈𝔤̂', so that τ (resp. τ') lies in the coadjoint orbit 𝒪_Π_v of Π_v (resp. 𝒪_1_v of 1_v the trivial representation of A(F_v)). Let f̃^∧_v: 𝔤̂→ℂ be a smooth bump function concentrated on {τ+(ξ,ξ^): ξ≪ T_v^1/2+ε, ξ^≪ T_v^ε}, where ξ lies in the tangent space of 𝒪_Π_v at τ, and ξ^ has the normal direction. Let f̃_v∈ C_c^∞(G(F_v)) be the pushforward of the Fourier transform of f̃_v^∧ truncated at the essentially support, namely, f̃_v⊆{g∈ G(F_v): g=I_n+1+O(T_v^-ε), ^*(g)τ=τ+O(T_v^-1/2+ε)}, where the implied constants rely on c_v and C_v. Then, in the sense of <cit.>, §2.5, the operator π_v(f̃_v) is approximately a rank one projector with range spanned by a unit vector microlocalized at τ. Let f_v(g):=f_v(g,χ_v)*f_v(g,χ_v)^*, where v|∞, g∈ G(F_v), and f_v(g,χ_v):=χ_v( g)∫_Z(F_v)f̃_v(zg)ω_v(z)d^×z. Due to the support of f̃, the function f_v(g) is non-zero unless | g|_v > 0. Therefore, (| g|_v) = 1. As a result, the function f_v is smooth on G(F _v). §.§.§ Application of Transversality By definition, one has (cf. (14.13) in <cit.>) f̃_v_∞≪_ε T_v^1+ε, v|∞, where ·_∞ is the sup-norm. For g∈G(F_v), we may write g=[ a b; c d ]∈ G(F_v), g^-1=[ a' b'; c' d' ]∈ G(F_v). Define d_v(g):=min{1, |d^-1b|_v+ |d^-1c|_v+ |d'^-1b'|_v+|d'^-1c'|_v }, if dd'≠ 0, 1, if dd'=0. Let notation be as above. Then there is a fixed neighborhood 𝒵 of the identity in A(F_v) with the following property. Let g be in a small neighborhood of I_n+1 in G(F_v). Let δ_v>0 be small. Then ({z∈𝒵: (gzτ, A(F_v)τ)≤δ_v})≪δ_v/d_v(g). Here (⋯) denotes the infimum over g'∈ A(F_v) of gzτ-g'τ, where · is a fixed norm on 𝔤̂. Proposition <ref> (with δ_v=T_v^-1/2+ε) will be used to detect the restriction ^*(g)τ=τ+O(T_v^-1/2+ε) in the support of f̃_v. By (<ref>), (<ref>), and (<ref>), |f̃_v(g)|≪ T^1+ε·1_|^*(g)τ-τ|≪ T_v^-1/2+ε·1_|g-I_2|≪ T_v^-ε·{1,T_v^-1/2+ε/d_v(g)}. §.§.§ Finite Places For v∈Σ_F,, we define a function on G(F_v), supported on Z(F_v)\ K_v[m_v], by f_v(z_vk_v;ω_v)=(K_v[m_v])^-1ω_v(z_v)^-1ω_v(E_2,2(k_v))^-1, where K_v[m_v] is the image of K_v[m_v] in G(F_v), and E_2,2(k_v) is the (2,2)-th entry of k_v∈ K_v[m_v]. For g_v∈ G(F_v), define by f_v(g_v)=1/|τ(χ_v)|^2∑_α∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×∑_β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×χ_v(α)χ_v(β)f_v(g_α,β,v;ω_v), where τ(χ_v)=∑_α∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×ψ_v(αϖ_v^-n_v)χ_v(α) is the Gauss sum relative to the additive character ψ_v, and g_α,β,v:=[ 1 αϖ_v^-n_v; 1 ]g_v[ 1 βϖ_v^-n_v; 1 ]. Note that n_v=0 for almost all v∈Σ_F,. Hence, for all but finitely many v∈Σ_F,, the test function f_v(·)=f_v(·;ω_v) (cf. (<ref>)) supports in Z(F_v)\ K_v[m_v]. §.§.§ Construction of the Test Function Let f=⊗_v∈Σ_Ff_v, where f_v is constructed in §<ref> and §<ref>. Note that f_∞ is determined by Π_∞. § THE REGULARIZED RELATIVE TRACE FORMULA §.§ Fourier Expansion of the Kernel Function Let f=⊗_vf_v be defined in §<ref>. Then f defines an integral operator R(f)ϕ(g)=∫_G(𝔸_F)f(g')ϕ(gg')dg' on the space L^2([G],ω) of functions on [G] which transform under Z(𝔸_F) by ω and are square integrable on [G]. This operator is represented by the kernel function (g_1,g_2)=∑_γ∈G(F)f(g_1^-1γ g_2), g_1, g_2∈ G(𝔸_F). 
It is well known that L^2([G],ω) decomposes into the direct sum of the space L_0^2([G],ω) of cusp forms and spaces L_^2([G],ω) and L_^2([G],ω) defined using Eisenstein series and residues of Eisenstein series respectively. Then _0(g_1,g_2)+_(g_1,g_2)=(g_1,g_2)=∑_γ∈G(F)f(g_1^-1γ g_2), where _0(g_1,g_2) (resp. _(g_1,g_2)) is the contribution from the cuspidal (resp. non-cuspidal) spectrum. Explicit expansions of _0(g_1,g_2) and _(g_1,g_2) will be given in §<ref>. By Bruhat decomposition (g_1,g_2)=_(g_1,g_2)+_(g_1, g_2), where _(g_1,g_2)=∑_γ∈ B_0(F)f(g_1^-1γ g_2), _(g_1,g_2)=∑_γ∈ B_0(F)wN(F)f(g_1^-1γ g_2). Let 𝒦(·,·)∈{(·,·), _0(·,·), _(·,·), _(·,·),_(·,·)}. Define ℱ_0ℱ_1𝒦(g_1,g_2):= ∫_[N]𝒦(g_1,u_2g_2)du_2, ℱ_1ℱ_0𝒦(g_1,g_2):=∫_[N]𝒦(u_1g_1,g_2)du_1, ℱ_1ℱ_1𝒦(g_1,g_2):= ∫_[N]∫_[N]𝒦(u_1g_1,u_2g_2)du_2du_1, ℱ_2ℱ_2(g_1,g_2):= ∑_α∈ A(F)∑_β∈ A(F)∫_[N]∫_[N](u_1α g_1,u_2β g_2)θ(u_1)θ(u_2)du_2du_1. Using Poisson summation twice the integral ℱ_2ℱ_2𝒦(g_1,g_2) is equal to 𝒦(g_1,g_2)-ℱ_0ℱ_1𝒦(g_1,g_2)-ℱ_1ℱ_0𝒦(g_1,g_2)+ℱ_1ℱ_1𝒦(g_1,g_2). By <cit.> we have, for x, y∈ A(𝔸_F), that ℱ_0ℱ_1_(x,y)=ℱ_1ℱ_0_(x,y)=ℱ_1ℱ_1_(x,y)≡ 0. Along with (<ref>) we then obtain that ℱ_2ℱ_2_(x,y)=_(x,y). Note that (<ref>) only holds over (x,y)∈ A(𝔸_F)× A(𝔸_F). §.§ The Relative Trace Formula §.§.§ The Spectral Side Let (s_1)≫ 1 and (s_2)≫ 1. Define J_^(f,s,χ):=J_0^(f,s,χ)+J_^(f,s,χ), the spectral side, where s=(s_1, s_2)∈ℂ^2, and J_0^(f,s,χ):= ∫_[A]∫_[A]_0(x,y)| x|^s_1| y|^s_2χ(x)χ(y)d^×xd^×y, J_^(f,s,χ):= ∫_[A]∫_[A]ℱ_2ℱ_2_(x,y)| x|^s_1| y|^s_2χ(x)χ(y)d^×xd^×y. By Proposition 6.4 in <cit.> (cf. §6.2), the integral J_^(f,s,χ) converges absolutely in (s_1), (s_2)≫ 1. In addition, J_^(f,s,χ) admits a holomorphic continuation J_^,(f,s,χ) to 𝐬∈ℂ^2. We will see in §<ref> that J_^,(f,s,χ) is roughly an average of L(1/2+s_1,π×χ)L(1/2+s_2,π×χ) as π varies over families of unitary automorphic representations of GL(2)/F. §.§.§ The Geometric Side By (<ref>) and the decomposition (x,y)=_(x,y)+_(x,y), the geometric side is J_^(f,s,χ):=J^_,(f,s,χ)+J^_,(f,s,χ), where (s_1)≫ 1, (s_2)≫ 1, and J^_,(f,s,χ):= ∫_[A]∫_[A]ℱ_2ℱ_2_(x,y)| x|^s_1| y|^s_2χ(x)χ(y)d^×xd^×y , J^_,(f,s,χ):= ∫_[A]∫_[A]_(x,y)| x|^s_1| y|^s_2χ(x)χ(y)d^×xd^×y. As in <cit.>, we have J^_,(f,s,χ)=J^_,(f,s,χ)+J^,2_,(f,s,χ), where J_,^(f,s,χ) is defined by ∫_𝔸_F^×∫_𝔸_F^×f([ 1; x 1 ][ y; 1 ])|x|^s_1+s_2|y|^s_2χ(y)d^×yd^×x, and the regular orbital J^,2_,(f,s,χ) is defined by ∑_t∈ F-{0,1}∫_𝔸_F^×∫_𝔸_F^×f([ y x^-1t; xy 1 ])|x|^s_1+s_2|y|^s_2χ(y)d^×yd^×x. Note that (<ref>) converges absolutely in (s_1+s_2)>1, and by <cit.> the integral J^,2_,(f,s,χ) converges absolutely in 𝐬∈ℂ^2, and in particular, the sum over t∈ F-{0,1} is finite, which is called stability of the regular orbital integrals (cf. <cit.>, <cit.>, <cit.>). Therefore, the geometric side J_^(f,s,χ) admits a holomorphic continuation J_^,(f,s,χ) to 𝐬∈ℂ^2. We shall investigate it in §<ref>-§<ref>. §.§.§ The Regularized Relative Trace Formula Note that _0(x,y)=ℱ_2ℱ_2_0(x,y), _0(x,y)+_(x,y)=(x,y)=_(x,y)+_(x,y). Then by (<ref>), _0(x,y)+ℱ_2ℱ_2_(x,y)=ℱ_2ℱ_2(x,y)=ℱ_2ℱ_2_(x,y)+_(x,y). As a consequence, when (s_1)≫ 1 and (s_2)≫ 1, J_^(f,s,χ)=J_^(f,s,χ). By applying the singularity matching process described in <cit.>, the equality (<ref>) extends to its holomorphic continuation, leading to the following equality between two holomorphic functions: Let notation be as before. Then J_^,(f,s,χ)=J_^,(f,s,χ)<ref>. In this paper, our focus is on evaluating the above regularized RTF at 𝐬 = 0 = (0,0). Write 𝐬'=(s,0). 
Define the following normalized integrals J^_,(f,χ):= [J^_,(f,s',χ)-s^-1s=0 J^_,(f,s,χ)]_s=0, J^_,(f,χ):= [J^_,(f,s',χ)-s^-1s=0 J^_,(f,s,χ)]_s=0. Notice that s=0 J^_,(f,s,χ)+s=0 J^_,(f,s,χ)≡ 0. Therefore, J_^,(f,0,χ)=J^_,(f,χ)+J^_,(f,χ)+J^,2_,(f,0,χ)<ref>. Let notation be as before. Then J_^,(f,0,χ)=J^_,(f,χ)+J^_,(f,χ)+J^,2_,(f,0,χ). § THE SPECTRAL SIDE: MEROMORPHIC CONTINUATION AND BOUNDS In this section we shall show that J_^,(f,s,χ) admits a holomorphic continuation to 𝐬∈ℂ^2. Moreover, we derive a lower bound of it as follows. []thmthmf Let notation be as in §<ref>. Then J_^,(f,0,χ)≫ T^-1/2-ε(MQ)^-ε∑_π∈𝒜_0(Π_∞,𝔐;χ_∞,ω)|L(1/2,π×χ)|^2 +T^-1/2-ε(MQ)^-ε∑_η∫_ℝ|L(1/2+it,ηχ)L(1/2+it,ωηχ)|^2/|L(1+2it,ωη^2)|^2dt, where η∈𝒳_0(Π_∞,𝔐;χ_∞,ω), and the implied constant depends only on F, ε, c_v and C_v at v|∞. §.§ Spectral Side: Meromorphic Continuation §.§.§ Spectral Expansion of the kernel functions Let notation be as in §<ref>. Let f=⊗_vf_v be defined in §<ref>. Let _0(x,y) and _(x,y) be defined by (<ref>) in §<ref>. Then by the spectral decomposition we have (e.g., cf. <cit.>) _0(x,y)=∑_σ∈𝒜_0([G],ω)∑_ϕ∈𝔅_σσ(f)ϕ(x)ϕ(y), _(x,y)=1/4π∑_η∈F^×\𝔸_F^(1)∫_iℝ∑_ϕ∈𝔅_σ_0,ηE(x,ℐ(λ,f)ϕ,λ)E(y,ϕ,λ)dλ. Here, 𝔅_σ denotes an orthonormal basis of the cuspidal representation σ, and σ_0,η is given by σ_0,η=(η,η^-1ω). §.§.§ Rankin-Selberg Periods Let θ=⊗_vθ_v be the generic induced by the fixed additive character ψ (cf. §<ref>). For a generic automorphic form φ on G(𝔸_F), define the associated Whittaker function by W_φ(g):=∫_[N]φ(ug)θ(u)du, g∈ G(𝔸_F), Using the multiplicity one property, we can express W_φ(g) as a product over all places v∈Σ_F as W_φ(g)=∏_v∈Σ_FW_φ,v(g_v), where g=⊗_vg_v∈ G(𝔸_F). The local Whittaker function W_φ,v is spherical for all but finitely many places v∈Σ_F. Define Ψ(s,φ,χ):=∫_𝔸_F^×W_φ([ x; 1 ])|x|^sχ(x)d^×x=∏_v∈Σ_FΨ_v(s,φ,χ), where the local integral is defined by Ψ_v(s,φ,χ)=∫_F_v^×W_φ,v([ x_v; 1 ])|x_v|_v^sχ_v(x_v)d^×x_v. The integral Ψ(s,φ,χ) converges absolutely in (s)>1. Furthermore, it is related to L-functions as follows. * If φ∈𝔅_σ, where σ∈𝒜_0([G],ω), then Ψ(s,φ,χ) converges absolutely for all s∈ℂ, making it an entire function. By Hecke's theory, Ψ(s,φ,χ) serves as the integral representation for the complete L-function Λ(s+1/2,σ). * If φ∈𝔅_0,η associated with some η∈F^×\𝔸_F^(1), then as established in <cit.>, the function Ψ(s,φ,χ) converges absolutely in the region (s)_1≫ 1 and (s_2)≫ 1, and it has a meromorphic continuation to s∈ℂ, representing the complete L-function Λ(s+1/2,η)Λ(s+1/2,η^-1ω). Let v∈Σ_F be a place. Let (s)>1. We denote by R_v,λ(s,ϕ,χ):=Ψ_v(s,ϕ,χ)L_v(s+1/2,σ_v×χ_v)^-1, if ϕ∈𝔅_σ,σ∈𝒜_0([G],ω), Ψ_v(s,ϕ,χ)/L_v(s+1/2,η_vχ_v)L_v(s+1/2,η_v^-1χ_vω_v), if ϕ∈𝔅_0,η,η∈F^×\𝔸_F^(1). Let R_λ(s,ϕ,χ)=∏_v∈Σ_FR_v,λ(s,ϕ,χ). Then R_λ(s,ϕ,χ) turns out to be an entire function of s∈ℂ. Denote by R_,λ(s,ϕ,χ)=∏_v∈Σ_F,R_v,λ(s,ϕ,χ), Ψ_∞(s,ϕ,χ):=∏_v∈Σ_F,∞Ψ_v(s,ϕ,χ). §.§.§ Meromorphic Continuation According to the construction of the test function f, the Eisenstein series E(x,ℐ(λ,f)ϕ,λ), ϕ∈𝔅_0,η, vanishes unless ϕ is right invariant under K_v[m_v], where m_v=e_v(𝔐), cf. §<ref>. Substituting the Rankin-Selberg periods (cf. §<ref>) into the decomposition (<ref>) we then obtain J_^(f,s,χ)=J_0^(f,s,χ)+J_^(f,s,χ), where J_0^(f,s,χ)= ∑_σ∈𝒜_0(Π_∞,𝔐;χ_∞,ω)∑_ϕ∈𝔅_σΨ(s_1,σ(f)ϕ)Ψ(s_2,ϕ,χ), J_^(f,s,χ)= 1/4π∑_η∫_iℝ∑_ϕ∈𝔅_σ_0,ηΨ(s_1,σ_0,η(f)E(·,ϕ,λ),χ)Ψ(s_2,E(·, ϕ,λ),χ)dλ, where η ranges through ∈F^×\𝔸_F^(1), (s_1)≫ 1 and (s_2)≫ 1. The function J_0^(f,s,χ) continues to a holomorphic function J_0^,(f,s,χ) in ℂ^2. 
It is proved in <cit.> that J_^(f,s,χ) extends to a holomorphic function J_^,(f,s,χ) in -1/4<(s_1), (s_2)<1/4 with J_^,(f,s,χ)=1/4π∑_η∫_iℝ∑_ϕ∈𝔅_σ_0,ηΨ(s_1,σ_0,η(f)E(·,ϕ,λ),χ)Ψ(s_2,E(·, ϕ,λ),χ)dλ, where η∈F^×\𝔸_F^(1), and the integrand Ψ(s_1,σ_0,η(f)E(·,ϕ,λ),χ)Ψ(s_2,E(·, ϕ,λ),χ) is identified with its meromorphic continuation. In particular, J_^(f,s,χ) is holomorphic in the region -1/4<(s_1), (s_2)<1/4. §.§ Spectral Side: the Second Moment Let notation be as in §<ref>. Denote by f^(g)=⊗_v|∞f_v(g_v,χ_v)⊗⊗_v∈Σ_F, f_v(g_v;ω_v), where f_v(·;ω_v) is defined by (<ref>), i.e., f_v(z_vk_v;ω_v)=(K_v[m_v])^-1ω_v(z_v)^-1ω_v(E_2,2(k_v))^-1. Define φ^(x):=∫_G(𝔸)f^(g)∏_v|𝔔[1/τ(χ_v)∑_βχ_v(β)σ_v([ 1 βϖ_v^-n_v; 1 ])]σ(g)φ(x)dg, where β_v ranges over ∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×. To simplify notations, we shall still write Ψ(s,ϕ^,χ) and Ψ(s,E(·,ϕ,λ)^,χ) for their holomorphic continuations, respectively. It follows from the construction of f that J_0^,(f,0,χ) and J_^,(f,0,χ) can be written as follows. Let notation be as before. Then J_0^,(f,0,χ)= ∑_σ∈𝒜_0([G],ω)∑_ϕ∈𝔅_σ|Ψ(0,ϕ^,χ)|^2, J_^,(f,0,χ)= 1/4π∑_η∈F^×\𝔸_F^(1)∫_iℝ∑_ϕ∈𝔅_σ_0,η|Ψ(0,E(·,ϕ,λ)^,χ)|^2dλ. §.§.§ Local calculations The non-archimedean calculation presented in <cit.> is as follows: Let notation be as before. Let σ∈𝒜_0([G],ω). Let ϕ∈σ be a pure tensor. Suppose that ϕ^≠ 0. Then for v∈Σ_F,, we have Ψ_v(s,ϕ^,χ)=W_ϕ,v(I_2)L_v(s+1/2,σ_v×χ_v), (s)≥ 0. Let notation be as before. Let ϕ∈σ_λ,η be a pure tensor. Let φ=E(·,ϕ,λ). Suppose that φ^≠ 0. Then for v∈Σ_F,, (s)≥ 0, we have Ψ_v(s,φ^,χ)=W_φ,v(I_2)L_v(s+1/2+λ,η_vχ_v)L_v(s+1/2-λ,η_v^-1χ_vω_v). §.§ Spectral Side: the lower bound In this section we prove Theorem <ref>. Denote by f_v^∘:=∫_Z(F_v)f̃_v(zg)ω_v(z)d^×z. Let π=π_∞⊗π_ be a unitary automorphic representation of GL(2)/F with π_∞⊗χ_∞≃Π_∞. Let v|∞, by the properties of f_v (cf. e.g., <cit.>), T_v^-1/4-ε≪_ε∫_F_v^×(π_v(f_v^∘)(W_v⊗χ_v))([ x_v; 1 ])d^×x_v≪_ε T_v^-1/4+ε for some W_v in the Kirillov model of π_v. By definition (<ref>) in §<ref>, we have π_v(f_v(·,χ_v))W_v([ x_v; 1 ])χ_v(x_v)=(π_v(f_v^∘)(W_v⊗χ_v))([ x_v; 1 ]). Hence, Ψ_v(s_0,π_v(f_v(·,χ_v))W_v,χ_v)≫_ε T_v^-1/4-ε for some W_v in the Kirillov model of π_v. Let ϕ∈π be a cusp form with Petersson norm ⟨ϕ,ϕ⟩=1, and Whittaker function W_ϕ=⊗_vW_ϕ,v (defined by (<ref>)), such that W_ϕ,v=W_v, for all v|∞, and W_ϕ,v is ∏_v<∞K_v[n_v]-invariant. Then Ψ_v(0,ϕ^,χ)=Ψ_v(0,π_v(f_v(·,χ_v))W_v,χ_v)≫_ε T_v^-1/4-ε, where the implied constant depends on ε, c_v and C_v at v|∞. Together with Lemmas <ref>, <ref>, and the bound |W_(I_2)|≫ (TM)^-ε (cf. <cit.>), J_0^,(f,0,χ)≫ T^-1/2-ε(MQ)^-ε∑_π∈𝒜_0(Π_∞,𝔐;χ_∞,ω)|L(1/2,π×χ)|^2, and J_^,(f,0,χ) is ≫ T^-1/2-ε(MQ)^-ε∑_η∫_t∈ℝ|L(1/2+it,ηχ)L(1/2+it,ωηχ)|^2/|L(1+2it,ωη^2)|^2dt, where η∈𝒳_0(Π_∞,𝔐;χ_∞,ω). Therefore, Theorem <ref> follows. § THE GEOMETRIC SIDE: THE ORBITAL INTEGRAL J^_,(F,Χ) Let f=⊗_vf_v be text function constructed in §<ref>. Let 𝐬=(s,0) with s∈ℂ. Recall the definition in §<ref>: J^_,(f,χ):=[J^_,(f,s,χ)-s=0 J^_,(f,s,χ)]_s=0, where, for (s)>1, the small cell orbital integral is defined by (cf. §<ref>): J^_,(f,s,χ):=∫_𝔸_F^×∫_𝔸_F^×f([ y b; 1 ])ψ(xb)|x|^1+sχ(y)d^×xd^×y, which is a Tate integral representing Λ(1+s,1_F). Let notation be as before. Then J^_,(f,χ)≪_ε M^1+εT^1/2+ε, where the implied constant depends only on F, ε, and c_v, C_v at v|∞, cf. §<ref>. §.§ Local calculations at nonarchimedean places Let (s)>0. Let J_,v(s):=∫_F_v^×|x_v|_v^1+2s∫_F_v^×∫_F_vf_v([ y_v b_v; 1 ])ψ_v(x_vb_v)χ_v(y_v)db_vd^×y_vd^×x_v. Take 𝐬=(s,0). 
By definition, we have J^_,(f,s,χ):= ∏_v∈Σ_FJ_,v(s), (s)>0. By <cit.> we have, for v∈Σ_F,, that J_,v(s)= N_F_v(𝔇_F_v)^-1/2(K_v[m_v])^-11_(𝔇_F_v^-1)^×(x_v), if v|𝔔, |x_v|_v^1+2s_0N_F_v(𝔇_F_v)^-1/2(K_v[m_v])^-11_𝔇_F_v^-1(x_v), if v∤𝔔, where m_v=e_v(𝔐) is defined in §<ref>. Hence, J^_,(f,s,χ)=V_F· N_F(𝔇_F)^1/2+sζ_F(1+2s)∏_v|∞J_,v(s), where V_F:=∏_v<∞(K_v[m_v])^-1≍ |𝔐|^1+o(1). §.§ Local estimates at archimedean places Let notation be as before. Let ε>0 be a constant. Let 𝒞:={s∈ℂ: |s|=ε} be the circle of radius ε. Then for s∈𝒞, we have ∏_v|∞J_,v(s)≪ T^1/2+ε, where the implied constant depends on F, ε, c_v and C_v, v|∞. Let s∈𝒞. Denote by ℐ_v(x_v,s):=|x_v|_v^1+2s∫_F_v^×∫_F_vf_v([ y_v b_v; 1 ])ψ_v(x_vb_v)χ_v(y_v)db_vd^×y_v. By the construction of f we have ℐ_v(x_v,s)≠ 0 unless x_vT_v^-1-γ_v≪_ε T_v^-1/2+ε, where γ_v is determined by τ∈𝔤̂, cf. §<ref>. Moreover, by decaying of Fourier transform of f_v, f_v([ y_v b_v; 1 ])≪ T_v^-∞ if |b_v|_v≫ T_v^-1/2+ε. Together with (<ref>), ℐ_v(x_v,s)≪ |x_v|_v^1+2ε1_x_vT_v^-1-γ_v≪_ε T_v^-1/2+ε· T_v^1+ε· T_v^-1/2+ε· T_v^-1/2+ε+O(T_v^-∞), where the factor T_v^1+ε comes from the sup-norm estimate (cf. (<ref>)), the first factor T_v^-1/2+ε comes from the range of y_v according to the support of f_v (cf. (<ref>)), and the second T_v^-1/2+ε comes from the essential range of b_v, i.e., |b_v|_v≪ T_v^-1/2+ε. In particular, the implied constant in (<ref>) depends only on F_v, ε, and c_v, C_v at v|∞. As a consequence, we have J_,v(s)=∫_F_v^×ℐ_v(x_v,s)d^×x_v≪ T_v^ε∫_F_v|x_v|_v^2ε1_x_vT_v^-1-γ_v≪_ε T_v^-1/2+εdx_v+O(T_v^-∞), which is ≪ T_v^1/2+ε. Then (<ref>) follows. §.§ Proof of Proposition <ref> Let ε>0 be a constant. Let 𝒞:={s∈ℂ: |s|=ε} be the circle of radius ε (cf. Lemma <ref>). By Cauchy formula, J^_,(f,χ)=1/2π i∫_𝒞J^_,(f,s,χ)/sds Substituting (<ref>) into the above integral, J^_,(f,χ)=V_F/2π i∫_𝒞N_F(𝔇_F)^1/2+sζ_F(1+2s)∏_v|∞J_,v(s)/sds. By Lemma <ref> we have J^_,(f,χ)≪ V_F∫_𝒞N_F(𝔇_F)^1/2+ε T^1/2+ε/|ε|·max_s∈𝒞|ζ_F(1+2s)|ds≪ M^1+εT^1/2+ε. Hence, Proposition <ref> follows. § THE GEOMETRIC SIDE: THE ORBITAL INTEGRAL J_,^(F,Χ) Let f=⊗_vf_v be text function constructed in §<ref>. Let 𝐬=(s,0) with s∈ℂ. Recall the definition in §<ref>: J^_,(f,χ):=[J^_,(f,s,χ)-s=0 J^_,(f,s,χ)]_s=0, where, for (s)>0, the dual orbital integral is defined by (cf. §<ref>): J_,^(f,s,χ):=∫_𝔸_F^×∫_𝔸_F^×f([ 1; x 1 ][ y; 1 ])|x|^sχ(y)d^×yd^×x, which is a Tate integral representing Λ(2s,1_F). Let s<0. By Poisson summation (or equivalently, the functional equation), J^_,(f,s,χ) becomes ∫_𝔸_F^×∫_𝔸_F^×∫_𝔸_Ff([ 1; b 1 ][ y; 1 ])ψ(bx)|x|^1-sχ(y)dbd^×yd^×x. Let notation be as before. Then J^_,(f,χ)≪ M^1+εT^1/2+ε, where the implied constant depends only on F, ε, and c_v, C_v at v|∞, cf. §<ref>. Write J^_,(f,s,χ)=∏_v∈Σ_FJ_,v(s), where (s)<0, and each J_,v(s) is defined by ∫_F_v^×∫_F_v^×∫_F_vf_v([ 1; b_v 1 ][ y_v; 1 ])ψ_v(b_vx_v)|x_v|_v^1-sχ_v(y_v)db_vd^×y_vd^×x_v. Similar to (<ref>) and (<ref>) we have J^_,(f,s,χ)=V_F· N_F^(𝔔)(𝔇_F)^1/2+sζ_F^(𝔔)(1+2s)∏_v|𝔔J_,v(s)∏_v|∞J_,v(s), where V_F:=∏_v<∞(K_v[m_v])^-1≍ |𝔐|^1+o(1), N_F^(𝔔)(𝔇_F)=∏_𝔭|𝔇_F 𝔭+𝔔=𝒪_FN_F(𝔭), ζ_F^(𝔔)(1+2s):=∏_𝔭 prime 𝔭+𝔔=𝒪_F1/1-N_F(𝔭)^-1-2s. §.§ Local estimates at ramified places At v|𝔔, by definition f_v([ 1; b_v 1 ][ y_v; 1 ])=0 unless z_v[ 1 αϖ_v^-n_v; 1 ][ 1; b_v 1 ][ y_v; 1 ][ 1 βϖ_v^-n_v; 1 ]∈ K_v[m_v] for some α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×, and z_v∈ F_v^×, i.e., z_v[ y_v+α b_vy_vϖ_v^-n_v (y_v+α b_vy_vϖ_v^-n_v)βϖ_v^-n_v+αϖ_v^-n_v; b_vy_v 1+β b_vy_vϖ_v^-n_v ]∈ K_v[m_v]. 
Analyzing the (2,1)-th entry of the matrix on the LHS of (<ref>) yields that e_v(z_v)+e_v(b_v)+e_v(y_v)≥ m_v≥ n_v. Hence an investigation of the (1,1)-th and (2,2)-th entry leads to e_v(y_v)+2e_v(z_v)=0 e_v(z_v)+e_v(y_v)≥ 0, e_v(z_v)≥ 0. As a consequence, e_v(z_v)=0, i.e., z_v∈𝒪_v^×. So e_v(y_v)=0, e_v(b_v)≥ m_v. Hence we have f_v([ 1; b_v 1 ][ y_v; 1 ])=1_𝒪_v^×(y_v)1_ϖ_v^m_v𝒪_v(b_v). After a change of variable (i.e., β↦ y_v^-1β), 𝒥_v(x_v)=|τ(χ_v)|^-2/(K_v[m_v])∑_α,β∫_ϖ_v^m_v𝒪_v1_K_v[m_v](X_v)ψ_v(b_vx_v)χ_v(α)χ_v(β)db_v, where X_v denotes the matrix [ 1+α b_vϖ_v^-n_v (1+α b_vϖ_v^-n_v)βϖ_v^-n_v+αϖ_v^-n_v; b_v 1+β b_vϖ_v^-n_v ]. Note that 1_K_v[m_v](X_v)≠ 0 unless (1+α b_vϖ_v^-n_v)β +α∈ϖ_v^n_v𝒪_v. Hence, 𝒥_v(x_v)=|τ(χ_v)|^-2χ_v(-1)/(K_v[m_v])∑_α∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×∫_ϖ_v^m_v𝒪_vψ_v(b_vx_v)χ_v(1+α b_vϖ_v^-n_v)db_v. Write b_v=ϖ_v^mγ_v, γ_v∈𝒪_v^×. Changing the variable α↦γ_v^-1α, 𝒥_v(x_v)=|τ(χ_v)|^-2χ_v(-1)/(K_v[m_v])ζ_F_v(1)∑_m≥ m_vq_v^-mG(m)R(m,x_v). where G is the character sum G(m):=∑_α∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×χ_v(1+αϖ_v^m-n_v), and R(m,x_v) is the Ramanujan sum R(m,x_v):=∫_𝒪_v^×ψ_v(γ_vϖ_v^mx_v)d^×γ_v. Applying the trivial bound G(m)≪ q_v^n_v, and R(m,x_v)=0 if m<-e_v(x_v)-1, R(m,x_v)≪ q_v^-1 if m=-e_v(x_v)-1, and R(m,x_v)=1 if m≥ -e_v(x_v), we then deduce that J_,v(s)≪ (m_v+2n_v)(K_v[m_v])^-1. §.§ Local estimates at archimedean places Similar to Lemma <ref> we have Let notation be as before. Let ε>0 be a constant. Let 𝒞:={s∈ℂ: |s|=ε} be the circle of radius ε. Then for s∈𝒞, we have ∏_v|∞J_,v(s)≪ T^1/2+ε, where the implied constant depends on F, ε, c_v and C_v, v|∞. §.§ Proof of Proposition <ref> Let ε>0 be a constant. Let 𝒞:={s∈ℂ: |s|=ε} be the circle of radius ε (cf. Lemma <ref>). By Cauchy formula, J^_,(f,χ)=1/2π i∫_𝒞J^_,(f,s,χ)/sds Plugging the expression of J^_,(f,s,χ) into the above integral, J^_,(f,χ)=V_F/2π i∫_𝒞N_F^(𝔔)(𝔇_F)^1/2+sζ_F^(𝔔)(1+2s)∏_v|𝔔J_,v(s)∏_v|∞J_,v(s)/sds. By the estimate (<ref>) and Lemma <ref> we have J^_,(f,χ)≪ V_F^1+ε∫_𝒞N_F(𝔇_F)^1/2+ε T^1/2+ε/|ε|·max_s∈𝒞|ζ_F^(𝔔)(1+2s)|ds≪ M^1+εT^1/2+ε. Hence, Proposition <ref> follows. § THE GEOMETRIC SIDE: REGULAR ORBITAL INTEGRALS Recall the definition (<ref>) in §<ref>: J^,2_,(f,0,χ):=∑_t∈ F-{0,1}∏_v∈Σ_Fℰ_v(t), where for v∈Σ_F, ℰ_v(t):=∫_F_v^×∫_F_v^×f_v([ y_v x_v^-1t; x_vy_v 1 ])χ_v(y_v)d^×y_vd^×x_v. By Theorem 5.6 in <cit.> (or <cit.>) the orbital integrals J^,2_,(f,0,χ) converges absolutely. We shall establish an upper bound for it as follows. Let notation be as before. Then J^,2_,(f,0,χ)≪ T^εM^εQ^1+ε·1_M≪ Q^2(M,Q), where the implied constant depends on ε, F, c_v, and C_v, v|∞. Here T, M, and Q are defined in §<ref>. In particular, J^,2_,(f,0,χ)=0 if M is large enough. The observation that J^,2_,(f,0,χ)=0 for large M aligns with the calculation in <cit.>, despite the distinct nature of the regular orbital integrals involved. §.§ Local Estimates: unramified nonarchimedean places The following straightforward calculation can be found in <cit.>. Let v∈Σ_F, be such that v∤𝔔. Then ℰ_v(t)≪(1-e_v(1-t))(1+e_v(t)-2e_v(1-t))/(K_v[m_v])1_e_v(t-1)≤ 0 e_v(t)-e_v(1-t)≥ m_v. Moreover, ℰ_v(t)=1 if e_v(t)=e_v(1-t)=0, m_v=0, and v∤𝔇_F. In particular, ℰ_v(t)=1 for all but finitely many v's. §.§ Local Estimates at Ramified Places Σ_^- In this section, we consider the case where v∈Σ_^-, specifically v |𝔔 and m_v < n_v. The local integrals ℰ_v(t) demonstrate unique characteristics that distinguish them from those discussed in <cit.>. This distinction sets them apart from the analysis presented in the aforementioned work. Let v∈Σ_^-. 
Then ℰ_v(t)≪ q_v^m_v+k if e_v(1-t)=-2k for m_v-n_v≤ k≤ -1 (e_v(t)-e_v(1-t)+1)q_v^m_v if e_v(t)-e_v(1-t)≥ 0 (1-e_v(t))^2q_v^m_v if e_v(t)≤ -1 0 otherwise, where the implied constant is absolute. By definition, f_v([ y_v x_v^-1t; x_vy_v 1 ])=0 unless ϖ_v^k[ 1 αϖ_v^-n_v; 1 ][ y_v x_v^-1t; x_vy_v 1 ][ 1 βϖ_v^-n_v; 1 ]∈ K_v[m_v] for some k∈ℤ. Write x_v=ϖ_v^r_1γ_1, y_v=ϖ_v^r_2γ_2, where r_1, r_2∈ℤ and γ_1, γ_2∈𝒪_v^×. Then (<ref>) becomes ϖ_v^k[ 1 γ_1αϖ_v^-n_v; 1 ][ ϖ_v^r_2 ϖ_v^-r_1t; ϖ_v^r_1+r_2 1 ][ 1 γ_1γ_2βϖ_v^-n_v; 1 ]∈ K_v[m_v]. Changing variables α↦γ_1^-1α, β↦γ_1^-1γ_2^-1β, the above constraint becomes ϖ_v^kY_α,β,r_1,r_2,t∈ K_v[m_v] for some k∈ℤ, where Y_α,β,r_1,r_2,t is defined by [ ϖ_v^r_2+αϖ_v^r_1+r_2-n_v (ϖ_v^r_2+αϖ_v^r_1+r_2-n_v)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v; ϖ_v^r_1+r_2 1+βϖ_v^r_1+r_2-n_v ]. By definition the local integral ℰ_v(t) becomes 1/|τ(χ_v)|^2∑_α,βχ(α)χ(β)∑_r_1, r_2∈ℤ f_v(Y_α,β,r_1,r_2,t;ω_v), where f_v(·;ω_v) is defined by (<ref>) in §<ref>. Note that (<ref>) amounts to 2k+r_2+e_v(1-t)=0 k+r_1+r_2≥ m_v ϖ_v^k(ϖ_v^r_2+αϖ_v^r_1+r_2-n_v)∈𝒪_v ϖ_v^k(1+βϖ_v^r_1+r_2-n_v)∈𝒪_v ϖ_v^k[(ϖ_v^r_2+αϖ_v^r_1+r_2-n_v)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v]∈𝒪_v. We will categorize our discussion into three cases based on the value of k: the case where k≤ -1 will be addressed in §<ref> below, the case where k=0 will be addressed in §<ref> below, and the case where k≥ 1 will be addressed in §<ref> below. Proposition <ref> can then be readily derived from these discussions. §.§.§ The case that k≤ -1 Suppose k≤ -1. Then (<ref>) simplifies to 2k+e_v(1-t)=0 m_v-n_v≤ k≤ -1 r_2=0, r_1=n_v 1+β∈ϖ_v^-k𝒪_v 1+α∈ϖ_v^-k𝒪_v ϖ_v^k[(1+α )(1+β)ϖ_v^-n_v+ϖ_v^-n_v(t-1)]∈𝒪_v. * Suppose that k=-n_v. Then m_v=0, e_v(1-t)=-2n_v, α=β=-1ϖ_v^n_v. So the contribution from this case is 1/|τ(χ_v)|^21_e_v(1-t)=-2n_v1_m_v=0=q_v^-n_v1_e_v(1-t)=-2n_v1_m_v=0. * Suppose that k>-n_v. Write α=-1+ϖ_v^-kα', and β=-1+ϖ_v^-kβ', where α', β'ϖ_v^n_v+k. Then ϖ_v^k[(1+α )(1+β)ϖ_v^-n_v+ϖ_v^-n_v(t-1)]∈𝒪_v becomes α'β'+(t-1)ϖ_v^2k∈ϖ_v^n_v+k𝒪_v. So the contribution from this case is |τ(χ_v)|^-2/(K_v[m_v])∑_max{m_v-n_v,1-n_v}≤ k≤ -1𝒮(k), where 𝒮(k):=∑_α',β'ϖ_v^n_v+k α'β'=-(t-1)ϖ_v^2kϖ_v^n_v+kχ(1-ϖ_v^-kα')χ(1-ϖ_v^-kβ')ω_v(ϖ_v^-kβ'). Employing the trivial bound to 𝒮(k), we see that the corresponding contribution to ℰ_v(t) in this case (i.e., k>-n_v) is ≪(K_v[m_v])^-1∑_max{m_v-n_v,1-n_v}≤ k≤ -1q_v^k·1_e_v(1-t)=-2k. Therefore, the contribution to ℰ_v(t) in the case that k≤ -1 is ≪q_v^k/(K_v[m_v])∑_m_v-n_v≤ k≤ -11_e_v(1-t)=-2k≪ q_v^m_v+k∑_m_v-n_v≤ k≤ -11_e_v(1-t)=-2k. §.§.§ The case that k=0 Suppose that k=0 in (<ref>), which implies that r_2+e_v(1-t)=0 r_1+r_2≥ m_v min{r_2, r_1+r_2-n_v}≥ 0 (ϖ_v^r_2+αϖ_v^r_1+r_2-n_v)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v∈𝒪_v. Then r_1+r_2≥max{n_v,m_v}=n_v, e_v(t)-r_1≥ -n_v, and e_v(1-t)=-r_2≤ 0. So 0≤ r_2=-e_v(t-1), and n_v+e_v(1-t)≤ r_1≤ e_v(t)+n_v. Therefore, the contribution to ℰ_v(t) from this case is 1_e_v(t)-e_v(1-t)≥ 0/|τ(χ_v)|^2(K_v[m_v])∑_n_v+e_v(1-t)≤ r_1≤ e_v(t)+n_v𝒥_1(r_1,t), where 𝒥_1(r_1,t) is defined by ∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^× (ϖ_v^r_2+αϖ_v^r_1-e_v(t-1)-n_v)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v∈𝒪_vχ(α)χ(β)ω_v(1+βϖ_v^r_1-e_v(t-1)-n_v). * Suppose r_2≥ 1. Then e_v(1-t)≤ -1, implying that e_v(t)=e_v(1-t)=-r_2≤ -1. Hence, -r_1+e_v(t)=-r_1-r_2≤ -n_v (from the third constraint in (<ref>)). Along with the last condition in (<ref>) we have -r_1+e_v(t)≥ -n_v. So -r_1+e_v(t)=-n_v, i.e., r_1+r_2=n_v. Consequently, 𝒥_1(r_1,t)=∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^× (ϖ_v^-e_v(t)+α )β +ϖ_v^n_v-r_1t+α≡ 0ϖ_v^n_vχ(α)χ(β)ω_v(1+β). 
Write t=ϖ_v^e_v(t)γ under the embedding F^×↪ F_v^×, where γ∈𝒪_v^×. Then 𝒥_1(r_1,t)=∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^× (ϖ_v^-e_v(t)+α )(1+β)≡ -γ+ϖ_v^-e_v(t)ϖ_v^n_vχ(α)χ(β)ω_v(1+β), which, after a change of variables, is equal to 𝒥_1(r_1,t)=∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^× αβ≡ -γ+ϖ_v^-e_v(t)ϖ_v^n_vχ(α-ϖ_v^-e_v(t))χ(β-1)ω_v(β). Since -γ+ϖ_v^-e_v(t)∈𝒪_v^×, by the trivial bound, we have |𝒥_1(r_1,t)|≤ q_v^n_v. * Suppose r_2=0. Then e_v(1-t)=0. Therefore, 𝒥_1(r_1,t) is equal to ∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^× (1+αϖ_v^r_1+r_2-n_v)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v∈𝒪_vχ(α)χ(β)ω_v(1+βϖ_v^r_1-n_v). Changing variable α↦α, the sum 𝒥_1(r_1,t) becomes ∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^× (α+ϖ_v^r_1-n_v)(β +ϖ_v^n_v-r_1t) ≡ t-1ϖ_v^n_vχ(α)χ(β)ω_v(1+βϖ_v^r_1-n_v). Changing variables α↦α-ϖ_v^r_1-n_v and β↦β-ϖ_v^n_v-r_1t, 𝒥_1(r_1,t) can be rewritten as ∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^× αβ≡ t-1ϖ_v^n_vχ(α-ϖ_v^r_1-n_v)χ(β-ϖ_v^n_v-r_1t)ω_v(1-t+βϖ_v^r_1-n_v). Since e_v(t-1)=0, then β is uniquely determined by α. Hence the trivial bound yields |𝒥_1(r_1,t)|≤ q_v^n_v. Consequently, substituting the above discussions into (<ref>) we then see that the contribution from this case is ≪1_e_v(t)-e_v(1-t)≥ 0/|τ(χ_v)|^2(K_v[m_v])∑_r_1q_v^m_v≪ (e_v(t)-e_v(1-t)+1)q_v^m_v1_e_v(t)-e_v(1-t)≥ m_v-n_v, where n_v+e_v(1-t)≤ r_1≤ e_v(t)+n_v. §.§.§ The case that k≥ 1 Suppose that k≥ 1 in (<ref>), which implies that 2k+r_2+e_v(1-t)=0 k+r_1+r_2≥max{n_v, m_v}=n_v k+r_2≥ 0 ϖ_v^k[(ϖ_v^r_2+αϖ_v^r_1+r_2-n_v)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v]∈𝒪_v. From the last constraint we conclude that k-r_1+e_v(t)≥ -n_v. Hence e_v(1-t)=e_v(t)≤ -k≤ -1 e_v(t)≤ r_2≤ -e_v(t)-2 n_v+e_v(t)+1≤ r_1≤ n_v k≥ n_v-r_1-r_2≥ 1 [(ϖ_v^k+r_2+α )βϖ_v^-n_v+ϖ_v^k-r_1t+αϖ_v^k-n_v]∈𝒪_v. Therefore, the contribution to ℰ_v(t) from this case is 1_e_v(t)≤ -1/|τ(χ_v)|^2(K_v[m_v])∑_r_1∑_e_v(t)≤ r_2≤ -e_v(t)-2∑_1≤ k≤ -e_v(t)𝒥_2(r_1,r_2,k,t), where n_v+e_v(t)+1≤ r_1≤ n_v, and 𝒥_2(r_1,r_2,k,t) is defined by ∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^× (ϖ_v^k+r_2+α)(β +ϖ_v^k)≡ϖ_v^2k+r_2-tϖ_v^n_v+k-r_1ϖ_v^n_vχ(α)χ(β) ω_v(1+βϖ_v^r_1+r_2-n_v). Note that k≥ 1, β +ϖ_v^k∈(𝒪_v/ϖ_v^n_v𝒪_v)^×. So α is uniquely determined by β. Therefore, by the trivial bound, |𝒥_2(r_1,r_2,k,t)|≤ q_v^n_v. Along with (<ref>), the contribution to ℰ_v(t) from this case is ∑_n_v+e_v(t)+1≤ r_1≤ n_v∑_e_v(t)≤ r_2≤ -e_v(t)-2q_v^m_v1_e_v(t)≤ -1≪ (1-e_v(t))^2q_v^m_v1_e_v(t)≤ -1. A more refined bound can be derived in the case where k≥ 0 by estimating the character sums nontrivially. However, it becomes apparent that the contribution from the k≥ 0 case is overshadowed by the contribution from k≤ -1. Therefore, there is no necessity to further reduce the error term. §.§ Local Estimates at Ramified Places Σ_^+ Consider v∈Σ_^+, which means v|𝔔 and m_v≥ n_v, where m_v=e_v(𝔐) and n_v=r_χ_v (cf. §<ref>). Let v∈Σ_^+. Then ℰ_v(t)≪ (1-e_v(t))^2q_v^m_v if e_v(t)≤ -1,m_v=n_v, (e_v(t)-e_v(1-t)+1+m_v-n_v)q_v^m_v if e_v(t)≥ m_v-n_v, 0 otherwise, where the implied constant depends at most on F_v. Consider the notation used in the proof of Proposition <ref> in §<ref> (or in <cit.>). Since m_v≥ n_v≥ 1, the constraints (<ref>) can be simplified as follows: 2k+r_2+e_v(1-t)=0 k+r_1+r_2≥ m_v ϖ_v^k(ϖ_v^r_2+αϖ_v^r_1+r_2-n_v)∈𝒪_v^× ϖ_v^k(1+βϖ_v^r_1+r_2-n_v)∈𝒪_v^× ϖ_v^k[(ϖ_v^r_2+αϖ_v^r_1+r_2-n_v)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v]∈𝒪_v. By considering the second and fourth constraints in (<ref>), we deduce that k≥ 0. We can now proceed to examine the following two cases. 
§.§.§ The case that k=0 Suppose that k=0 in (<ref>), which implies that r_2+e_v(1-t)=0 r_1+r_2≥ m_v min{r_2, r_1+r_2-n_v}=0 (ϖ_v^r_2+αϖ_v^r_1+r_2-n_v)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v∈𝒪_v. Then r_1+r_2≥max{n_v,m_v}=m_v, e_v(t)-r_1≥ -n_v, and e_v(1-t)=-r_2≤ 0. So 0≤ r_2=-e_v(t-1), and m_v+e_v(1-t)≤ r_1≤ e_v(t)+n_v. Therefore, the contribution to ℰ_v(t) from this case is 1_e_v(t)-e_v(1-t)≥ m_v-n_v/|τ(χ_v)|^2(K_v[m_v])∑_m_v+e_v(1-t)≤ r_1≤ e_v(t)+n_v𝒥_1(r_1,t), where 𝒥_1(r_1,t) is defined by ∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^× (ϖ_v^r_2+αϖ_v^r_1-e_v(t-1)-n_v)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v∈𝒪_vχ(α)χ(β)ω_v(1+βϖ_v^r_1-e_v(t-1)-n_v). By the trivial bound (as in §<ref>) the sum in (<ref>) is ≪ (e_v(t)-e_v(1-t)+1+m_v-n_v)q_v^m_v1_e_v(t)-e_v(1-t)≥ m_v-n_v. §.§.§ The case that k≥ 1 Suppose that k≥ 1 in (<ref>), which implies that 2k+r_2+e_v(1-t)=0 k+r_1+r_2≥ n_v k+r_1+r_2-m_v=0 k+r_2≥ 0 ϖ_v^k[(ϖ_v^r_2+αϖ_v^r_1+r_2-n_v)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v]∈𝒪_v. Since m_v≥ n_v, then by the second and the third constraints in (<ref>) we have m_v=n_v. From the last constraint we conclude that k-r_1+e_v(t)≥ -n_v. Hence e_v(1-t)=e_v(t)≤ -k≤ -1, m_v=n_v e_v(t)≤ r_2≤ -e_v(t)-2 n_v+e_v(t)+1≤ r_1≤ n_v k= n_v-r_1-r_2≥ 1 [(ϖ_v^k+r_2+α )βϖ_v^-n_v+ϖ_v^k-r_1t+αϖ_v^k-n_v]∈𝒪_v. As in §<ref>, the contribution to ℰ_v(t) from this case is 1_e_v(t)≤ -1·1_m_v=n_v/|τ(χ_v)|^2(K_v[m_v])∑_n_v+e_v(t)+1≤ r_1≤ n_v∑_e_v(t)≤ r_2≤ -e_v(t)-2𝒥_2(r_1,r_2,k,t), where 𝒥_2(r_1,r_2,k,t) is defined by ∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^× (ϖ_v^k+r_2+α)(β +ϖ_v^k)≡ϖ_v^2k+r_2-tϖ_v^n_v+k-r_1ϖ_v^n_vχ(α)χ(β) ω_v(1+βϖ_v^r_1+r_2-n_v). By trivial bound the contribution to ℰ_v(t) in this case is ≪ (1-e_v(t))^2q_v^m_v1_e_v(t)≤ -11_m_v=n_v. Therefore, Proposition <ref> follows. Let notation be as before. Then 𝒥_v^(2)(r_1,t)≪ n_vq_v^r_ω_v+n_v/2+n_v-r_1+e_v(t)/2, where the implied constant is absolute. Note that (1+αϖ_v^r_1-n_v)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v∈𝒪_v amounts to (α^-1+ϖ_v^r_1-n_v)β +α^-1ϖ_v^n_v-r_1t+1∈ϖ_v^n_v𝒪_v. Changing the variable α↦α^-1, we have 𝒥_v^(2)(r_1,t)=∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^× (α+ϖ_v^r_1-n_v)β +αϖ_v^n_v-r_1t+1∈ϖ_v^n_v𝒪_vχ_v(α)χ_v(β)ω_v(1+βϖ_v^r_1-n_v). Changing variables α↦α-ϖ_v^r_1-n_v and β↦β-ϖ_v^n_v-r_1t, 𝒥_v^(2)(r_1,t) becomes ∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^× αβ≡ t-1ϖ_v^n_vχ_v(α-ϖ_v^r_1-n_v)χ_v(β-ϖ_v^n_v-r_1t)ω_v(1-t+βϖ_v^r_1-n_v). Let h∈𝒪_v^×. Let 𝒥_v^(2)(r_1,t,ψ_v,h) be defined by ∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^× αβ≡ t-1ϖ_v^n_vχ_v(α-ϖ_v^r_1-n_v)χ_v(β-ϖ_v^n_v-r_1t)ψ_v(hβ q_v^-r_ω_v). Here we recall that ψ_v is a fixed unramified additive chatacter of F_v. By definition, we have 𝒥_v^(2)(r_1,t,ψ_v,h)=0 if r_ω_v>r_1. Notice that χ is primitive. By Theorem 2G of <cit.> (cf. p.45) or Deligne's quasi-orthogonality of trace functions (cf. <cit.>) and Lemmas 12.2 and 12.3 in <cit.>, following the proof of Proposition 2 in <cit.>, we have 𝒥_v^(2)(r_1,t,ψ_v,h) ≪ n_v(q_v^n_v-r_1+e_v(t),q_v^r_1-n_v,q_v^n_v)^1/2q_v^n_v/2·1_r_ω_v≤ r_1, where the implied constant is absolute. In particular, (<ref>) yields that 𝒥_v^(2)(r_1,t,ψ_v,h) ≪ n_vq_v^n_v-r_1+e_v(t)/2· q_v^n_v/2·1_r_ω_v≤ r_1. Since ω_v is primitive, we have the Gauss sum expansion ω_v(γ)=1/τ(ω_v)∑_h∈ (𝒪_v/ω_v^r_ω_v𝒪_v)^×ω_v(h)ψ_v(hγ q_v^-r_ω_v), where q_v^r_ω_v is the conductor of ω_v. Hence, (<ref>) follows from (<ref>), (<ref>), triangle inequality, and the fact that |τ(ω_v)|=q_v^r_ω_v/2. * Suppose that k+r_1+r_2=n_v. Then (<ref>) amounts to 2k+r_2+e_v(1-t)=0 k+r_1+r_2=n_v≥ m_v k+r_2≥ 0 ϖ_v^k[(ϖ_v^r_2+αϖ_v^r_1+r_2-n_v)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v]∈𝒪_v. 
As a consequence, (<ref>) yields e_v(1-t)=e_v(t)≤ -k≤ -1 e_v(t)≤ r_2≤ -e_v(t)-2 n_v+e_v(t)+1≤ r_1≤ n_v k=n_v-r_1-r_2≥ 1 [(ϖ_v^k+r_2+α )βϖ_v^-n_v+ϖ_v^k-r_1t+αϖ_v^k-n_v]∈𝒪_v. From the last constraint we conclude that k-r_1+e_v(t)=-n_v. Hence ∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^× (ϖ_v^k+r_2+α)(β +ϖ_v^k)≡ϖ_v^2k+r_2-tϖ_v^-e_v(t)ϖ_v^n_vχ(α)χ(β) ω_v(1+βϖ_v^-k) Note that 2r_2+k=m_2-r_1+k=-e_v(t)≥ 1. Then γ:=ϖ_v^2k+r_2-tϖ_v^-e_v(t)∈𝒪_v^×. After a change of variables, we obtain 𝒥_v^(3)(r_1,r_2,t)=∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^× αβ≡γϖ_v^n_vχ(α-ϖ_v^k+r_2)χ(β-ϖ_v^k)ω_v(βϖ_v^-k). By <cit.> and the fact that k≤ -e_v(t), 𝒥_v^(3)(r_1,r_2,t)≪ n_vq_v^n_v+r_ω_v/2· q_v^min{k,k+r_2,n_v}/21_r_ω_v≤ n_v-k≤ n_vq_v^n_v+r_ω_v-e_v(t)/21_r_ω_v≤ n_v. 2k+r_2+e_v(1-t)=0 k+r_1+r_2≥ m_v ϖ_v^k(ϖ_v^r_2+αϖ_v^r_1+r_2-n_v)∈𝒪_v ϖ_v^k(1+βϖ_v^r_1+r_2-n_v)∈𝒪_v ϖ_v^k[(ϖ_v^r_2+αϖ_v^r_1+r_2-n_v)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v]∈𝒪_v. * Suppose that k+r_1+r_2≥ n_v+1. Then m_v=0, which forces that r_ω_v=0. In this case (<ref>) amounts to 2k+r_2+e_v(1-t)=0 k+r_1+r_2≥ n_v k+r_1+r_2≥ m_χ_v k+r_2≥ 0 ϖ_v^k[(ϖ_v^r_2+αϖ_v^r_1+r_2-n_v)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v]∈𝒪_v^×. e_v(1-t)=e_v(t)≤ -k≤ -1 e_v(t)≤ r_2≤ -e_v(t)-2 n_v+e_v(t)+1≤ r_1≤ n_v k+r_1+r_2≥ n_v+1 [(ϖ_v^k+r_2+α )βϖ_v^-n_v+ϖ_v^k-r_1t+αϖ_v^k-n_v]∈𝒪_v. Since n_v=m_v, then r_ω_v=0, i.e., ω_v is trivial. Hence the contribution from this case to ℰ_v(t) is ℰ^(3)_v(t):=∑_r_1=n_v+e_v(t)+1^n_v∑_r_2=e_v(t)^-e_v(t)-2q_v^-2r_1s_0-r_2s_01_r_1+r_2≤ n_v-1/|τ(χ_v)|^2(K_v[n_v])·𝒥_v^(3)(r_1,r_2,t), where we set k=n_v-r_1-r_2 and 𝒥_v^(3)(r_1,r_2,t):=∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^× (ϖ_v^k+r_2+α)(β +ϖ_v^k)+tϖ_v^-e_v(t)-ϖ_v^2k+r_2∈ϖ_v^n_v𝒪_vχ(α)χ(β). Therefore, we have ℰ^(3)_v(t)≪ n_v(1-e_v(t))^2q_v^-(2n_v+2s_0+3e_v(t))s_0·1_e_v(t)≤ -1· q_v^n_v-e_v(t)/2, where the implied constant is absolute. Then Proposition <ref> follows from (<ref>), (<ref>) and (<ref>). §.§ Local Estimates: archimedean Let v|∞. Define by ℰ_v^†:=∫_F_v^×∫_F_vmax_t∈ F-{0,1}|f_v([ y_v x_v^-1t; x_vy_v 1 ])|dx_vd^×y_v. By <cit.> we have the following estimate. Let notation be as before. Let v|∞. Then ℰ_v^†≪ T_v^ε, where the implied constant depends on ε, F, c_v, and C_v defined in §<ref>. §.§ Bounding Regular Orbital Integrals: Proof of Theorem <ref> §.§.§ The support of the rationals t∈ F-{0,1} Let notation be as before. Suppose t∈ F-{0,1}. Let f be the test function defined in §<ref>. Let 𝔛(𝔔,f):={ξ∈ F^×∩∏_v∈Σ_^-𝔭_v^-2(n_v-m_v)∏_v∤𝔔𝔭_v^m_v𝒪_F: |ξ|_v≪ 1, v|∞}, where the implied constant depends only on f_∞. Then the integral ∏_v∈Σ_Fℰ_v(t) converges absolutely and it vanishes unless t/t-1∈𝔛(𝔔,f). Recall the definition (<ref>): for v∈Σ_F, ℰ_v(t):=∫_F_v^×∫_F_v^×f_v([ y_v x_v^-1t; x_vy_v 1 ])χ_v(y_v)d^×y_vd^×x_v. By Lemma <ref> the integral ℰ_v(t)=1 for all but finitely many v's. It then follows from Propositions <ref> and <ref>, and Lemma <ref> that ∏_v∈Σ_Fℰ_v(t) converges absolutely and it is vanishing unless e_v(t)-e_v(t-1)≥ m_v, if v∤𝔔, e_v(t)-e_v(t-1)≥ 0, if v∈Σ_^+. e_v(t)-e_v(t-1)≥ -2(n_v-m_v), if v∈Σ_^-. Since t/(t-1)∈ F-{0,1}, then (<ref>) follows from (<ref>). §.§.§ Estimate of nonarchimedean integrals Fix an ideal ℜ⊂𝒪_F with the property that e_v(ℜ)=m_v for v∤𝔔, and e_v(ℜ)=0 for all v<∞ and v|𝔔. Fix an ideal 𝔑⊂𝒪_F with the property that e_v(𝔑)=n_v-m_v for v∈Σ_^-, and e_v(ℜ)=0 for all v<∞ and v∉Σ_^-. For t∈ F-{0,1} with t/(t-1)∈𝔛(𝔔,f) (cf. (<ref>)), we may write t/(t-1)=u, u∈ℜ𝔑^-2𝒪_F. Then 1/(t-1)=u-1. Let notation be as above. Let ℰ_v(t) be defined by (<ref>). Set ℰ_(t):=∏_v<∞|ℰ_v(t)|. Let t/(t-1)=u∈ℜ𝔑^-2𝒪_F be as in (<ref>). 
Then ℰ_(t) is ≪ (MQN_F(u(u-1)))^εM∏_v∈Σ_F, v∤𝔔1_e_v(u)≥ m_v∏_v∈Σ_^-𝒥_v^-(u)∏_v∈Σ_^+𝒥_v^+(u), where M=N_F(𝔐) (cf. (<ref>)), and 𝒥_v^-(u):= 1_e_v(u)≥ 0+∑_m_v-n_v≤ k≤ -1q_v^k1_e_v(u-1)=2k, 𝒥_v^+(u):= 1_e_v(u-1)≥ 11_m_v=n_v+1_e_v(u)≥ m_v-n_v. Here the implied constant in (<ref>) depends on F and ε. By Lemma <ref> we have ℰ_v(t)=1 if e_v(t)=e_v(1-t)=0, m_v=0, n_v=0, and v∤𝔇_F. There are finitely many remaining places v∈𝒱:={v∈Σ_F,:v|𝔐𝔑 or e_v(t)≠ 0 or e_v(t-1)≠ 0}. Let us denote the expression α as follows: ∏_v∈𝒱∩Σ_^+n_v^2(|e_v(t)-e_v(t-1)|+1)^2∏_v∈𝒱-Σ_^+(1+|e_v(t)|+2|e_v(t-1)|)^2, where the terms in the product dominate coefficients in Lemma <ref>, Propositions <ref> and <ref>. Using (<ref>), we observe that e_v(u)≥ -e_v(𝔔). Consequently, we have the estimate: α≪ (MQ)^2ε· (N_F(u)N_F(u-1))^ε, where the implied constants depends on ε. As a consequence, (<ref>) follows from Lemma <ref>, Propositions <ref> and <ref>. For x_∞=⊗_v|∞x_v∈ F_∞. For t∈𝔛(𝔔,f), parametrize t/(t-1) via (<ref>). Let 𝒞(x_∞):=∑_t∈ F-{0,1}, t/t-1=u∈𝔛(𝔔,f) |t/t-1|_v≪ |x_v|_v, v|∞ℰ_(t). Let notation be as before. Let x_∞∈ F_∞^×. Let 𝒞(x_∞) be defined by (<ref>). Then 𝒞(x_∞)≪_ε,F(MQ(1+|x_∞|_∞))^ε· |x_∞|_∞· Q·1_M≪ Q^2(M,Q) |x_∞|_∞, where the implied constant depends on ε and F. Note that u∈ℜ𝔑^-2𝒪_F-{0,1} and N_F(u)≪ |x_∞|_∞. Hence, N_F(ℜ𝔑^-2)≪ |x_∞|_∞, i.e., M/(M,Q)≪ Q^2|x_∞|_∞. By Lemma <ref>, we have 𝒞(x_∞)≪ (MQ(1+|x_∞|_∞))^ε𝒮(x_∞)·∏_v∈Σ_F,q_v^m_v. where the auxiliary sum 𝒮(x_∞) is defined by 𝒮(x_∞):=∑_u∈ℜ𝔑^-2𝒪_F∩ F^× |u|_v≪ |x_v|_v, v|∞ e_v(u)≥ m_v, v<∞, v∤𝔔𝒮^+(u)𝒮^-(u). Here the integral ideals ℜ and 𝔑 are defined in §<ref>, and 𝒮^+(u):= ∏_v∈Σ_^+[1_e_v(u-1)≥ 1·1_m_v=n_v+1_e_v(u)≥ m_v-n_v], 𝒮^-(u):= ∏_v∈Σ_^-[1_e_v(u)≥ 0+∑_m_v-n_v≤ k≤ -1q_v^k1_e_v(u-1)=2k]. We proceed to deal with 𝒮(x_∞). Let 𝔔^+:=∏_v∈Σ_^+𝔭_v and 𝔔^-:=∏_v∈Σ_^-𝔭_v. Expanding the products 𝒮^+(u)𝒮^-(u) we obtain 𝒮(x_∞)=∑_𝔞_1𝔞_2=𝔔^+ m_v=n_v, ∀ v|𝔞_1∑_𝔟_3𝔟_4=𝔔^-𝒮(𝔞_1,𝔟_3), where 𝒮(𝔞_1,𝔟_3):=∑_u∈ℜ𝔑^-2𝒪_F∩ F^× |u|_v≪ |x_v|_v, v|∞ e_v(u)≥ m_v, v<∞, v∤𝔔 e_v_1(u-1)≥ 1, v_1|𝔞_1 e_v_2(u)≥ m_v_2-n_v_2, v_2|𝔞_2 e_v_3(u)≥ 0, v_3|𝔟_3 ∏_v_4|𝔟_4∑_m_v_4-n_v_4≤ k≤ -1q_v_4^k1_e_v_4(u-1)=2k. Write 𝔟_4=𝔔^-𝔟_3^-1=∏_v∈𝒱_4𝔭_v, where 𝒱_4={v_1',⋯,v_l'} is a subset of Σ_^-. Denote by k=(k_1,⋯,k_l)∈ℤ^l. Then ∏_v_4|𝔟_4∑_m_v_4-n_v_4≤ k≤ -1q_v_4^k1_e_v_4(u-1)=2k=∑_k=(k_1,⋯,k_l) m_v_i'-n_v_i'≤ k_i≤ -1, 1≤ i≤ l∏_j=1^lq_v_j'^k_j1_e_v_j'(u-1)=2k_j. Therefore, 𝒮(x_∞)=∑_𝔞_1𝔞_2=𝔔^+ m_v=n_v, ∀ v|𝔞_1∑_𝔟_3𝔟_4=𝔔^-∑_k=(k_1,⋯,k_l) m_v_i'-n_v_i'≤ k_i≤ -1, 1≤ i≤ l∏_j=1^lq_v_j'^k_j·𝒮^†(x_∞), where 𝒮^†(x_∞):=∑_u∈ℜ𝔑^-2𝒪_F∩ F^× |u|_v≪ |x_v|_v, v|∞ e_v(u)≥ m_v, v<∞, v∤𝔔 e_v_1(u-1)≥ 1, v_1|𝔞_1 e_v_2(u)≥ m_v_2-n_v_2, v_2|𝔞_2 e_v_3(u)≥ 0, v_3|𝔟_3 e_v_j'(u-1)=2k_j, 1≤ j≤ l 1. By counting rational lattice points in a bounded region, we have 𝒮^†(x_∞)≪∑_u∈ℜ𝔑^-2𝒪_F∩ F^× |u|_v≪ |x_v|_v, v|∞ e_v(u)≥ m_v, v<∞, v∤𝔔 e_v_2(u)≥ m_v_2-n_v_2, v_2|𝔞_2 e_v_3(u)≥ 0, v_3|𝔟_3 e_v_j'(u)=2k_j 1≪ |x_∞|_∞∏_v∈Σ_F, v∤𝔔q_v^-m_v∏_v_2|𝔞_2q_v_2^n_v_2-m_v_2∏_j=1^lq_v_j'^-2k_j. Therefore, 𝒮(x_∞) is majorized by |x_∞|_∞∏_v∈Σ_F, v∤𝔔1/q_v^m_v∑_𝔞_1𝔞_2=𝔔^+ m_v=n_v, ∀ v|𝔞_1∏_v_2|𝔞_2q_v_2^n_v_2-m_v_2∑_𝔟_3𝔟_4=𝔔^-∑_k=(k_1,⋯,k_l) m_v_i'-n_v_i'≤ k_i≤ -1, 1≤ i≤ l∏_j=1^l1/q_v_j'^k_j. Notice that ∑_𝔞_1𝔞_2=𝔔^+ m_v=n_v, ∀ v|𝔞_1∏_v_2|𝔞_2q_v_2^n_v_2-m_v_2=∑_v|𝔔^+q_v_2^n_v_2-m_v_2∑_𝔞_1𝔞_2=𝔔^+ m_v=n_v, ∀ v|𝔞_11≪ Q^ε∑_v|𝔔^+q_v_2^n_v_2-m_v_2, and ∑_𝔟_3𝔟_4=𝔔^-∑_k=(k_1,⋯,k_l) m_v_i'-n_v_i'≤ k_i≤ -1, 1≤ i≤ l∏_j=1^lq_v_j'^-k_j≪∏_v|𝔔^-q_v^n_v-m_v∑_𝔟_3𝔟_4=𝔔^-∑_k=(k_1,⋯,k_l) m_v_i'-n_v_i'≤ k_i≤ -1, 1≤ i≤ l1, which is ≪ Q^ε∏_v|𝔔^-q_v^n_v-m_v. 
Therefore, 𝒮(x_∞)≪ |x_∞|_∞Q^ε∏_v∈Σ_F, v∤𝔔q_v^-m_v∏_v|𝔔q_v^n_v-m_v. Then (<ref>) follows from substituting (<ref>) into (<ref>). §.§.§ Proof of Theorem <ref> Recall the definition (<ref>) in §<ref>: J^,2_,(f,0,χ)=∑_t∈ F-{0,1}∫_𝔸_F^×∫_𝔸_F^×f([ y x^-1t; xy 1 ])χ(y)d^×yd^×x. So the regular orbital integrals J^,2_,(f,0,χ) is ≪∫_F_∞^×∫_F_∞^×∑_t∈ F-{0,1} t/t-1∈𝔛(𝔔,f)ℰ_(t)|f_∞([ y_∞ x_∞^-1t; x_∞y_∞ 1 ])|d^×y_∞d^×x_∞, where ℰ_(t):=∏_v<∞|ℰ_v(t)|. By the support of f_∞ (cf. (<ref>) in §<ref>), we have f_∞([ y_∞ x_∞^-1t; x_∞y_∞ 1 ])=0 unless y_∞≍ 1, |x_v|_v≪ 1, and |t/t-1|_v≪ |x_v|_v, for all v|∞. Write t/(t-1)=u𝔑^-2ℜ with u∈𝒪_F as in (<ref>). Then J^,2_,(f,0,χ) is ≪∫_F_∞^×∫_1+o(1)1_|x_v|_v≪ 1 v|∞·𝒞(x_∞)·max_t∈𝔛(𝔔,f)|f_∞([ y_∞ x_∞^-1t; x_∞y_∞ 1 ])|d^×y_∞d^×x_∞, where 𝒞(x_∞) is defined by (<ref>). Note that |x_v|_v≪ 1 for v|∞, yielding that |x_∞|_∞≪ 1. Hence, we may replace 1_M≪ Q^2(M,Q) |x_∞|_∞ with 1_M≪ Q^2(M,Q) in Lemma <ref>. As a consequence, we have J^,2_,(f,s_0,χ)≪_ε(MQ)^ε· Q·1_M≪ Q^2(M,Q)·∏_v|∞ℰ_v^†, where ℰ_v^† is defined by (<ref>). By Lemma <ref>, the above bound becomes J^,2_,(f,s_0,χ)≪ T^εM^εQ^1+ε·1_M≪ Q^2(M,Q), where the implied constant depends on ε, F, c_v, and C_v, v|∞. § PROOF OF MAIN RESULTS Recall the intrinsic data in §<ref>. Let F be a number field. Let χ=⊗_vχ_v be a primitive unitary Hecke character of F^×\𝔸_F^×. §.§ The Spectral Side Recall the lower bound of J_^,(f,0,χ) in §<ref>. * §.§ The Geometric Side Recall the geometric side (<ref>) in §<ref>: J_^,(f,0,χ)=J^_,(f,χ)+J^_,(f,χ)+J^,2_,(f,0,χ). Let notation be as before. Then J_^,(f,0,χ)≪ T^1/2+εM^1+ε+T^εM^εQ^1+ε·1_M≪ Q^2(M,Q), where the implied constant depends on ε, F, c_v, and C_v, v|∞ (cf. §<ref>). By Propositions <ref> and <ref>, we have J^_,(f,χ)+J^_,(f,χ)≪_ε M^1+εT^1/2+ε, where the implied constant depends only on F, ε, and c_v, C_v at v|∞, cf. §<ref>. Moreover, by Theorem <ref> we have J^,2_,(f,0,χ)≪ T^εM^εQ^1+ε·1_M≪ Q^2(M,Q). The estimate (<ref>) follows from the above inequalities. §.§ Put It All Together: Proof of Main Results Substituting Theorem <ref> and Proposition <ref> into the regularized relative trace formula J_^,(f,0,χ)=J_^,(f,0,χ) (cf. Corollary <ref> in §<ref>), we obtain the following. Let the notation be as before. Denote by 𝒜_0(Π_∞,𝔐;χ_∞,ω) the set of cuspidal representations and 𝒳_0(Π_∞,𝔐;χ_∞,ω) the set of Hecke characters, as defined in §<ref>. Then ∑_π|L(1/2,π×χ)|^2≪ T^1+εM^1+εQ^ε+T^1/2+εM^εQ^1+ε·1_M≪ Q^2(M,Q), where π∈𝒜_0(Π_∞,𝔐;χ_∞,ω), and ∑_η∫_ℝ|L(1/2+it,ηχ)L(1/2+it,ωηχ)|^2/|L(1+2it,ωη^2)|^2dt ≪ T^1+εM^1+εQ^ε +T^1/2+εM^εQ^1+ε·1_M≪ Q^2(M,Q), where η∈𝒳_0(Π_∞,𝔐;χ_∞,ω), and the implied constant depends only on F, ε, and c_v, C_v at v|∞, cf. §<ref>. Let notation be as before. Then ∑_π∈𝒜_0(M;χ_∞) σ_π(𝔭)≥σ|L(1/2,π×χ)|^2≪ T^1-σ/2+ε· M^1-2σ+ε· Q^1-σ+ε. If C_(χ)>1, then Theorem <ref> follows from (<ref>). In the case where C_(χ)=1, we replace χ with χχ_0, where χ_0 is a fixed Hecke character induced from a Dirichlet character with a fixed modulus, such as 3. Similarly, we replace π with π⊗χ_0. By applying Theorem <ref> to π⊗χ_0 and χχ_0, we obtain the same bound (with a different implied constant dependent on the modulus of χ_0) for the second term L(1/2,π×χ). Consequently, Theorem <ref> follows. Let π=η⊞η. Then ω=η^2. By <cit.> there exists t_0∈ [2^-1exp(-3√(log C(ηχ))), exp(-3√(log C(ηχ)))] (which might depend on the character ηχ) such that |L(1/2,ηχ)|≪exp(log^3/4C(ηχ))|L(1/2+it_0,ηχ)|, where the implied constant depends only on F. Here C(χη)≪ M^1/2Q is the analytic conductor of ηχ. 
By <cit.> we have ∫_ℝ|L(1/2+it,ηχ)L(1/2+it,ωηχ)|^2/|L(1+2it,ωη^2)|^2dt≫|L(1/2+it_0,ηχ)L(1/2+it_0,ωηχ)|^2/C(ηχ)^ε. Since ω=η^2, then |L(1/2+it_0,ωηχ)|=|L(1/2+it_0,ηχ)|. So it follows from Theorem <ref> that, for χ∈𝒳_0(Π_∞,𝔐;χ_∞,ω), |L(1/2+it_0,ηχ)|^4≪ T^1+εM^1+εQ^ε+T^1/2+εM^εQ^1+ε·1_M≪ Q^2(M,Q). Suppose η is primitive. Then C_(η)=M^1/2. It then follows from (<ref>) and (<ref>) that L(1/2,ηχ)≪_η_∞,χ_∞ C_(η)^1/2+ε+C_(χ)^1/4+ε. By symmetry we also have L(1/2,ηχ)≪_η_∞,χ_∞ C_(η)^1/4+ε+C_(χ)^1/2+ε. Hence the estimate (<ref>) holds. §.§ Proof of Corollary <ref> Let f∈ℱ_2k^new(N). By Hecke's theorem there exists a primitive quadratic character χ of conductor q ≪ kN^1+ε such that L(1/2,f×χ)≠ 0. Here the implied constant is absolute. Let k∈{2,3,4,5,7}. Denote by 𝒩:=#{g∈ℱ_2k^(N): L(1/2,f×χ)L(1/2,g×χ)≠ 0}. Recall that (e.g., cf. <cit.>) ∑_g∈ℱ_2k^(N)L(1/2,g×χ)≫ N^1-ε. Then by Cauchy-Schwarz inequality and Corollary <ref> we obtain N^1-ε≪𝒩^1/2·[∑_g∈ℱ_2k^(N)|L(1/2,g×χ)|^2]^1/2≪𝒩^1/2· N^1/2+ε, leading to (<ref>). Here the implied constant depends only on ε. In the above proof, a crucial new ingredient is our Corollary <ref>, which effectively replaces the third moment estimate employed in <cit.>: ∑_g∈ℱ_2k^(N)L(1/2,g×χ)^3≪_k,ε(Nq)^1+ε. It is worth noting that Corollary <ref>, given by ∑_g∈ℱ_k^(N)|L(1/2,g×χ)|^2≪ (kNq)^ε(kN+k^1/2q·1_N≪ q^2(N,q)), provides the average Lindelöf estimate in the N-aspect when q≪_k N^1+ε. However, (<ref>) does not yield this bound when q is large. § HYBRID SUBCONVEXITY: PROOF OF THEOREM <REF> In this section, we will establish the validity of Theorem <ref> by presenting a proof that draws upon similar techniques to those used in the proof of Theorem <ref> (cf. §<ref>). However, instead of relying on Theorem <ref> in §<ref>, we will utilize the relative trace formula (i.e. Theorem <ref>) from §<ref>. Notably, the proof is simplified by not requiring amplification, although the overall methodology remains similar. §.§ Notation Recall the data in Theorem <ref>: we let * χ=⊗_vχ_v be a Hecke character of 𝔸_F^×/F^×, and Q:=C_(χ) ; * 𝔐 be an integral ideal of norm |𝔐|:=N_F(𝔐); * 𝒜_0^χ_∞(T;𝔐) be the set of cuspidal automorphic representations π=⊗_vπ_v of PGL(2)/F such that π_=⊗_v<∞π_v has arithmetic conductor dividing 𝔐, and π_v⊗χ_v has uniform parameter growth of size (T_v;c_v,C_v), for all v|∞, cf. §<ref>, where T=∏_v|∞ T_v. Note that Weyl law yields #𝒜_0^χ_∞(T;𝔐)=(T|𝔐|)^1+o(1). §.§.§ Choice of Test Functions Despite potential ambiguity, we will continue to use the notation f=⊗_vf_v to refer to the test function, which is defined as follows. * Let f_∞ be defined as in §<ref>. * For v∈Σ_F,, let m_v'=e_v(𝔐), and n_v=n_v, the local exponent of χ_v (cf. §<ref>). Define a function on G(F_v), supported on Z(F_v)\ K_v[m_v'], by f_v(z_vk_v;1)=(K_v[m_v'])^-1, where K_v[m_v'] is the image of K_v[m_v'] in G(F_v). For g_v∈ G(F_v), define by f_v(g_v)=1/|τ(χ_v)|^2∑_α∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×∑_β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×χ_v(α)χ_v(β)f_v(g_α,β,v;1), where τ(χ_v)=∑_α∈ (𝒪_v/ϖ_v^n_v𝒪_v)^×ψ_v(αϖ_v^-n_v)χ_v(α) is the Gauss sum relative to the additive character ψ_v, and g_α,β,v:=[ 1 αϖ_v^-n_v; 1 ]g_v[ 1 βϖ_v^-n_v; 1 ]. Note that n_v=0 for almost all v∈Σ_F,. Hence, for all but finitely many v∈Σ_F,, the test function f_v(·)=f_v(·;1) (cf. (<ref>)) supports in Z(F_v)\ K_v[m_v]. * Take f=⊗_v≤∞f_v as our test function into Theorem <ref>: J_^,(f,0,χ)=J_^,(f,0,χ), where 0=(s_0,s_0), with s_0:=2^-1exp(-2√(log C(π×χ))) (cf. §<ref>). §.§ The Spectral Side Similar to Theorem <ref>, we have Let notation be as before. 
Then 𝒥_^(α,χ)≫_εT^-1/2-ε(|𝔐|Q)^-ε∑_π∈𝒜_0^χ_∞(T;𝔐)|L(1/2,π×χ)|^2, where the implied constant depends only on F, ε, and c_v, C_v at v|∞, cf. §<ref>. §.§ The Geometric Side In this section we handle the geometric side J_^,(f,0,χ)= J^_,(f,0,χ)+J^,+_,(f,0,χ)+J^,∧_,(f,0,χ) +J^,_,(f,0,χ)+J^,2_,(f,0,χ). §.§.§ Bounds of Irregular Orbital Integrals The estimates from §<ref> and §<ref> remain valid with T≍ T, 𝒩_f replaced by 1, and [M,M'Q] replaced with |𝔐|. Specifically, Propositions <ref> and <ref>, and Lemmas <ref>, <ref> and <ref> become: Let notation be as before. Then J^_,(f,0,χ)≪ |𝔐|^1+εT^1/2+ε, J^,∧_,(f,0,χ)≪ s_0^-1|𝔐|^1+εT^1/2+ε, J^,+_,(f,0,χ)+J^,,1_,(f,0,χ)≪ T^ε|𝔐|^ε, J^,,2_,(f,0,χ)≪ s_0^-1|𝔐|^1+εT^1/4+ε, where the implied constant depends only on F, ε, and c_v, C_v at v|∞, cf. §<ref>. §.§.§ Bounds of Regular Orbital Integrals We need to do Theorem <ref> in §<ref>. Let notation be as before. Then J^,2_,(f,0,χ)≪ T^ε|𝔐|^εQ^1+ε, where the implied constant depends on ε, F, c_v, and C_v, v|∞. Proposition <ref> in §<ref>. Let v|𝔔. Then ℰ_v(t)≪ n_v q_v^n_v/2-e_v(t)s_0 if e_v(t)≤ -1, κ_vq_v^m_v'+e_v(t)/2 if e_v(t)≥ m_v'-n_v,e_v(t-1)=0, 0 otherwise, where κ_v=n_v(e_v(t)+n_v-m_v'+1), and the implied constant is absolute. Following the proof of Proposition <ref>, the local integral ℰ_v(t) becomes 1/|τ(χ_v)|^2∑_α,βχ(α)χ(β)∑_r_1, r_2∈ℤ q_v^-2r_1s_0-r_2s_01_Y_α,β,r_1,r_2,t∈ Z(F_v)K_v[m_v]f_v(Y_α,β,r_1,r_2,t;1), where f_v(·;1) is defined by (<ref>), and Y_α,β,r_1,r_2,t is defined by [ ϖ_v^r_2+αϖ_v^r_1+r_2-n_v (ϖ_v^r_2+αϖ_v^r_1+r_2-n_v)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v; ϖ_v^r_1+r_2 1+βϖ_v^r_1+r_2-n_v ]. Note that 1_Y_α,β,r_1,r_2,t∈ Z(F_v)K_v[m_v]≠ 0 unless ϖ_v^kY_α,β,r_1,r_2,t∈ K_v[m_v'] for some k∈ℤ. Similar to (<ref>), the constraint (<ref>) amounts to 2k+r_2+e_v(1-t)=0 k+r_1+r_2≥ m_v' ϖ_v^k(ϖ_v^r_2+αϖ_v^r_1+r_2-n_v)∈𝒪_v^× ϖ_v^k(1+βϖ_v^r_1+r_2-n_v)∈𝒪_v^× ϖ_v^k[(ϖ_v^r_2+αϖ_v^r_1+r_2-n_v)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v]∈𝒪_v. Then the estimate of ℰ_v(t) boils down to Proposition <ref> (with m_v replaced by m_v') if m_v'≥ n_v. Now we assume that m_v'<n_v. * Suppose k≥ 1. Then it follows from ϖ_v^k(1+βϖ_v^r_1+r_2-n_v)∈𝒪_v^× that k+r_1+r_2=n_v, which yields that r_1=n_v+k+e_v(1-t). From ϖ_v^k(ϖ_v^r_2+αϖ_v^r_1+r_2-n_v)∈𝒪_v^× we have r_1+k≥ 0, which, in conjunction with the first constraint, leads to k+r_2≥ 0. Therefore, 1≤ k≤ -e_v(t-1) r_1=n_v+k+e_v(1-t) r_2=-2k-e_v(1-t). * Suppose k≤ -1. Then it follows from ϖ_v^k(1+βϖ_v^r_1+r_2-n_v)∈𝒪_v^× that r_1+r_2=n_v, which contradicts k+r_1+r_2≥ m_v'. Hence, we must have k≤ 0. r_1=n_v, r_2=0, m_v'-n_v≤ k≤ -1, e_v(1-t)=-2k, α≡ -1ϖ_v^-k β≡ -1ϖ_v^-k Moreover, β is uniquely determined by αϖ_v^n_v. So * Suppose k=0 in (<ref>), which implies that r_2+e_v(1-t)=0 r_1+r_2≥ m_v' min{r_2, r_1+r_2-n_v}=0 (ϖ_v^r_2+αϖ_v^r_1+r_2-n_v)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v∈𝒪_v. * Suppose r_2≥ 1. Then r_1+r_2=n_v. Since e_v(t)-r_1≥ -n_v, then e_v(t)+r_2≥ 0. So e_v(t)-e_v(1-t)≥ 0. In this case we have r_2=-e_v(t). Therefore, the contribution to ℰ_v(t) from this case is ℰ^(1)_v(t):=q_v^-2n_vs_0-e_v(t)s_01_e_v(t)≤ -1/|τ(χ_v)|^2(K_v[m_v'])∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^× (ϖ_v^-e_v(t)+α)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v∈𝒪_vχ(α)χ(β). Write t=ϖ_v^e_v(t)γ under the embedding F^×↪ F_v^×, where γ∈𝒪_v^×. Then the last sum over α and β becomes ∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^× (ϖ_v^-e_v(t)+α )(β+1)≡ -γ+ϖ_v^-e_v(t)ϖ_v^n_vχ(α)χ(β), which, after a change of variables, is equal to 𝒥_v^(1)(t):=∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^× αβ≡ -γ+ϖ_v^-e_v(t)ϖ_v^n_vχ(α-ϖ_v^-e_v(t))χ(β-1). 
As a special case of Proposition 2 on p.71 of <cit.> we have 𝒥_v^(1)(t)≪ n_vq_v^n_v/2, where the implied constant is absolute. Hence, ℰ^(1)_v(t)≪ n_vq_v^m_v'-n_v/2-2n_vs_0-e_v(t)s_01_m_v'≤ n_v1_e_v(t)≤ -1. * Suppose r_2=0. Then e_v(1-t)=0, r_1≥ n_v, and e_v(t)≥ r_1-n_v. The contribution to ℰ_v(t) from this case is ℰ^(2)_v(t):=∑_r_1=n_v^e_v(t)+n_vq_v^-2r_1s_01_e_v(t)≥ r_1-n_v/|τ(χ_v)|^2(K_v[m_v'])·𝒥_v^(2)(r_1,t), where 𝒥_v^(2)(r_1,t):=∑_α,β∈ (𝒪_v/ϖ_v^n_v𝒪_v)^× (1+αϖ_v^r_1-n_v)βϖ_v^-n_v+ϖ_v^-r_1t+αϖ_v^-n_v∈𝒪_vχ(α)χ(β). By Lemma <ref> below (in conjunction with r_1≥ n_v) the sum ℰ^(2)_v(t) is ≪ n_v(e_v(t)+1)1_e_v(t)≥ e_v(1-t)=0/|τ(χ_v)|^2(K_v[m_v'])· q_v^n_v+e_v(t)/2. Since |τ(χ_v)|^2(K_v[m_v])≫ q_v^n_v-m_v', then ℰ^(2)_v(t)≪ n_v^2(e_v(t)+1)1_e_v(t)≥ e_v(1-t)=0q_v^m_v'-n_v/2+e_v(t)/2. Then Proposition <ref> follows from (<ref>) and (<ref>).
http://arxiv.org/abs/2307.04580v1
20230710141505
An implicit DG solver for incompressible two-phase flows with an artificial compressibility formulation
[ "Giuseppe Orlando" ]
math.NA
[ "math.NA", "cs.NA", "physics.flu-dyn" ]
An implicit DG solver for incompressible two-phase flows with an artificial compressibility formulation Giuseppe Orlando^(1) ======================================================================================================= ^(1) MOX - Dipartimento di Matematica, Politecnico di Milano Piazza Leonardo da Vinci 32, 20133 Milano, Italy [email protected] Keywords: Navier-Stokes equations, incompressible flows, two-phase flows, artificial compressibility, Discontinuous Galerkin methods. We propose an implicit Discontinuous Galerkin (DG) discretization for incompressible two-phase flows using an artificial compressibility formulation. The conservative level set (CLS) method is employed in combination with a reinitialization procedure to capture the moving interface. A projection method based on the L-stable TR-BDF2 method is adopted for the time discretization of the Navier-Stokes equations and of the level set method. Adaptive Mesh Refinement (AMR) is employed to enhance the resolution in the vicinity of the interface between the two fluids. The effectiveness of the proposed approach is shown in a number of classical benchmarks, such as the Rayleigh-Taylor instability and the rising bubble test case, for which a specific analysis on the influence of different choices of the mixture viscosity is carried out. § INTRODUCTION Two-phase flows are common in many engineering and industrial applications. An evolving interface delimits the bulk regions of the single phases. Many techniques have been developed over the years to capture the motion of the interface. Two classes of methods are commonly used to locate the interface: interface-tracking and interface-capturing. Interface-tracking schemes employ either Arbitrary Lagrangian–Eulerian (ALE) methods on a mesh that deforms with the interface <cit.> or marker and cell methods <cit.>. Interface-capturing techniques are instead based on fixed spatial grids with an interface function which captures the interface. A full survey on interface-capturing methods goes beyond the scope of this work and we refer e.g. to <cit.> for a review of these techniques. Interface-capturing methods include the level set (LS) method <cit.>, which represents the interface as an iso-surface of the so-called level set function. Classically, the level set function is defined as the signed distance function. However, this choice leads to non-conservative methods. A number of approaches have been developed to overcome this issue; in this work, we employ the conservative level set (CLS) method, originally proposed in <cit.>, and briefly summarized in Section <ref>. CLS includes a reinitialization equation to maintain the shape of the level set, which will also be discussed in Section <ref>. Changing fluid properties, such as density and viscosity, and surface tension at the interface lead to discontinuities that make the discretization of the Navier-Stokes equations particularly challenging. The Discontinuous Galerkin (DG) method has been widely employed in the field of Computational Fluid Dynamics, see e.g. <cit.>, and is a natural candidate for the discretization of the governing equations of two-phase flows. Several approaches have been proposed in the literature combining the DG method and the level set method, see among many others <cit.>. In this paper, we propose an extension of the solver for single-phase incompressible Navier-Stokes equations with an artificial compressibility formulation presented in <cit.>, so as to overcome well-known issues of projection methods.
The time discretization is therefore based on the TR-BDF2 scheme <cit.>, which is a second order two-stage method. A brief review of the TR-BDF2 method will be given in Section <ref>, whereas we refer to <cit.> for a detailed analysis of the scheme. The solver is implemented in the framework of the open source numerical library deal.II <cit.>, which supports native non-conforming h-adaptation. We will exploit these capabilities to enhance the resolution in the regions close to the interface between the two fluids. The paper is structured as follows: the model equations and their non-dimensional formulation are reviewed in Section <ref>. The time discretization approach is outlined and discussed in Section <ref>. The spatial discretization is presented in Section <ref>. The application of the proposed method to a number of significant benchmarks is reported in Section <ref>. Here, we also analyze the impact of different possible choices for the mixture viscosity when the interface undergoes large deformations. Finally, some conclusions and perspectives for future work are presented in Section <ref>. § THE MODEL EQUATIONS Let Ω⊂ℝ^d, 2 ≤ d ≤ 3 be a connected open bounded set with a sufficiently smooth boundary ∂Ω and denote by 𝐱 the spatial coordinates and by t the temporal coordinate. The two fluids in Ω are considered immiscible and they are contained in the subdomains Ω_1(t) and Ω_2(t), respectively, so that Ω_1(t)∪Ω_2(t) = Ω. The moving interface between the two fluids is denoted by Γ(t), defined as Γ(t) = ∂Ω_1(t) ∩∂Ω_2(t). We consider the classical unsteady, isothermal, incompressible Navier-Stokes equations with gravity, which read as follows <cit.>: ρ(𝐱)[∂𝐮/∂ t + (𝐮·∇)𝐮] = -∇ p + ∇·[2μ(𝐱) 𝐃(𝐮)] + ρ(𝐱)𝐠 ∇·𝐮 = 0, for 𝐱∈Ω, t ∈ (0, T_f], supplied with suitable initial and boundary conditions. Here T_f is the final time, 𝐮 is the fluid velocity, p is the pressure, ρ is the fluid density and μ is the dynamic viscosity. We assume that both the density and the viscosity are discontinuous functions ρ(𝐱) = ρ_1 in Ω_1(t) ρ_2 in Ω_2(t) and μ(𝐱) = μ_1 in Ω_1(t) μ_2 in Ω_2(t) with ρ_1, ρ_2, μ_1, and μ_2 constant values. Moreover, 𝐠 is the gravitational acceleration and 𝐃(𝐮) denotes the symmetric part of the gradient of the velocity, defined as 𝐃(𝐮) = 1/2[∇𝐮 + (∇𝐮)^T]. In the following, for the sake of simplicity in the notation, we omit the explicit dependence on space and time for the different quantities. Surface tension effects are taken into account through the following balance of forces at the interface Γ: [𝐮]_Γ = 0 [-p𝐈 + 2μ𝐃(𝐮)]_Γ𝐧_Γ = σκ𝐧_Γ, where 𝐧_Γ is the outward unit normal to Γ, [Ψ]_Γ = Ψ|_Γ∩Ω_1 - Ψ|_Γ∩Ω_2 denotes the jump of Ψ across the interface Γ, σ is the constant surface tension coefficient, and κ = -∇·𝐧_Γ is the curvature. The first condition implies the continuity of the velocity along Γ, whereas the second condition describes the balance of forces at the interface. A common way to handle the term with surface tension is to introduce the following volumetric force <cit.>: 𝐟_σ = σκ𝐧_Γδ(Γ), where δ(Γ) is the Dirac delta distribution supported on the interface. Hence, system (<ref>) can be rewritten as follows: ρ[∂𝐮/∂ t + (𝐮·∇)𝐮] = -∇ p + ∇·[2μ𝐃(𝐮)] + ρ𝐠 + 𝐟_σ ∇·𝐮 = 0. A level set approach <cit.> is employed to capture the interface Γ. The interface between the two fluids is considered sharp and is described as the zero level set of a smooth function. Hence, the following relation holds: ∂φ/∂ t + 𝐮·∇φ = 0, where φ is the level set function.
A common choice <cit.> is to consider as level set the signed distance function to Γ. In order to fix the notation, we consider φ < 0 in Ω_2 and φ > 0 in Ω_1. Therefore, we define φ = -dist(𝐱,Γ) if 𝐱∈Ω_2 0 if 𝐱∈Γ dist(𝐱,Γ) if 𝐱∈Ω_1 The unit normal vector can be evaluated at each point as follows <cit.>: 𝐧_Γ = ∇φ/|∇φ|, 𝐱∈Γ, so that (<ref>) is equivalent to ∂φ/∂ t + (𝐮·𝐧_Γ)|∇φ| = 0. Relation (<ref>) shows that the deformation of the level set function is due only to the normal component of the velocity. Moreover, we can express the density and the dynamic viscosity through the Heaviside function H ρ = ρ_2 + (ρ_1 - ρ_2)H(φ) μ = μ_2 + (μ_1 - μ_2)H(φ) The whole system of equations reads therefore as follows: ρ[∂𝐮/∂ t + (𝐮·∇)𝐮] = -∇ p + ∇·[2μ𝐃(𝐮)] + ρ𝐠 + 𝐟_σ ∇·𝐮 = 0 ∂φ/∂ t + 𝐮·∇φ = 0. System (<ref>) can be rewritten in conservative form. First of all, thanks to the incompressibility constraint ∇·𝐮 = 0, we can rewrite (<ref>) as ∂φ/∂ t + ∇·(φ𝐮) = 0. Moreover, one can verify that (<ref>), in combination with the incompressibility constraint, implies mass conservation. Indeed, we get ∂ρ/∂ t + ∇·(ρ𝐮) = ∂ρ/∂ t + 𝐮·∇ρ = (ρ_1 - ρ_2)(∂ H(φ)/∂ t + 𝐮·∇ H(φ)) = (ρ_1 - ρ_2)δ(φ)(∂φ/∂ t + 𝐮·∇φ) = 0, where we exploited the relation dH(φ)/dφ = δ(φ) <cit.>, with δ(φ) denoting the Dirac delta distribution with support equal to the function φ which implicitly describes the surface. It is appropriate to stress the fact that the differential operators involving the Heaviside function H(φ) have to be intended in a proper distributional sense. Finally, as discussed in <cit.>, we can rewrite 𝐟_σ = ∇·[σ(𝐈 - 𝐧_Γ⊗𝐧_Γ)δ(Γ)], where, once more, the divergence operator should be intended in a distributional sense. Hence, the conservative form of (<ref>) is ∂(ρ𝐮)/∂ t + ∇·(ρ𝐮⊗𝐮) = -∇ p + ∇·[2μ𝐃(𝐮)] + ρ𝐠 + 𝐟_σ ∇·𝐮 = 0 ∂φ/∂ t + ∇·(φ𝐮) = 0. The Continuum Surface Force (CSF) approach, introduced in <cit.>, is employed to treat density, viscosity, and the surface tension term. A regularized Heaviside H_ε(φ) is introduced, so as to obtain ρ≈ρ_2 + (ρ_1 - ρ_2)H_ε(φ) μ≈μ_2 + (μ_1 - μ_2)H_ε(φ). It is important at this stage to point out the relation between δ(Γ) and δ(φ). As discussed in <cit.>, the following relation holds: δ(Γ) = δ(φ)|∇φ|, so that we can rewrite 𝐟_σ = σκ𝐧_Γδ(φ)|∇φ| = ∇·[σ(𝐈 - 𝐧_Γ⊗𝐧_Γ)δ(φ)|∇φ|]. Hence, the CSF approximation of the surface tension term reads as follows: 𝐟_σ≈∇·[σ(𝐈 - 𝐧_Γ⊗𝐧_Γ)δ_ε(φ)|∇φ|] = ∇·[σ(𝐈 - 𝐧_Γ⊗𝐧_Γ)dH_ε/dφ(φ)|∇φ|]. Since the seminal proposals in <cit.> (see also the review in <cit.>), projection methods have become very popular for the discretization of incompressible Navier-Stokes equations. However, difficulties arise in choosing boundary conditions for the Poisson equation which is to be solved at each time step to compute the pressure. An alternative that allows one to avoid or reduce some of these problems is the so-called artificial compressibility formulation, originally introduced in <cit.> and employed in <cit.> among many others. In this formulation, the incompressibility constraint is relaxed and a time evolution equation for the pressure is introduced. This kind of approach has been adopted for incompressible flows with variable density, see e.g. <cit.>, and we aim here to consider an artificial compressibility formulation for immiscible, isothermal two-phase flows with gravity. The model equations can be therefore rewritten as follows: ∂(ρ𝐮)/∂ t + ∇·(ρ𝐮⊗𝐮) = -∇ p + ∇·[2μ𝐃(𝐮)] + ρ𝐠 + 𝐟_σ 1/ρ_0 c^2∂ p/∂ t + ∇·𝐮 = 0 ∂φ/∂ t + ∇·(φ𝐮) = 0, where c is the artificial speed of sound and ρ_0 is a reference density.
Finally, since we are relaxing the incompressibility constraint, we consider (<ref>) for the level set motion, which is valid for the transport of φ independently of the constraints on the velocity 𝐮. Moreover, this choice is justified by the results reported in <cit.> for a rising bubble test case, for which a non-conservative formulation leads to less diffusion in the treatment of the interface. Hence, the final form of the system under consideration reads as follows: ∂(ρ𝐮)/∂ t + ∇·(ρ𝐮⊗𝐮) = -∇ p + ∇·[2μ𝐃(𝐮)] + ρ𝐠 + 𝐟_σ 1/ρ_0 c^2∂ p/∂ t + ∇·𝐮 = 0 ∂φ/∂ t + 𝐮·∇φ = 0. Before proceeding to describe the time and space discretization schemes, we perform a dimensional analysis to derive a non-dimensional version of system (<ref>). §.§ Dimensional analysis In this Section, we derive a non-dimensional formulation for system (<ref>). We denote non-dimensional quantities with the symbol *. We introduce a reference length and velocity, denoted by L_ref and U_ref, respectively, so as to obtain 𝐱 = L_ref𝐱^* 𝐮 = U_ref𝐮^* t = L_ref/U_reft^*. Moreover, we choose as reference density and viscosity those associated to the heavier fluid, which is conventionally considered in Ω_1. For the sake of simplicity, we also assume ρ_0 = ρ_1. The reference pressure p_ref is taken equal to p_ref = ρ_1U_ref^2. Hence, we get ρ = ρ_1ρ^* μ = μ_1μ^* p = ρ_1U_ref^2p^* κ = 1/L_refκ^* φ = L_refφ^*. Introducing the appropriate non-dimensional quantities, we obtain ρ_1U_ref^2/L_ref∂^*(ρ^*𝐮^*)/∂^*t^* + ρ_1U_ref^2/L_ref∇^*·(ρ^*𝐮^*⊗𝐮^*) = -ρ_1U_ref^2/L_ref∇^*p^* + μ_1U_ref/L_ref^2∇^*·[2μ^*𝐃(𝐮^*)] - ρ_1ρ^*g𝐤 + 1/L_ref^2∇^*·[σ(𝐈 - 𝐧_Γ⊗𝐧_Γ)δ^*_ε(φ^*)|∇^*φ^*|] ρ_1U_ref^3/ρ_1L_ref1/c^2∂^*p^*/∂^*t^* + U_ref/L_ref∇^*·𝐮^* = 0 U_ref∂^*φ^*/∂^*t^* + U_ref𝐮^*·∇^*φ^* = 0, where 𝐤 is the upward pointing unit vector in the standard Cartesian reference frame. System (<ref>) reduces to ∂^*(ρ^*𝐮^*)/∂^*t^* + ∇^*·(ρ^*𝐮^*⊗𝐮^*) = -∇^*p^* + 1/Re∇^*·[2μ^*𝐃(𝐮^*)] - 1/Fr^2ρ^*𝐤 + 1/We∇^*·[(𝐈 - 𝐧_Γ⊗𝐧_Γ)δ^*_ε(φ^*)|∇^*φ^*|] M^2∂^*p^*/∂^*t^* + ∇^*·𝐮^* = 0 ∂^*φ^*/∂^*t^* + 𝐮^*·∇^*φ^* = 0, where Re = ρ_1U_refL_ref/μ_1 Fr = U_ref/√(g L_ref) We = ρ_1U_ref^2L_ref/σ M = U_ref/c denote the Reynolds number, the Froude number, the Weber number, and the Mach number, respectively. In the following, with a slight abuse of notation, we omit the symbol * to mark non-dimensional quantities and we consider therefore the following system of equations: ∂(ρ𝐮)/∂ t + ∇·(ρ𝐮⊗𝐮) = -∇ p + 1/Re∇·[2μ𝐃(𝐮)] - 1/Fr^2ρ𝐤 + 1/We∇·[(𝐈 - 𝐧_Γ⊗𝐧_Γ)δ_ε(φ)|∇φ|] M^2∂ p/∂ t + ∇·𝐮 = 0 ∂φ/∂ t + 𝐮·∇φ = 0, where ρ = ρ_2/ρ_1 + (1 - ρ_2/ρ_1)H_ε(φ) μ = μ_2/μ_1 + (1 - μ_2/μ_1)H_ε(φ). §.§ The conservative level set method The traditional level set method lacks volume conservation properties <cit.>. The conservative level set (CLS) method <cit.> is a popular alternative to add conservation properties to level set schemes. The idea is to replace the signed distance function defined in (<ref>) with a regularized Heaviside function: ϕ(𝐱,t) = 1/1 + e^-φ(𝐱,t)/ε, where ε helps smooth the transition of the discontinuous physical properties between the two subdomains and is also known as the interface thickness. Since ∇ϕ = 1/εe^-φ/ε/(1 + e^-φ/ε)^2∇φ we can compute the outward unit normal 𝐧_Γ exactly as in (<ref>). From definition (<ref>), it follows that Γ(t) = {𝐱∈Ω : ϕ(𝐱,t) = 1/2}. This new level set function needs to be reinitialized in order to keep the property of being a regularized Heaviside function <cit.>.
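As a purely illustrative aside (independent of the deal.II implementation discussed in the following Sections), a minimal Python sketch of the CLS profile, of the corresponding unit normal, and of the resulting smoothing of the density is reported below; the interface thickness, the planar interface location, and the density ratio appearing in it are arbitrary values chosen only for the example. We then come back to the reinitialization of the level set function.

```python
import numpy as np

def cls_profile(phi_sd, eps):
    """CLS profile phi = 1 / (1 + exp(-phi_sd / eps)): a regularized
    Heaviside of the signed distance phi_sd with interface thickness eps;
    the interface corresponds to the iso-level phi = 1/2."""
    return 1.0 / (1.0 + np.exp(-phi_sd / eps))

def unit_normal(grad_phi, tol=1e-12):
    """Outward unit normal n_Gamma = grad(phi) / |grad(phi)|."""
    norm = np.linalg.norm(grad_phi, axis=-1, keepdims=True)
    return grad_phi / np.maximum(norm, tol)

# one-dimensional example: planar interface at x = 0.3 (illustrative values)
eps = 0.02
x = np.linspace(0.0, 1.0, 201)
phi = cls_profile(x - 0.3, eps)            # phi -> 1 in the heavier fluid
rho_ratio = 0.1                            # illustrative rho_2 / rho_1
rho = rho_ratio + (1.0 - rho_ratio) * phi  # non-dimensional density, cf. (<ref>)
```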
This goal is achieved by solving the following PDE <cit.>: ∂ϕ/∂τ + ∇·(u_cϕ(1 - ϕ)𝐧_Γ) = ∇·(βε u_c(∇ϕ·𝐧_Γ)𝐧_Γ), where τ is an artificial pseudo-time variable, u_c is an artificial compression velocity, and β is a constant. It is important to notice that 𝐧_Γ does not change during the reinitialization procedure, but is computed using the initial value of the level set function. The relation (<ref>) has been originally introduced as an intermediate step between the level set advection and the Navier-Stokes equations to keep the shape of the profile <cit.> and to stabilize the advection <cit.>. Two fluxes are considered: a compression flux which acts where 0 < ϕ < 1 and in the direction normal to the interface, represented by u_cϕ(1 - ϕ)𝐧_Γ, and a diffusion flux, represented by βε u_c(∇ϕ·𝐧_Γ)𝐧_Γ. The reinitialization is crucial for the overall stability of the algorithm, but it also introduces errors in the solution <cit.>. Hence, it is important to avoid unnecessary reinitialization. For this purpose, unlike the formulation proposed e.g. in <cit.> and <cit.>, we introduce the coefficient β to tune the amount of diffusion so as to keep it as small as possible. The choices for the different parameters will be specified in Section <ref>. Finally, we stress the fact that, in this method, we are already using a smooth version of the Heaviside function so that H_ε = ϕ δ(Γ) ≈dH_ε/dϕ|∇ϕ| = |∇ϕ| § THE TIME DISCRETIZATION In this Section, we outline the time discretization strategy for system (<ref>). Our goal here is to extend the projection method based on the TR-BDF2 scheme developed in <cit.>. We now briefly recall for the convenience of the reader the formulation of the TR-BDF2. This second order implicit method has been originally introduced in <cit.> as a combination of the Trapezoidal Rule (or Crank-Nicolson) method and of the Backward Differentiation Formula method of order 2 (BDF2). Let Δ t = T_f/N be a discrete time step and t^n = nΔ t, n = 0, …, N, be discrete time levels for a generic time dependent problem u^' = 𝒩(u). Hence, the incremental form of the TR-BDF2 scheme can be described in terms of two stages, the first one from t^n to t^n+γ = t^n + γΔ t, and the second one from t^n+γ to t^n+1, as follows: u^n+γ - u^n/γΔ t = 1/2𝒩(u^n+γ) + 1/2𝒩(u^n) u^n+1 - u^n+γ/(1 - γ)Δ t = 1/2 - γ𝒩(u^n+1) + 1 - γ/2(2 - γ)𝒩(u^n+γ) + 1 - γ/2(2 - γ)𝒩(u^n). Here, u^n denotes the approximation at time t^n, n = 0, …, N. Notice that, in order to guarantee L-stability, one has to choose γ = 2 - √(2) <cit.>. We refer to <cit.> for a more exhaustive discussion on the TR-BDF2 method. We start by considering the equation in system (<ref>) associated to the level set. In order to avoid a full coupling with the Navier-Stokes equations, we perform a linearization in velocity, so that the first stage for the level set update reads as follows: ϕ^n+γ - ϕ^n/γΔ t + 1/2𝐮^n + γ/2·∇ϕ^n+γ = -1/2𝐮^n + γ/2·∇ϕ^n, where the approximation 𝐮^n + γ/2 is defined by extrapolation as 𝐮^n + γ/2 = (1 + γ/2(1-γ))𝐮^n - γ/2(1-γ)𝐮^n-1. Following then the projection approach described in <cit.> and applying (<ref>), the momentum predictor equation for the first stage reads as follows: ρ^n+γ𝐮^n+γ,* - ρ^n𝐮^n/γΔ t + 1/2∇·(ρ^n+γ𝐮^n+γ,*⊗𝐮^n+γ/2) - 1/21/Re∇·[2μ^n+γ𝐃(𝐮^n+γ,*)] = - 1/2∇·(ρ^n𝐮^n⊗𝐮^n+γ/2) + 1/21/Re∇·[2μ^n𝐃(𝐮^n)] - ∇ p^n + 1/21/We∇·[(𝐈 - 𝐧_Γ^n+γ⊗𝐧_Γ^n+γ)δ_ε(ϕ^n+γ)|∇ϕ^n+γ|] + 1/21/We∇·[(𝐈 - 𝐧_Γ^n⊗𝐧_Γ^n)δ_ε(ϕ^n)|∇ϕ^n|] - 1/21/Fr^2ρ^n+γ𝐤 -1/21/Fr^2ρ^n𝐤.
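To make the structure of the two TR-BDF2 stages recalled above more concrete, the following self-contained Python sketch applies them to the scalar linear test equation u' = λu, for which both implicit stages reduce to explicit algebraic updates; it is only meant to illustrate the scheme and its second order accuracy, not the coupled Navier-Stokes/level set solves described in this Section.

```python
import numpy as np

def tr_bdf2_linear(u0, lam, dt, n_steps, gamma=2.0 - np.sqrt(2.0)):
    """TR-BDF2 for u' = lam*u: trapezoidal stage up to t^{n+gamma},
    then the BDF2-like stage up to t^{n+1} with weights
    a31 = a32 = (1-gamma)/(2*(2-gamma)) and a33 = 1/(2-gamma)."""
    a31 = a32 = (1.0 - gamma) / (2.0 * (2.0 - gamma))
    a33 = 1.0 / (2.0 - gamma)
    u = u0
    for _ in range(n_steps):
        # first stage (trapezoidal rule), solved exactly since the problem is linear
        u_g = (1.0 + 0.5 * gamma * dt * lam) * u / (1.0 - 0.5 * gamma * dt * lam)
        # second stage (BDF2-like)
        rhs = u_g + (1.0 - gamma) * dt * lam * (a32 * u_g + a31 * u)
        u = rhs / (1.0 - (1.0 - gamma) * dt * lam * a33)
    return u

# accuracy check on u' = -u, u(0) = 1, integrated up to t = 1
for n in (10, 20, 40):
    print(n, abs(tr_bdf2_linear(1.0, -1.0, 1.0 / n, n) - np.exp(-1.0)))
```

The errors printed by the last loop decrease by a factor of about four when the time step is halved, consistently with the second order accuracy of the scheme.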
Notice once more that, in order to avoid solving a non-linear system at each time step, 𝐮^n + γ/2 is employed in the momentum advection terms. We set then δ p^n+γ = p^n+γ - p^n and impose ρ^n+γ𝐮^n+γ - 𝐮^n+γ,*/γΔ t =-∇δ p^n+γ M^2δ p^n+γ/γΔ t + 𝐮^n+γ = 0. Substituting the first equation into the second in (<ref>), one obtains the Helmholtz equation M^2δ p^n+γ/γ^2Δ t^2 - (∇δ p^n+γ/ρ^n+γ) = -1/γΔ t𝐮^n+γ,*. Once this equation is solved, the final velocity update for the first stage is given by 𝐮^n+γ = 𝐮^n+γ,* - γΔ t∇δ p^n+γ/ρ^n+γ. The second TR-BDF2 stage is performed in a similar manner applying (<ref>). We first focus on the level set update: ϕ^n+1 - ϕ^n+γ/(1-γ)Δ t + a_33𝐮^n + 3/2γ·∇ϕ^n+1 = - a_32𝐮^n+γ·∇ϕ^n+γ - a_31𝐮^n·∇ϕ^n, where a_31 = 1-γ/2(2-γ) a_32 = 1-γ/2(2-γ) a_33 = 1/2-γ. Again, in order to avoid a full coupling with the Navier-Stokes equations, an approximation is introduced in the advection term, so that 𝐮^n + 3/2γ is defined by extrapolation as 𝐮^n + 3/2γ = (1 + 1 + γ/γ)𝐮^n+γ - 1-γ/γ𝐮^n. Then, we define the second momentum predictor: ρ^n+1𝐮^n+1,* - ρ^n+γ𝐮^n+γ/(1-γ)Δ t + a_33(ρ^n+1𝐮^n+1,*⊗𝐮^n + 3/2γ) - a_331/Re[2μ^n+1𝐃(𝐮^n+1,*)] = - a_32(ρ^n+γ𝐮^n+γ⊗𝐮^n+γ) + a_321/Re[2μ^n+γ𝐃(𝐮^n+γ)] - a_31(ρ^n𝐮^n⊗𝐮^n) + a_311/Re[2μ^n𝐃(𝐮^n)] - ∇ p^n+γ + a_331/We[(𝐈 - 𝐧_Γ^n+1⊗𝐧_Γ^n+1)δ_ε(ϕ^n+1)|∇ϕ^n+1|] + a_321/We[(𝐈 - 𝐧_Γ^n+γ⊗𝐧_Γ^n+γ)δ_ε(ϕ^n+γ)|∇ϕ^n+γ|] + a_311/We[(𝐈 - 𝐧_Γ^n⊗𝐧_Γ^n)δ_ε(ϕ^n)|∇ϕ^n|] - a_331/Fr^2ρ^n+1𝐤 - a_321/Fr^2ρ^n+γ𝐤 - a_311/Fr^2ρ^n𝐤. Notice that 𝐮^n + 3/2γ is employed in the non-linear momentum advection term. We set then δ p^n+1 =p^n+1 - p^n+γ and impose ρ^n+1𝐮^n+1 - 𝐮^n+1,*/(1-γ)Δ t =-∇δ p^n+1 M^2δ p^n+1/(1-γ)Δ t + 𝐮^n+1 = 0. Substituting the first equation into the second in (<ref>), one obtains the Helmholtz equation M^2δ p^n+1/(1-γ)^2Δ t^2 -(∇δ p^n+1/ρ^n+1) = -1/(1-γ)Δ t𝐮^n+1,*. The final velocity update then reads as follows: 𝐮^n+1 = 𝐮^n+1,* - (1 - γ)Δ t∇δ p^n+1/ρ^n+1. Finally, we focus on the reinitialization procedure described in Equation <ref>, which is performed after each stage of the level set update and before computing the momentum predictor. We consider an implicit treatment of the diffusion term (βε u_c(∇ϕ·𝐧_Γ)𝐧_Γ) and a semi-implicit treatment of the compression term u_c(ϕ(1 - ϕ)𝐧_Γ). Hence, the semi-discrete formulation reads as follows: ϕ^k+1,* - ϕ^k,*/Δτ + (u_cϕ^k+1,*(1 - ϕ^k,*)𝐧_Γ) = (βε u_c(∇ϕ^k+1,*·𝐧_Γ)𝐧_Γ), where Δτ is the pseudo time step. Moreover, ϕ^0,* = ϕ^n+γ after the first TR-BDF2 stage and ϕ^0,* = ϕ^n+1 after the second TR-BDF2 stage. We recall once more that 𝐧_Γ = ∇ϕ^0,*/|∇ϕ^0,*| and it does not change during the reinitialization. Following <cit.>, we define the total reinitialization time τ_fin as a fraction of the time step Δ t, namely τ_fin = ηΔ t. η = 0 corresponds to no reinitialization, whereas η = 1 yields an amount of reinitialization which can modify the values of level set function of the same order of magnitude of which they have been modified during the previous advection step. For most applications, η≈ 0.5 seems to provide an appropriate amount of reinitialization <cit.>. A pseudo time step such that two to five reinitialization steps are performed typically ensures stable solutions and leads to the updated level set function <cit.>. § THE SPATIAL DISCRETIZATION For the spatial discretization, we consider discontinuous finite element approximations. We consider a decomposition of the domain Ω into a family of hexahedra 𝒯_h (quadrilaterals in the two-dimensional case) and denote each element by K. 
The skeleton ℰ denotes the set of all element faces and ℰ = ℰ^I∪ℰ^B, where ℰ^I is the subset of interior faces and ℰ^B is the subset of boundary faces. Suitable jump and average operators can then be defined as customary for finite element discretizations. A face e ∈ℰ^I shares two elements that we denote by K^+ with outward unit normal 𝐧^+ and K^- with outward unit normal 𝐧^-, whereas for a face e ∈ℰ^B we denote by 𝐧 the outward unit normal. For a scalar function Ψ the jump is defined as [[Ψ]] = Ψ^+𝐧^+ + Ψ^-𝐧^- if e ∈ℰ^I [[Ψ]] = Ψ𝐧 if e ∈ℰ^B. The average is defined as {{Ψ}} = 1/2(Ψ^+ + Ψ^-) if e ∈ℰ^I {{Ψ}} = Ψ if e ∈ℰ^B. Similar definitions apply for a vector function Ψ: [[Ψ]] = Ψ^+·𝐧^+ + Ψ^-·𝐧^- if e ∈ℰ^I [[Ψ]] = Ψ·𝐧 if e ∈ℰ^B {{Ψ}} = 1/2(Ψ^+ + Ψ^-) if e ∈ℰ^I {{Ψ}} = Ψ if e ∈ℰ^B. For vector functions, it is also useful to define a tensor jump as: <<Ψ>> = Ψ^+⊗𝐧^+ + Ψ^-⊗𝐧^- if Γ∈ℰ^I <<Ψ>> = Ψ⊗𝐧 if Γ∈ℰ^B. We now introduce the following finite element spaces: Q_k = {v ∈ L^2(Ω) : v|_K∈ℚ_k ∀ K ∈𝒯_h} and 𝐐_k = [Q_k]^d, where ℚ_k is the space of polynomials of degree k in each coordinate direction. Considering the well-posedness analyses in <cit.>, the finite element spaces that will be used for the discretization of velocity and pressure are 𝐕_h = 𝐐_k and W_h = Q_k-1∩ L^2_0(Ω), respectively, where k ≥ 2. For what concerns the level set function, we consider instead X_h = Q_r with r ≥ 2, so that its gradient is at least a piecewise linear polynomial. We then denote by ψ_i(𝐱) the basis functions for the finite element spaces associated to the scalar variable, i.e. W_h and X_h, and by ψ_i(𝐱) the basis functions for the space V_h, the finite element space chosen for the discretization of the velocity. Hence, we get 𝐮≈∑_j=1^dim(𝐕_h) u_j(t)ψ_j(𝐱) p ≈∑_j=1^dim(W_h) p_j(t)ψ_j(𝐱) ϕ≈∑_j=1^dim(X_h)ϕ_j(t)ψ_j(𝐱) The shape functions correspond to the products of Lagrange interpolation polynomials for the support points of (k+1)-order Gauss-Lobatto quadrature rule in each coordinate direction. Given these definitions, the weak formulation of the level set update for the first stage is obtained multiplying equation (<ref>) by a test function w ∈ X_h: ∑_K ∈𝒯_h∫_Kϕ^n+γ/γΔ tw dΩ + 1/2∑_K ∈𝒯_h∫_K𝐮^n+γ/2·∇ϕ^n+γ w dΩ + 1/2∑_e ∈ℰ∫_e{{ϕ^n+γ𝐮^n + γ/2}}·[[w]]dΣ - 1/2∑_e ∈ℰ∫_e{{𝐮^n + γ/2}}·[[ϕ^n + γw]]dΣ + 1/2∑_e ∈ℰ∫_eλ^n + γ/2/2[[ϕ^n + γ]] ·[[w]]dΣ = ∑_K ∈𝒯_h∫_Kϕ^n/γΔ tw dΩ - 1/2∑_K ∈𝒯_h∫_K𝐮^n + γ/2·∇ϕ^n w dΩ - 1/2∑_e ∈ℰ∫_e{{ϕ^n𝐮^n + γ/2}}·[[w]]dΣ - 1/2∑_e ∈ℰ∫_e{{𝐮^n + γ/2}}·[[ϕ^nw]]dΣ - 1/2∑_e ∈ℰ∫_eλ^n + γ/2/2[[ϕ^n]] ·[[w]]dΣ, where λ^n + γ/2 = max(|(𝐮^n + γ/2)^+·𝐧^+|, |(𝐮^n + γ/2)^-·𝐧^-|). Following <cit.>, the numerical approximation of the non-conservative term is based on a double integration by parts. The algebraic form can be obtained taking w = ψ_i, i = 1,…,dim(X_h) and exploiting the representation in (<ref>), so as to obtain in compact form (1/γΔ t𝐌_ϕ + 1/2𝐀_ϕ^n+γ)^n+γ = 𝐅_ϕ^n, where ^n+γ denotes the vector of the degrees of freedom associated to the level set. Moreover, we have set 𝐌_ϕ_ij = ∑_K ∈𝒯_h∫_Kψ_jψ_i dΩ 𝐀_ϕ_ij^n+γ = ∑_K ∈𝒯_h∫_K𝐮^n+γ/2·∇ψ_jψ_i dΩ + ∑_e ∈ℰ∫_e{{𝐮^n + γ/2ψ_j}}·[[ψ_i]]dΣ - ∑_e ∈ℰ∫_e{{𝐮^n + γ/2}}·[[ψ_jψ_i]]dΣ + ∑_e ∈ℰ∫_eλ^n + γ/2/2[[ψ_j]] ·[[ψ_i]]dΣ and 𝐅_ϕ^n = ∑_K ∈𝒯_h∫_Kϕ^n/γΔ tψ_i dΩ + 1/2∑_K ∈𝒯_h∫_K𝐮^n + γ/2·∇ϕ^nψ_i dΩ - 1/2∑_e ∈ℰ∫_e{{ϕ^n𝐮^n + γ/2}}·[[ψ_i]]dΣ + 1/2∑_e ∈ℰ∫_e{{𝐮^n + γ/2}}·[[ϕ^nψ_i]]dΣ - 1/2∑_e ∈ℰ∫_eλ^n + γ/2/2[[ϕ^n]] ·[[ψ_i]]dΣ. Consider now the variational formulation for equation (<ref>). 
Take 𝐯∈𝐕_h so as to obtain after integration by parts ∑_K ∈𝒯_h∫_K1/γΔ tρ^n+γ𝐮^n+γ,*·𝐯 dΩ - 1/2∑_K ∈𝒯_h∫_Kρ^n+γ𝐮^n+γ,*⊗𝐮^n + γ/2 : ∇𝐯 dΩ + 1/2∑_e ∈𝒯_h∫_e{{ρ^n+γ𝐮^n+γ,*⊗𝐮^n + γ/2}} : <<𝐯>> dΣ + 1/2∑_e ∈𝒯_h∫_eλ^n + γ/2/2<<ρ^n+γ𝐮^n + γ/2>> : <<𝐯>> dΣ + 1/2Re∑_K ∈𝒯_h∫_K 2μ^n+γ𝐃(𝐮^n+γ,*) : ∇𝐯 - 1/2Re∑_e ∈ℰ∫_e{{2μ^n+γ𝐃(𝐮^n+γ,*)}} : <<𝐯>> dΣ - 1/2Re∑_e ∈ℰ∫_e<<𝐮^n+γ,*>> : {{2μ^n+γ𝐃(𝐯)}} dΣ + 1/2Re∑_e ∈ℰ∫_e C_u{{μ^n+γ}}_H<<𝐮^n+γ,*>> : <<𝐯>>dΣ = ∑_K ∈𝒯_h∫_K1/γΔ tρ^n𝐮^n·𝐯 dΩ + 1/2∑_K ∈𝒯_h∫_Kρ^n𝐮^n⊗𝐮^n + γ/2 : ∇𝐯 dΩ - 1/2∑_e ∈𝒯_h∫_e{{ρ^n𝐮^n⊗𝐮^n + γ/2}} : <<𝐯>> dΣ - 1/2∫_eλ^n+γ/2/2<<ρ^n𝐮^n>> : <<𝐯>>dΣ - 1/2Re∑_K ∈𝒯_h∫_K 2μ^n𝐃(𝐮^n) : ∇𝐯 + 1/2Re∑_e ∈ℰ∫_e{{2μ^n𝐃(𝐮^n)}} : <<𝐯>> dΣ + ∑_K ∈𝒯_h∫_K p^n𝐯 dΩ - ∑_e ∈ℰ∫_e{{p^n}}[[𝐯]] dΣ - 1/2 Fr^2∑_K ∈𝒯_h∫_Kρ^n+γ𝐤·𝐯dΩ - 1/2 Fr^2∑_K ∈𝒯_h∫_Kρ^n𝐤·𝐯dΩ - 1/2 We∑_K ∈𝒯_h∫_K(𝐈 - 𝐧_Γ^n+γ⊗𝐧_Γ^n+γ)δ_ε(ϕ^n+γ)|∇ϕ^n+γ| : ∇𝐯 dΩ + 1/2 We∑_e ∈ℰ∫_e{{(𝐈 - 𝐧_Γ^n+γ⊗𝐧_Γ^n+γ)δ_ε(ϕ^n+γ)|∇ϕ^n+γ|}} : <<𝐯>> dΣ - 1/2 We∑_K ∈𝒯_h∫_K(𝐈 - 𝐧_Γ^n⊗𝐧_Γ^n)δ_ε(ϕ^n)|∇ϕ^n| : ∇𝐯 dΩ + 1/2 We∑_e ∈ℰ∫_e{{(𝐈 - 𝐧_Γ^n⊗𝐧_Γ^n)δ_ε(ϕ^n)|∇ϕ^n|}} : <<𝐯>> dΣ, where {{μ^n+γ}}_H = 2/1/μ^n+γ,+ + 1/μ^n+γ,-. Here, following e.g. <cit.>, we employ the harmonic average of the viscosity coefficient for the penalization term. Notice that the approximation of the advection term employs an upwind flux, whereas the approximation of the diffusion term is based on the Symmetric Interior Penalty (SIP) <cit.>. Notice also that no penalization terms have been introduced for the variables computed at previous time steps in the diffusion terms. Following <cit.>, we set for each face e of a cell K σ^𝐮_e,K = (k + 1 )^2diam(e)/diam(K) and we define the penalization constant for the SIP method as C_u = 1/2(σ^𝐮_e,K^+ + σ^𝐮_e,K^-) if e ∈ℰ^I, C_u = σ^𝐮_e,K if e ∈ℰ^B. Finally, we stress the fact that a centered flux has been employed for the surface tension terms. The algebraic formulation is then computed considering 𝐯 = ψ_i, i=1, …, dim(𝐕_h) and the representation in (<ref>) for the velocity. Hence, we obtain (1/γΔ t𝐌_𝐮^n+γ + 1/2Re𝐀_𝐮^n+γ + 1/2𝐂_𝐮^n+γ)𝐔^n+γ,* = 𝐅_u^n, where 𝐔^n+γ,* denotes the vector of degrees of freedom for the velocity. Moreover, we have set 𝐌_𝐮_ij^n+γ = ∑_K ∈𝒯_h∫_Kρ^n+γψ_jψ_i dΩ 𝐂_𝐮_ij^n+γ = -∑_K ∈𝒯_h∫_Kρ^n+γψ_j⊗𝐮^n + γ/2 : ∇ψ_i dΩ + ∑_e ∈𝒯_h∫_e{{ρ^n+γψ_j⊗𝐮^n + γ/2}} : <<ψ_j>> dΣ + ∑_e ∈𝒯_h∫_eλ^n + γ/2/2<<ρ^n+γψ_j>> : <<ψ_i>> dΣ 𝐀_𝐮_ij^n+γ = ∑_K ∈𝒯_h∫_K 2μ^n+γ𝐃(ψ_j) : ∇ψ_i dΩ - ∑_e ∈ℰ∫_e{{2μ^n+γ𝐃(ψ_j)}} : <<ψ_i>>dΣ - ∑_e ∈ℰ∫_e<<ψ_j^n+γ,*>> : {{2μ^n+γ𝐃(ψ_i)}}dΣ + ∑_e ∈ℰ∫_e C_u{{μ^n+γ}}_H<<ψ_j>> : <<ψ_i>>dΣ and 𝐅_𝐮^n = ∑_K ∈𝒯_h∫_K1/γΔ tρ^n𝐮^n·ψ_i dΩ + 1/2∑_K ∈𝒯_h∫_Kρ^n𝐮^n⊗𝐮^n + γ/2 : ∇ψ_i dΩ - 1/2∑_e ∈𝒯_h∫_e{{ρ^n𝐮^n⊗𝐮^n + γ/2}} : <<ψ_i>> dΣ - 1/2∑_e ∈𝒯_h∫_eλ^n+γ/2/2<<ρ^n𝐮^n>> : <<ψ_i>> dΣ - 1/2Re∑_K ∈𝒯_h∫_K 2μ^n𝐃(𝐮^n) : ∇ψ_i + 1/2Re∑_e ∈ℰ∫_e{{2μ^n𝐃(𝐮^n)}} : <<ψ_i>> dΣ + ∑_K ∈𝒯_h∫_K p^nψ_i dΩ - ∑_e ∈ℰ∫_e{{p^n}}[[ψ_i]] dΣ - 1/2 Fr^2∑_K ∈𝒯_h∫_Kρ^n+γ𝐤·ψ_idΩ - 1/2 Fr^2∑_K ∈𝒯_h∫_Kρ^n𝐤·ψ_idΩ - 1/2 We∑_K ∈𝒯_h∫_K(𝐈 - 𝐧_Γ^n+γ⊗𝐧_Γ^n+γ)δ_ε(ϕ^n+γ)|∇ϕ^n+γ| : ∇ψ_idΩ + 1/2 We∑_e ∈ℰ∫_e{{(𝐈 - 𝐧_Γ^n+γ⊗𝐧_Γ^n+γ)δ_ε(ϕ^n+γ)|∇ϕ^n+γ|}} : <<ψ_i>> dΣ - 1/2 We∑_K ∈𝒯_h∫_K(𝐈 - 𝐧_Γ^n⊗𝐧_Γ^n)δ_ε(ϕ^n)|∇ϕ^n| : ∇ψ_i dΩ + 1/2 We∑_e ∈ℰ∫_e{{(𝐈 - 𝐧_Γ^n⊗𝐧_Γ^n)δ_ε(ϕ^n)|∇ϕ^n|}} : <<ψ_i>> dΣ. For what concerns the projection step, we apply again the SIP method. 
We multiply (<ref>) by a test function q ∈ W_h, we apply Green's theorem and we get ∑_K ∈𝒯_h∫_KM^2/γ^2Δ t^2δ p^n+γ q dΩ + ∑_K ∈𝒯_h∫_K∇δ p^n+γ/ρ^n+γ·∇ q dΩ - ∑_e ∈ℰ∫_e{{∇δ p^n+γ/ρ^n+γ}}·[[q]]dΣ - ∑_e ∈ℰ∫_e[[δ p^n+γ]] ·{{∇ q/ρ^n+γ}} dΣ + ∑_e ∈ℰ∫_e C_p[[δ p^n+γ/ρ^n+γ]] ·[[q]]dΣ = ∑_K ∈𝒯_h∫_K1/γΔ t𝐮^n+γ,*·∇ q dΩ - ∑_e ∈ℰ∫_e1/γΔ t{{𝐮^n+γ,*}}·[[q]]dΣ, where we set σ^p_e,K = k^2diam(e)/diam(K), so that C_p = 1/2(σ^p_e,K^+ + σ^p_e,K^-) if e ∈ℰ^I, C_p = σ^p_e,K if e ∈ℰ^B. The algebraic formulation is once more obtained taking q = ψ_i, i = 1,…,dim(W_h) and considering the expansion for p^n+γ reported in (<ref>). Hence, we get (M^2/γ^2Δ t^2𝐌_p^n+γ + 𝐊_p)𝐏^n+γ = 𝐅_p^n. Here, 𝐏^n+γ denotes the vector of the degrees of freedom for the pressure. Moreover, we set 𝐌_p_ij^n+γ = ∑_K ∈𝒯_h∫_Kψ_jψ_i dΩ 𝐊_p_ij = ∑_K ∈𝒯_h∫_K∇ψ_j/ρ^n+γ·∇ψ_i dΩ - ∑_e ∈ℰ∫_e{{∇ψ_j/ρ^n+γ}}·[[ψ_i]]dΣ - ∑_e ∈ℰ∫_e[[ψ_j]] ·{{∇ψ_i/ρ^n+γ}}dΣ + ∑_e ∈ℰ∫_e C_p [[ψ_j/ρ^n+γ]] ·[[ψ_i]]dΣ and 𝐅_p^n = ∑_K ∈𝒯_h∫_K1/γΔ t𝐮^n+γ,*·∇ψ_i dΩ - ∑_e ∈ℰ∫_e1/γΔ t{{𝐮^n+γ,*}}·[[ψ_i]]dΣ. The second TR-BDF2 stage can be described in a similar manner according to the formulations reported in (<ref>), (<ref>), and (<ref>). Finally, we consider the weak formulation for the reinitialization equation for the level set function (<ref>): ∑_K ∈𝒯_h∫_Kϕ^k+1,*/Δτw dΩ - ∑_K ∈𝒯_h∫_K u_cϕ^k+1,*(1 - ϕ^k,*)𝐧_Γ·∇ w dΩ + ∑_e ∈ℰ∫_e u_c{{ϕ^k+1,*(1 - ϕ^k,*)𝐧_Γ}}·[[w]]dΣ + ∑_e ∈ℰ∫_eλ̃^k/2[[ϕ^k+1,*]] ·[[w]]dΣ + ∑_K ∈𝒯_h∫_K u_cβε(∇ϕ^k+1,*·𝐧_Γ)𝐧_Γ·∇ w dΩ - ∑_e ∈ℰ∫_e u_cβε{{(∇ϕ^k+1,*·𝐧_Γ)𝐧_Γ}}·[[w]]dΣ - ∑_e ∈ℰ∫_e u_cβε{{(∇ w ·𝐧_Γ)𝐧_Γ}}·[[ϕ^k+1,*]]dΣ + ∑_e ∈ℰ∫_e C_ϕ[[ϕ^k+1,*]] ·[[w]] dΣ = ∑_K ∈𝒯_h∫_Kϕ^k,*/Δτw dΩ, where λ̃^k = max(|(1 - (ϕ^k,*)^+)𝐧_Γ^+·𝐧^+|, |(1 - (ϕ^k,*)^-)𝐧_Γ^-·𝐧^-|). Moreover, we set σ^ϕ_e,K = (r + 1)^2diam(e)/diam(K), so that C_ϕ = 1/2(σ^ϕ_e,K^+ + σ^ϕ_e,K^-) if e ∈ℰ^I, C_ϕ = σ^ϕ_e,K if e ∈ℰ^B. One can notice that, following <cit.>, an upwind flux has been employed for the compression term and the SIP has been adopted for the diffusive term. Finally, the algebraic form is obtained considering w = ψ_i, i = 1,…,dim(X_h) and the representation in (<ref>) so as to obtain (1/Δτ𝐌_ϕ + u_c𝐂_ϕ + 𝐀_ϕ)Φ^k+1,* = 𝐅_ϕ, where Φ^k+1,* denotes the vector of the degrees of freedom associated to ϕ^k+1,*. Here 𝐂_ϕ_ij = -∑_K ∈𝒯_h∫_K(1 - ϕ^k,*)𝐧_Γψ_j·∇ψ_i dΩ + ∑_e ∈ℰ∫_e{{(1 - ϕ^k,*)𝐧_Γψ_j}}·[[ψ_i]] dΣ + ∑_e ∈ℰ∫_eλ̃^k/2[[ψ_j]] ·[[ψ_i]] dΣ 𝐀_ϕ_ij = ∑_K ∈𝒯_h∫_K u_cβε(∇ψ_j·𝐧_Γ)𝐧_Γ·∇ψ_i dΩ - ∑_e ∈ℰ∫_e u_cβε{{(∇ψ_j·𝐧_Γ)𝐧_Γ}}·[[ψ_i]] dΣ - ∑_e ∈ℰ∫_e u_cβε[[ψ_j]] ·{{(∇ψ_i·𝐧_Γ)𝐧_Γ}} dΣ + ∑_e ∈ℰ∫_e C_ϕ[[ψ_j]] ·[[ψ_i]] dΣ.
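Coming back to the projection step, its algebraic structure can be checked on a drastically simplified analogue. The Python sketch below replaces the DG operators by centred finite differences on a one-dimensional periodic grid (all names, grid sizes and parameter values are illustrative assumptions, and γ = 2-√2 is the usual TR-BDF2 choice); it solves the Helmholtz equation for δp, applies the velocity correction and verifies that the artificial compressibility constraint M^2δp/(γΔt) + ∇·𝐮 = 0 then holds at the discrete level.

import numpy as np

# illustrative 1-D periodic analogue of the first-stage projection step
N, L = 128, 1.0
h = L / N
x = np.arange(N) * h
gamma, dt, M = 2.0 - np.sqrt(2.0), 1.0e-2, 1.0e-2

def centred_diff(n, step):
    # periodic centred first-derivative matrix
    D = np.zeros((n, n))
    for i in range(n):
        D[i, (i + 1) % n] = 1.0 / (2.0 * step)
        D[i, (i - 1) % n] = -1.0 / (2.0 * step)
    return D

G = centred_diff(N, h)   # discrete gradient
D = centred_diff(N, h)   # discrete divergence (same stencil in 1-D)

rho = 1.0 + 0.5 * np.sin(2.0 * np.pi * x)                         # smooth positive "density"
u_star = np.cos(2.0 * np.pi * x) + 0.3 * np.sin(4.0 * np.pi * x)  # intermediate velocity

# Helmholtz equation: (M^2/(gamma^2 dt^2)) dp - div(grad(dp)/rho) = -(1/(gamma dt)) div(u*)
A = (M / (gamma * dt)) ** 2 * np.eye(N) - D @ np.diag(1.0 / rho) @ G
dp = np.linalg.solve(A, -(1.0 / (gamma * dt)) * (D @ u_star))

# velocity update: u = u* - gamma dt grad(dp)/rho
u_new = u_star - gamma * dt * (G @ dp) / rho

# check M^2 dp/(gamma dt) + div(u) = 0 at the discrete level
residual = (M ** 2 / (gamma * dt)) * dp + D @ u_new
print("max constraint residual:", np.abs(residual).max())   # round-off level only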
§.§ Rayleigh-Taylor instability The Rayleigh-Taylor instability is a well known test case in which a heavier fluid penetrates a lighter fluid under the action of gravity. We consider the configuration presented e.g. in <cit.>, for which ρ_1 = 1.225 and ρ_2 = 0.1694, corresponding to the density of air and helium, respectively, whereas μ_1 = μ_2 = 0.00313. The effect of surface tension is neglected. Moreover, following <cit.>, we consider as reference length the computational width of the box W and as reference time the time scale of wave growth, equal to t_ref = √(W/Ag), where g = 9.81 and A = ρ_1 - ρ_2/ρ_1 + ρ_2 is the Atwood number. Hence, we obtain the following relations: U_ref = √(AgW) Re = ρ_1√(AgW)W/μ_1 Fr = √(A). We consider W = 1 so as to obtain a computational domain Ω = (0,1) ×(0,4). Hence, we get A ≈ 0.757, t_ref≈0.367, U_ref≈2.725, Re ≈ 1066.55, and Fr ≈ 0.87. We take M = 0.008, corresponding to c ≈343, namely the speed of sound in air. The final time is T_f = 2.45. No-slip boundary conditions are prescribed on the top and bottom walls, whereas periodic boundary conditions are imposed along the horizontal direction. The pressure is prescribed to be zero on the upper wall. The initial velocity field is zero, whereas the initial level set function is ϕ(0) = 1/1 + exp(2 + 0.05cos(2π x) - y/ε). The computational grid is composed of 160 × 640 elements, whereas the time step is Δ t ≈ 1.63 × 10^-3, yielding a maximum advective Courant number C_u ≈ 1.36 and an acoustic Courant number C ≈ 32.7. Finally, we set ε = h = 1/160, Δτ = 0.05h, u_c = 0.0125u_max, and β = 1, where u_max is the maximum fluid velocity. The choice to relate u_c with u_max is rather common in the literature, see e.g. <cit.>. Figure <ref> shows the development of the interface at t = T_f, where one can easily notice the expected main behaviour of the Rayleigh-Taylor instability: as the heavier fluid penetrates the lighter one, the interface begins to roll up along the sides of the spike, giving the typical “mushroom” shape. The obtained results are similar to those in the literature, see e.g. <cit.>. Moreover, for the sake of completeness, we report in Figure <ref> the evolution of the relative variation of the area for the lighter fluid, defined as |Ω_2(t) - Ω_2(0)|/Ω_2(0). The maximum relative variation is 0.034 %, showing that the CLS method preserves the area quite well. An interesting analysis concerns the influence of the Atwood number. We fix ρ_2 = 0.408, so as to obtain A ≈ 0.5. As a consequence, we obtain t_ref≈0.451, U_ref≈2.215, Re≈ 867.05, Fr ≈ 0.71, and M = 0.006. We set the final time T_f = 2, so that the same final dimensional time as in the previous configuration is reached. The chosen time step is Δ t = 2.5 · 10^-3. One can easily notice from Figure <ref> that, with a higher Atwood number, the roll-up effect is enhanced. This points out the earlier appearance of the Kelvin-Helmholtz instability, due to the development of short-wavelength perturbations along the fluid interface. The deal.II library supports non-conforming mesh adaptation. We employ the h-adaptive version of the scheme for the latter configuration. More specifically, we define for each element K the quantity η_K = max_i ∈𝒩_K|∇ϕ|_i, which acts as a local refinement indicator. Here 𝒩_K denotes the set of nodes over the element K. We refine when η_K exceeds 10 and coarsen when it drops below 5.
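Before describing the adapted grid, note that the non-dimensional groups quoted for the two Rayleigh-Taylor configurations can be reproduced directly from their definitions. The short Python check below only evaluates the formulas above; the speed of sound c = 343 is taken from the text, and the Mach number is inferred as U_ref/c.

import numpy as np

def rayleigh_taylor_numbers(rho1, rho2, mu1, g=9.81, W=1.0, c=343.0):
    A = (rho1 - rho2) / (rho1 + rho2)     # Atwood number
    U_ref = np.sqrt(A * g * W)            # reference velocity
    t_ref = np.sqrt(W / (A * g))          # time scale of wave growth
    Re = rho1 * U_ref * W / mu1           # Reynolds number
    Fr = np.sqrt(A)                       # Froude number
    M = U_ref / c                         # Mach number (inferred as U_ref / c)
    return A, U_ref, t_ref, Re, Fr, M

# first configuration (air/helium) and second configuration (A ~ 0.5)
for rho2 in (0.1694, 0.408):
    A, U_ref, t_ref, Re, Fr, M = rayleigh_taylor_numbers(1.225, rho2, 0.00313)
    print(f"rho2 = {rho2}: A = {A:.3f}, U_ref = {U_ref:.3f}, t_ref = {t_ref:.3f}, "
          f"Re = {Re:.2f}, Fr = {Fr:.2f}, M = {M:.4f}")
# expected output (rounded): A = 0.757, t_ref = 0.367, U_ref = 2.725, Re = 1066.6, Fr = 0.87, M = 0.008
#                            A = 0.500, t_ref = 0.451, U_ref = 2.215, Re = 867.0,  Fr = 0.71, M = 0.006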
The initial grid is composed of 80 × 320 elements and we allow up to two local refinements, so as to obtain h = 1/320 and a maximum resolution which would correspond to a 320 × 1280 uniform grid. As one can notice from Figure <ref>, the refinement criterion is able to increase the resolution only in the vicinity of the interface between the two fluids. The final grid consists of 43147 elements, corresponding to around 40 % of the elements of the fixed uniform grid. Figure <ref> shows a comparison of the interface between the simulations with the uniform and the adaptive grid at both t = T_f/2 and t = T_f. One can easily notice that at t = T_f/2 the two interfaces are indistinguishable, whereas at t = T_f a slightly different development of the instability appears. Since we are analyzing a fluid-mechanical instability, small variations in the flow can lead to large differences in the solution, and it is therefore difficult to say which solution is more reliable. Similar results and considerations have been reported for a Kelvin-Helmholtz instability in <cit.>. §.§ Rising bubble benchmark The rising bubble benchmark is a well-established test case for the validation of numerical methods for incompressible two-phase flows <cit.>. More specifically, the evolution of the shape, position and velocity of the center of mass of a rising bubble is compared against the reference solution in <cit.>. Two configurations are considered, with the corresponding physical parameters and non-dimensional numbers listed in Tables <ref> and <ref>, respectively. The bubble occupies the subdomain Ω_2. Following <cit.>, we set L_ref = 2r_0 = 0.5 and U_ref = √(g L_ref) = 0.7. We consider as domain Ω = (0, L_x) ×(0, L_y), with L_x = 2 and L_y = 4, whereas the final time is T_f = 4.2. No-slip boundary conditions are imposed on the top and bottom boundaries, whereas periodic conditions are prescribed in the horizontal direction. The initial velocity field is zero. Finally, the initial level set function is described by the following relation: ϕ(0) = 1/1 + exp(R - √((x - x_0)^2 + (y - y_0)^2)/ε), with R = 1, x_0 = y_0 = 1. We compute as reference quantities the position 𝐱_c, the velocity 𝐮_c of the center of mass, and the so-called degree of circularity χ, defined respectively as 𝐱_c = ∫_Ω_2𝐱dΩ/∫_Ω_2dΩ = ∫_Ω_2𝐱dΩ/|Ω_2| 𝐮_c = ∫_Ω_2𝐮dΩ/∫_Ω_2dΩ = ∫_Ω_2𝐮dΩ/|Ω_2| χ = 2√(π|Ω_2|)/P_b, where Ω_2 is the subdomain occupied by the bubble, |Ω_2| is the area of the bubble, and P_b is its perimeter. The degree of circularity is the ratio between the perimeter of a circle with the same area as the bubble and the current perimeter of the bubble itself. For a perfectly circular bubble, the degree of circularity is equal to one and then decreases as the bubble deforms. Since ϕ is a regularized Heaviside function, we can compute the reference quantities as follows: 𝐱_c ≈ ∫_Ω𝐱(1- ϕ)dΩ/∫_Ω(1- ϕ)dΩ 𝐮_c ≈ ∫_Ω𝐮(1- ϕ)dΩ/∫_Ω(1- ϕ)dΩ χ ≈ 2√(π∫_Ω(1- ϕ)dΩ)/∫_Ω|∇ϕ|dΩ. We start with the first configuration and we set M = 0.0005, corresponding to c = 1400, which is of the order of magnitude of the speed of sound in water. The computational grid is composed of 320 × 640 elements, leading to h = 1/160, whereas the time step is Δ t = 6 · 10^-3, yielding a maximum advective Courant number C_u≈ 1.4 and an acoustic Courant number C = 1920. Finally, we set ε = h, Δτ = 0.05h, u_c = 0.05 u_max and β = 0.5. We point out that the results in the figures are compared with those of Group 2 in <cit.>.
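The reference quantities introduced above can be evaluated directly from the regularised level set field. The Python sketch below does this for a synthetic circular bubble (grid resolution, radius and ε are illustrative assumptions, not the benchmark values); for a circle the degree of circularity should come out close to one, which provides a quick check of the discrete formulas.

import numpy as np

# synthetic circular "bubble" on (0,2) x (0,4); values chosen for illustration only
nx, ny = 400, 800
x = np.linspace(0.0, 2.0, nx)
y = np.linspace(0.0, 4.0, ny)
hx, hy = x[1] - x[0], y[1] - y[0]
X, Y = np.meshgrid(x, y, indexing="ij")

R, eps = 0.5, 2.0 * hx
dist = np.sqrt((X - 1.0) ** 2 + (Y - 1.0) ** 2)
phi = 1.0 / (1.0 + np.exp((R - dist) / eps))      # phi ~ 0 inside the bubble, ~ 1 outside

one_minus_phi = 1.0 - phi
area = one_minus_phi.sum() * hx * hy              # |Omega_2| ~ integral of (1 - phi)
xc = (X * one_minus_phi).sum() * hx * hy / area   # centre of mass, x component
yc = (Y * one_minus_phi).sum() * hx * hy / area   # centre of mass, y component
gx, gy = np.gradient(phi, hx, hy)
perimeter = np.sqrt(gx ** 2 + gy ** 2).sum() * hx * hy   # P_b ~ integral of |grad phi|

chi = 2.0 * np.sqrt(np.pi * area) / perimeter
print(f"area = {area:.4f} (pi R^2 = {np.pi * R ** 2:.4f}), centre = ({xc:.3f}, {yc:.3f})")
print(f"perimeter = {perimeter:.4f} (2 pi R = {2 * np.pi * R:.4f}), chi = {chi:.3f}")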
Figure <ref> shows the shape of the bubble at t = T_f and one can easily notice that we are able to recover the reference shape of the bubble. Figure <ref> reports the evolution of the degree of circularity. A good qualitative agreement is established, with only slightly lower values for our numerical results. Figure <ref> reports the evolution of the vertical coordinate of the position of the center of mass. From a quantitative point of view, the center of mass reaches y_c = 2.156, which is in good agreement with the value y_c = 2.162 ± 0.002 reported in <cit.>. Finally, Figure <ref> shows the evolution of the vertical coordinate of the velocity of the center of mass. The maximum rise velocity of the center of mass is v_c = 0.3461, which is again in good agreement with the value v_c = 0.3456 ± 0.0003 reported in <cit.>. We now analyze the second configuration. The time step is Δ t = 5 · 10^-3, yielding a maximum advective Courant number C_u≈ 1.4 and an acoustic Courant number C = 1600. We also set ε = h = 1/160, Δτ = 0.05h = 3.125 × 10^-4, u_c = 0.0125 u_max and β = 2. Figure <ref> shows the shape of the bubble at t = T_f. The bubble develops a non-convex shape with thin filaments. The solutions given in <cit.> are different and, in some cases, the thin filaments tend to break off, although it is unclear if such a phenomenon should be observed in the current two-dimensional setting. The obtained profile is however in good agreement with that of Group 2 in <cit.>. Figures <ref>, <ref>, and <ref> show the evolution of the degree of circularity, the vertical coordinate of the position of the center of mass, and the vertical coordinate of the velocity of the center of mass, respectively. A good qualitative agreement is established for the quantities of interest, even though deviations from the chosen reference solution are visible. In particular, differences appear for the degree of circularity starting from t ≈ 2.5, when the thin filaments start developing. Moreover, the second peak for the rising velocity reaches a lower value. As mentioned above, there is no clear agreement concerning the thin filamentary regions, and, therefore, their development can strongly affect the computation of the reference quantities and can lead to different numerical results. We now employ Adaptive Mesh Refinement (AMR) to increase the resolution near the interface. We consider the same refinement criterion (<ref>) and the same thresholds for η_K adopted in Section <ref> and we allow up to two local refinements, so as to obtain h = 1/640 and a maximum resolution which would correspond to a 1280 × 2560 uniform grid. Figure <ref> shows both the shape of the bubble and the computational grid at t = T_f. One can notice that the resolution is enhanced close to the interface between the two fluids. The final grid consists of 283094 elements. Figure <ref> reports a comparison for the quantities of interest between the fixed-grid simulation and the adaptive one. The results show that we have reached grid independence, since only the degree of circularity slightly differs between the two simulations, whereas the profiles of the vertical coordinates of both velocity and position of the center of mass are visually indistinguishable. The development of the thin filamentary regions depends significantly on the modelling of the viscosity coefficient μ, as pointed out in <cit.> for diffuse interface models.
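The flagging logic behind the adaptive runs (both here and in the Rayleigh-Taylor case) can be sketched in a few lines. The snippet below is an illustrative stand-alone version, not deal.II code: cells are represented as plain dictionaries carrying the nodal values of |∇ϕ| and their refinement level, while the thresholds 10 and 5 and the cap of two refinement levels are taken from the text.

REFINE_THRESHOLD, COARSEN_THRESHOLD, MAX_LEVEL = 10.0, 5.0, 2

def flag_cells(cells):
    # flag each cell based on eta_K = max over the cell's nodes of |grad phi|
    flags = []
    for cell in cells:
        eta = max(cell["grad_phi_nodes"])
        if eta > REFINE_THRESHOLD and cell["level"] < MAX_LEVEL:
            flags.append("refine")
        elif eta < COARSEN_THRESHOLD and cell["level"] > 0:
            flags.append("coarsen")
        else:
            flags.append("keep")
    return flags

# toy example: only the middle cell crosses the interface
cells = [
    {"grad_phi_nodes": [0.1, 0.2, 0.3, 0.2], "level": 0},     # far from the interface
    {"grad_phi_nodes": [8.0, 25.0, 30.0, 12.0], "level": 0},  # steep gradient: refine
    {"grad_phi_nodes": [1.0, 2.0, 4.0, 3.0], "level": 1},     # previously refined: coarsen
]
print(flag_cells(cells))   # ['keep', 'refine', 'coarsen']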
A popular alternative to the linear interpolation model defined in (<ref>) is the so-called harmonic interpolation, defined as 1/μ = H_ε(φ) + μ_1/μ_2(1 - H_ε(φ)). This choice yields results which are more similar to Group 1 in <cit.>, where a break-up occurs (see Figure <ref>). As far as the quantities of interest are concerned, we notice from Figure <ref> that, since the thin elongated filaments break up, the degree of circularity is higher. Moreover, both the second peak of the rising velocity and the final position of the center of mass are significantly higher. This analysis further confirms how challenging it is to define a reference benchmark solution when the bubble undergoes large deformations.

§ CONCLUSIONS
Building on the experience of <cit.>, we have proposed an implicit Discontinuous Galerkin discretization for incompressible two-phase flows. While discretizations of the incompressible two-phase flow equations have been proposed in many other papers, we have presented here an approach based on an artificial compressibility formulation in order to avoid some well-known issues of projection methods. The time discretization relies on a projection method based on the L-stable TR-BDF2 scheme. The implementation has been carried out in the framework of the numerical library deal.II, whose mesh adaptation capabilities have been exploited to increase the resolution near the interface between the two fluids. The effectiveness of the proposed approach has been shown in a number of classical benchmarks. In particular, for the rising bubble test case, the influence of possible choices for the mixture viscosity when the interface undergoes large deformations has been assessed, following an analysis previously carried out for diffuse interface models. In future work, we aim to exploit well-resolved interfaces for an analysis of the evolution equations of interfacial quantities, as well as to extend analogous approaches to fully compressible flows.

§ ACKNOWLEDGEMENTS
The author would like to thank L. Bonaventura and P. Barbante for several useful discussions on related topics. The author gratefully acknowledges N. Parolini for providing the original data of the rising bubble test case discussed in Section <ref>. The simulations have been partly run at CINECA thanks to the computational resources made available through the NUMNETF-HP10C06Y02 ISCRA-C project. This work has been partially supported by the ESCAPE-2 project, European Union’s Horizon 2020 Research and Innovation Programme (Grant Agreement No. 800897).
http://arxiv.org/abs/2307.06020v1
20230712090253
Representing Vineyard Modules
[ "Katharine Turner" ]
math.RT
[ "math.RT", "math.AT" ]
Representing Vineyard Modules
Katharine Turner

Time-series of persistence diagrams, known as vineyards, have been shown to be useful in diverse applications. A natural algebraic version of vineyards is a time series of persistence modules equipped with interleaving maps between the persistence modules at different time values. We call this a vineyard module. In this paper we will set up the framework for representing vineyard modules via families of matrices and outline an algorithmic way to change the bases of the persistence modules at each time step within the vineyard module to make the matrices in this representation as simple as possible. With some reasonable assumptions on the vineyard modules, this simplified representation of the vineyard module can be completely described (up to isomorphism) by the underlying vineyard and a vector of finite length. We first must set up a lot of preliminary results about changes of bases for persistence modules where we are given ϵ-interleaving maps for sufficiently close ϵ. While this vector representation is not in general guaranteed to be unique, we can prove that it will always be zero when the vineyard module is isomorphic to the direct sum of vine modules. This new perspective on vineyards provides an interesting and yet tractable case study within multi-parameter persistence.

§ INTRODUCTION
Vineyards (<cit.>) are established within the topological data analysis literature as a way of studying time-varying data, with applications including music classification <cit.>, detecting dynamical regime change <cit.>, EEG dynamics <cit.>, and studying fMRI data <cit.>. Historically vineyards are defined as a continuous map from a finite real interval to the space of persistence diagrams. Informally, there is an intuitive decomposition of vineyards into paths of points within the persistence diagrams which are called vines. However, to formally and rigorously decompose vineyards we need to view them as algebraic objects. In turn this requires informative ways to represent this algebraic information. To view vineyards as an algebraic object we first need to consider them as a continuous map from the unit interval to the space of persistence modules instead of persistence diagrams. There is now an important choice - do we merely require the existence of appropriate interleaving maps (which means we have no more information than the continuous map into the space of persistence diagrams) or do we incorporate the interleaving maps as part of the algebraic object?
To distinguish these situations we use the term vineyard for a continuous map from a closed interval [t_1, t_2] to the space of persistence diagrams, and vineyard module to denote the algebraic object consisting of the parameterised family of the original persistence modules alongside interleaving maps which are required to commute with the transition maps within the persistence modules. It turns out that there is a dramatic difference in the types of indecomposable vineyards under these two different paradigms. Vineyard modules contain strictly more information than vineyards. Different vineyard modules can become isomorphic as vineyards (when we forget the interleaving maps). Vineyards decompose naturally into paths of points within the plane. If no persistence diagram has any points with higher multiplicity, then this decomposition is guaranteed to be unique by continuity. These paths are called vines in the existing literature. We can define vine modules as vineyard modules whose corresponding vineyard is a vine. The definition of an indecomposable vineyard module stems naturally from the definitions of morphisms and direct sums of vineyard modules. A morphism between vineyard modules is a family of morphisms, one for each time value, which commute appropriately with the interleaving maps between the persistence modules. Direct sums can be constructed by taking direct sums for the persistence modules at each time value and constructing the interleaving maps as direct sums of interleaving maps for each summand. At the end we illustrate an example of a vineyard module which is provably not isomorphic to the direct sum of two vine modules. A complete characterisation of indecomposable vineyard modules is beyond the scope of this paper. Here we focus on the first prerequisite step of creating a framework to represent vineyard modules. We define a vine and matrix representation of a vineyard module which consists of an index set of vines and a family of matrices. We show that two vineyard modules with the same vine and matrix representation are isomorphic. If all the matrices in this vine and matrix representation satisfy a common block diagonal form then we can automatically split the vineyard module as a direct sum of vineyard modules constructed over each of the blocks. However, there are many different vine and matrix representations for the same vineyard module, as the representation depends heavily on the choices of basis made for the persistence modules at each time step. To counteract this ambiguity we outline an algorithmic way to change the bases of the persistence modules at each time step within the vineyard module to make the matrices in this representation as simple as possible. With some reasonable assumptions on the vineyard modules, this simplified representation of the vineyard module can be described (up to isomorphism) by the underlying vineyard and a vector of finite length. We first must set up a lot of preliminary results about changes of bases for persistence modules where we are given ϵ-interleaving maps for sufficiently close ϵ. While we cannot show that this representation is guaranteed to be unique, we can prove that it will always be zero when the vineyard module is isomorphic to the direct sum of vine modules. As such it provides an algorithmic method of determining when a vineyard module is trivial. There are many potential directions of research studying these vineyard modules, with this paper providing a framework for studying them.
This new perspective on vineyards provides an interesting new case study within multi-parameter persistence: more complicated than 1-parameter persistence and yet much more tractable than ladders of persistence modules (which are effectively two persistence modules with a morphism between them), let alone the persistent homology of bi-filtrations. Related work includes the characterisation of the space of bases for a persistence module and the isomorphism classes and matrix representations of ladders (<cit.>). Other related work includes algorithms for updating persistence diagrams along vineyards such as in <cit.>. There is potential for efficient computation of representations of vineyard modules. § INTRODUCING VINEYARD MODULES AND OUR SIMPLIFYING ASSUMPTIONS This paper assumes readers are familiar with the algebra of persistence modules, including interleaving maps, the interleaving distance and the bottleneck distance. This section will instead focus on the various simplifying assumptions we will make. These assumptions relate to finiteness and genericity and will be reasonable in many applications. Recall that a persistence module X is a collection of vector spaces {X_t}_t∈ℝ along with transition maps ψ_s^t: X_s → X_t for all s≤ t such that ψ_s^s is the identity for all s and ψ_s^t∘ψ_r^s=ψ_r^t whenever r≤ s≤ t. A morphism between persistence modules α: X→Y is a parameterised family of linear maps α_r : X_r → Y_r which commute with the transition maps of both X and Y. An isomorphism between persistence modules is an invertible morphism. In the representation theory of persistence modules the building blocks are interval modules. An interval module over the interval [b,d) is a persistence module with X_t a copy of the field for t∈ [b,d) and 0 otherwise. The transition maps are the identity when s,t∈ [b,d) and 0 otherwise. We denote the interval module over the interval [b,d) by I[b,d). Throughout this paper we assume that every persistence module is isomorphic to ⊕_i=1^NI[b_i, d_i), with b_i∈ℝ, N finite and [b_i,d_i)≠ [b_j,d_j) for all i≠ j. We know that, up to permuting the order of the intervals, this decomposition is unique. In full generality there are four types of intervals, i.e. open-open – (,), open-closed – (,], closed-open – [,), and closed-closed – [,], which may appear in the decomposition of persistence modules, but we will assume no intervals of these other forms appear. Note that this restriction naturally occurs in virtually all applications. By considering each bar in the persistence barcode as a point in ℝ^2 with the first coordinate b_i and second coordinate d_i, we obtain the persistence diagram. We refer to b_i as the birth time and d_i as the death time. We will denote the space of persistence diagrams by 𝒟. Note that by our simplifying assumptions all our persistence diagrams contain only finitely many off-diagonal points. The space of persistence diagrams is equipped with many metrics. In this paper we will only consider the bottleneck distance. The bottleneck distance is a form of optimal transport metric. For X and Y persistence diagrams with off-diagonal points {x_i=(a_i, b_i)} and {y_j=(c_j, d_j)} respectively, a transportation plan between X and Y is a subset M⊂ X× Y such that each x_i and y_j appears in at most one pair. Let U(X)⊂ X be the x_i not appearing in any pair in M and U(Y)⊂ Y the set of y_j not appearing in any pair in M.
The cost associated to M is cost(M)=max{sup_(x_i,y_j)∈ M (max(|a_i-c_j|, |b_i-d_j|)), sup_x_i∈ U(X)(|a_i-b_i|/2), sup_y_j∈ U(Y)(|c_j-d_j|/2) } The bottleneck distance is defined as the infimum of the costs over all transportation plans. One of the fundamental shifts in perspective required for topology to be useful in applications is the ability to quantify how far from isomorphic two objects are. To do this we loosen the definition of a morphism to an ϵ-morphism, which incorporates ϵ worth of wiggle room. An ϵ-morphism between persistence modules α: X→Y is a parameterised family of linear maps α_r : X_r → Y_r+ϵ which commute with the transition maps of both X and Y. Note that a 0-morphism is just a morphism. In this wiggle-room universe the analogous concept for an isomorphism is an ϵ-interleaving. This consists of a pair of ϵ-morphisms α: X→Y and β:Y→X such that all the maps (the α_t, β_t and the transition maps in X and Y) commute appropriately. We can use interleaving maps to define the interleaving distance between persistence modules; d_int(X, Y)=inf{ϵ≥ 0|there exists an ϵ-interleaving between X and Y}. It is well known that the interleaving distance between two persistence modules is the same as the bottleneck distance between their respective diagrams <cit.>. For a vineyard module the persistence modules at each time value are given, so we will be using interleaving distances. We are now ready to define vineyards and then vineyard modules. A vineyard is a map from [t_1,t_2] to 𝒟 which is continuous with respect to the bottleneck distance. We can define the domain of a vineyard as the set of times where the corresponding persistence diagram contains at least one off-diagonal point. Given a vineyard X={X_s} the support of X is the set of s such that X_s contains an off-diagonal point. A vine is a map from a compact interval to 𝒟 which is continuous with respect to the bottleneck distance, such that the support is a non-empty interval and the number of off-diagonal points is at most one. Every vineyard can be written as the union of a finite number of vines. If no persistence diagram has any points of multiplicity greater than or equal to 2 then this union of vines is unique up to reindexing. This is the generic case. Whenever there are points of higher multiplicity in the persistence diagrams there will be a combinatorial explosion of different potential decompositions of the vineyard into vines. Our simplifying assumptions will imply that all the vineyards we study are generic (i.e., no persistence diagram contains a point of multiplicity greater than one), that every vineyard contains only a finite number of vines, and that at most two critical values can coincide (i.e., only one pair of birth values, one pair of death values, or one pair consisting of a birth value and a death value can be the same). These genericity assumptions are analogous to those found within Cerf theory, which studies one-parameter families of Morse functions. Let [t_1, t_2]⊂ℝ and f: [t_1, t_2]→ (0,∞) be a bounded continuous function. For s,t∈[t_1, t_2] set F(s,t)=|∫_s^t f(x) dx|. A vineyard module over [t_1, t_2] with respect to f is a family of persistence modules {V_t | t_1≤ t≤ t_2}, alongside for each s<t an F(s,t)-interleaving α_s^t: V_s → V_t and β_t^s: V_t→ V_s such that all the interleaving maps commute with each other and the transition maps within the individual persistence modules. We call these α_s^t and β_t^s the interleaving maps within the vineyard module as they interleave the persistence modules at different time values.
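For small diagrams the definitions above can be evaluated directly. The following Python sketch computes cost(M) for a given transportation plan and brute-forces the bottleneck distance over all plans; it is only meant to illustrate the definitions (the example diagrams and the exponential enumeration are illustrative, not a practical algorithm).

from itertools import combinations, permutations

def matching_cost(X, Y, M):
    # cost(M): matched points pay the sup-norm distance, unmatched points pay half their persistence
    matched_x = {i for i, _ in M}
    matched_y = {j for _, j in M}
    costs = [max(abs(X[i][0] - Y[j][0]), abs(X[i][1] - Y[j][1])) for i, j in M]
    costs += [(d - b) / 2.0 for k, (b, d) in enumerate(X) if k not in matched_x]
    costs += [(d - b) / 2.0 for k, (b, d) in enumerate(Y) if k not in matched_y]
    return max(costs) if costs else 0.0

def bottleneck(X, Y):
    # infimum of cost(M) over all transportation plans, by brute force
    best = float("inf")
    for k in range(min(len(X), len(Y)) + 1):
        for xs in combinations(range(len(X)), k):
            for ys in permutations(range(len(Y)), k):
                best = min(best, matching_cost(X, Y, list(zip(xs, ys))))
    return best

X = [(0.0, 4.0), (1.0, 2.0)]   # off-diagonal points of a diagram with bars [0,4) and [1,2)
Y = [(0.5, 3.5)]               # a diagram with the single bar [0.5, 3.5)
print(bottleneck(X, Y))        # 0.5: pair (0,4) with (0.5,3.5) and discard (1,2)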
For the sake of clarity we will from now on make the simplifying assumption that f:[t_1, t_2]→ (0,∞) is the constant function with value 1. Such vineyard modules and their corresponding vineyards are called 1-Lipschitz. This will imply that the scaling function is F(s,t)=|s-t|. One potential way to extend the results from 1-Lipschitz to more general vineyard modules would be to explore rescaling the time parameter via the value of f at each point in time. There are complications when multiple vineyard modules have different scales for the interleaving maps, which has great potential for confusion. For this reason we will restrict here to the simpler case where everything is 1-Lipschitz and leave the generalisation to all vineyard modules to future research. We will assume that all vineyard modules are 1-Lipschitz. Given a vineyard module we can forget the interleaving maps and consider the underlying vineyard. There can be many different vineyard modules that have the same underlying vineyard but are not isomorphic. We can define the vines within a vineyard module as the vines within its corresponding vineyard. As we have two dimensions to consider we will use different terminology to help distinguish them. We will refer to the parameter along the vineyard, which indicates which persistence module we are in, as the time, and the parameter within a single persistence module as the height. A height within a persistence module is critical if it is the birth or death time of some interval within the interval decomposition. A time is critical if the birth and death values within the corresponding persistence module are not all distinct. We will assume that all vineyard modules contain only finitely many vines and only finitely many critical times. We will also assume that at the critical times no more than two critical heights coincide (i.e., only one pair of birth values, one pair of death values, or one pair consisting of a birth value and a death value can be the same). Our simplifying assumptions ensure that the set of vines within a vineyard module is uniquely determined, as there are no points with higher multiplicity than 1 in any of the persistence diagrams. We can use the decomposition of a vineyard into vines to give a consistent labelling of the basis elements within the persistence modules of a vineyard module. This will substantially ease the bookkeeping required. Given a vine γ we can construct its vine module, which is a vineyard module whose persistence modules X_t are the interval modules I[b(γ(t)), d(γ(t))) — with b(γ(t)) and d(γ(t)) the birth and death coordinates of the point γ(t) — for all t in the support of γ, and the zero persistence module otherwise. We also have |s-t|-morphisms between X_s=(X^s_a)_a∈ℝ and X_t=(X^t_a)_a∈ℝ: the maps α_a: X^s_a → X^t_a+|s-t| are the identity for b(γ(s))≤ a<d(γ(t))-|s-t| and the zero map otherwise, and β_a: X^t_a → X^s_a+|s-t| is defined symmetrically. A morphism between vineyard modules A={A_t} and B={B_t} is a family of morphisms α_t:A_t →B_t which commute with all the appropriate interleaving and transition maps. Once we have a notion of morphism we can define submodules, indecomposable modules, simple modules and a decomposition into submodules. There are many directions of theory development of the relevant homological algebra. However we will leave this for future work. Given two vineyard modules we can consider their direct sum. Let V=({V_t},{α_V^s→ t}, {β_V^t→ s}) and W=({W_t}, {α_W^s→ t}, {β_W^t→ s}) be vineyard modules.
Their direct sum V⊕W is the vineyard module with persistence modules {V_t⊕ W_t} and interleaving maps {α_V^s→ t⊕α_W^s→ t} and {β_V^t→ s⊕β_W^t→ s}. We know that every vineyard module is isomorphic to a direct sum of indecomposable vineyard modules. In this decomposition, each vine must be fully contained in a single summand. Let X be a vineyard module which is the direct sum of vineyard modules V={V_t, α_V, β_V} and W={W_t, α_W, β_W}. Let γ be a vine of X. Then either [b(γ(t)), d(γ(t))) is an interval in the interval decomposition of V_t for all t in the support of γ, or [b(γ(t)), d(γ(t))) is an interval in the interval decomposition of W_t for all t in the support of γ. Since X=V⊕W we also have X_t=V_t ⊕ W_t for all t. For each t in the support of γ, [b(γ(t)), d(γ(t))) is an interval in the interval decomposition of X_t so it must either be an interval in V_t or an interval in W_t. Let A^V and A^W be the subsets of the support of γ consisting of the values of t where [b(γ(t)), d(γ(t))) is an interval in the interval decomposition of V_t and W_t respectively. If either of these sets is empty we are done. Suppose neither set is empty. Since the support of γ is a connected interval which is open in (s_0,s_1), without loss of generality (swapping the roles of V and W if necessary) there exists a value t∈ A^V and a sequence {t_n} in A^W which converges to t. Let ϵ>0 be the minimum distance from [b(γ(t)), d(γ(t))) to any other interval in X_t or to the diagonal. This ϵ is non-zero by our genericity assumption that there are no intervals of higher multiplicity. There is an element s∈{t_n} with distance less than ϵ/2 from t. As V is a vineyard module the bottleneck distance between V_t and V_s is bounded by |s-t|. However, since [b(γ(t)), d(γ(t))) is not an interval in the interval decomposition of V_s there is no interval within V_s suitable to pair with [b(γ(t)), d(γ(t))) in V_t. This is a contradiction. From Proposition <ref> we know that, when decomposing a vineyard module into submodules, each vine is contained in exactly one summand, so the decomposition also partitions the vines. Let V be a vineyard module with vines {γ_1, γ_2, …, γ_N}. Let ⊕_i=1^k V_i be a decomposition of V into (non-zero) indecomposable submodules. Then k≤ N and there exists a partition P=⊔_i=1^k P_i of {1, …, N} such that the vineyard of V_i consists of the union of the vines {γ_j | j∈ P_i}. First observe that by Proposition <ref> we know that each vine must be entirely contained in the vineyard of V_i for exactly one i. We know that k≤ N since a vineyard module whose corresponding vineyard has no vines must be the zero vineyard module. Given an underlying vineyard V with vines {γ_1, …, γ_K}, the trivial vineyard module is the direct sum of the vine modules I[γ_i]. § MATRIX REPRESENTATIONS OF Ε-MORPHISMS Throughout we will be exploiting matrix representations of ϵ-morphisms between persistence modules, which first requires understanding what a basis is. Given a persistence module there can be many possible choices of basis. The space of bases is more complicated than in the situation of vector spaces. For a detailed study of the space of all possible bases of a persistence module see <cit.>. Here we will use much more condensed notation. For the purposes of this paper we will use the following description of a basis. Note that this description does require the assumption that our persistence modules are of the form ⊕_i=1^m I[b_i,d_i), and other definitions would be needed if we were considering intervals with different choices of closed/open endpoints.
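Before introducing bases, the definitions above can be made concrete in the simplest case of a single vine. The Python sketch below encodes a made-up 1-Lipschitz vine t ↦ (birth(t), death(t)), writes the interleaving maps of the associated vine module as 0/1 scalars, and verifies on a grid of heights that composing the two interleaving maps gives the 2|s-t| transition map; the particular functions and time values are illustrative assumptions.

import numpy as np

# a made-up 1-Lipschitz vine t -> (birth(t), death(t))
birth = lambda t: 0.2 + 0.3 * np.sin(t)
death = lambda t: 1.5 + 0.5 * np.cos(t)

def alive(a, t):
    # 1 if the interval module of the vine at time t is nonzero at height a
    return 1 if birth(t) <= a < death(t) else 0

def alpha(a, s, t):
    # interleaving map X^s_a -> X^t_{a + |s-t|}: the identity where both spaces are nonzero
    # (for a 1-Lipschitz vine this is exactly birth(s) <= a < death(t) - |s-t|)
    return alive(a, s) * alive(a + abs(s - t), t)

def transition(a, delta, s):
    # transition map X^s_a -> X^s_{a + delta} of the interval module at time s
    return alive(a, s) * alive(a + delta, s)

s, t = 0.3, 0.75
delta = abs(s - t)
for a in np.linspace(-0.5, 2.5, 601):
    beta_after_alpha = alpha(a + delta, t, s) * alpha(a, s, t)
    assert beta_after_alpha == transition(a, 2 * delta, s)
print("beta composed with alpha equals the 2|s-t| transition map at every sampled height")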
Before we define a basis we must first define the birth and death time of an element within a persistence module. Let X=(X_t, ϕ_s^t) be a persistence module. We say that x∈ X_t is born at t, denoted (x)=t, if x is not in the image of ϕ_s^t(X_s) for any s<t. We define the death of x, denoted (x), to be inf{s>(x)|ϕ_(x)^s(x_t)=0}. Suppose X=(X_t, ϕ_s^t) is a persistence module with interval decomposition ⊕_i=1^N I[b_i,d_i) such that no intervals appear with multiplicity greater than 1. The set {x_1, x_2, … , x_N| x_i ∈ X_b_i} is called a basis for X if, (x_i)=b_i, (x_i)=d_i and for each t∈, the set {ϕ_(x_i)^t(x_i) |(x_i)≤ t<(x_i)} is a basis of X_t. Once we have fixed a choice of basis for X=({X_t}, {ϕ_s^t}) and Y=({Y_t}, {ψ_s^t}) we can consider the matrix representations for any ϵ-morphism α:X→Y with respect to this basis. Using the index order for the basis elements (B_X={x_i} generators of X and B_Y={y_j} generators for Y), we can construct matrix _B_X^B_Y(α) by requiring α_(x_i)(x_i)=∑_{j|(x_i)+ϵ∈ [(y_j), (y_j))}_B_X^B_Y(α)(j,i)ψ_(y_j)^(x_i) +ϵ(y_j). and setting _B_X^B_Y(α)(j,i)=0 whenever (x_i) +ϵ∉ [(y_j), (y_j)). This is well defined as each vector space Y_t∈Y has {ψ_(y_j)^t(y_j)|(y_j)≤ t<(y_j)} as a basis. For fixed bases B_X and B_Y of persistence modules X=({X_t}, {ϕ_s^t}) and Y=({Y_t, ψ_s^t}), the matrix _B_X^B_Y(α) completely determines α. Furthermore, if _B_X^B_Y(α)(j,i)≠ 0 then (y_j) ≤(x_i)+ϵ<(y_j)≤(x_i)+ϵ We can write each of the linear maps α_t via _B_X^B_Y(α) and the transition maps ϕ and ψ. α_s(∑_(x_i)≤ s<(x_i)λ_i ϕ_(x_i)^s(x_i)) =∑_(x_i)≤ s<(x_i)λ_i α_s(ϕ_(x_i)^s(x_i)) =∑_(x_i)≤ s<(x_i)λ_i ψ_(x_i)+ϵ^s+ϵ(α_(x_i)(x_i)) =∑_(x_i)≤ s<(x_i)λ_i ψ_(x_i)+ϵ^s+ϵ(∑_(y_j)≤(x_i) +ϵ<(y_j)_B_X^B_Y(α)(j,i)ψ_(y_j)^(x_i) +ϵ(y_j) ) =∑_{i|(x_i)≤ s<(x_i)}λ_i∑_{j|(y_j)≤(x_i) +ϵ<(y_j)}_B_X^B_Y(α)(j,i)ψ_(y_j)^s+ϵ(y_j) Since the α_r must commute with the transition maps if (α)(j,i) is non-zero then (x_i) ≥(y_j) - ϵ. Instead of using change of basis matrices (such as explored in <cit.>) we will instead represent each change of basis as a linear transformation of the previous basis. That is, we wish to write the new basis elements as a linear combination of the old basis elements. This will reduce the linear algebra calculations needed later and avoid the issue of using inverses (which are not well-defined when using extended basis later). Let X be a persistence module with interval decomposition ⊕_i=1^N I[b_i,d_i) such that no intervals appear with multiplicity greater than 1. We say that an N× N matrix A=(a_ij) is a basis transformation matrix for X if a_ii≠ 0 for all i and whenever a_ji≠ 0 then (x_j)≤(x_i) and (x_j)≤(x_i). The following lemma is effectively proved in <cit.> but with such vastly different notation and perspective that we include the proof here. Let X=({X_t}, {ϕ_s^t}) be a persistence module with interval decomposition ⊕_i=1^N I[b_i,d_i) such that no intervals appear with multiplicity greater than 1. Fix a basis B={x_1, … x_N} for X. If A=(a_ji) is a basis transformation matrix then the set B^new:={x^new_1, x^new_2, … , x^new_N} forms a basis for X where x^new_i=∑ a_jiϕ_(x_j)^(x_i)(x_j)∈ X_(x_i). With a slight abuse of notation we write B_Y^new=A(B_Y). For this new basis we have (x_i^new)=(x_i) and (x_i^new)=(x_i). Let x^new_i=∑ a_jiϕ_(x_j)^(x_i)(x_j) which by construction is an element of X_(x_i). Fix a sufficiently small δ>0 so that no births or deaths events occur within [(x_i)-δ, (x_i)). As B is a basis, ϕ_(x_i)^(x_i) (x_i)=0. 
Furthermore, by assumption, ϕ_(x_j)^(x_i) (x_j)=0 whenever a_ji≠ 0. Together these imply ϕ_(x_i)^(x_i) (x_i^new)=a_iiϕ_(x_i)^(x_i) (x_i) + ∑_{j |(x_j)<(i)} a_jiϕ_(x_j)^(x_i)(x_j)=0. For t∈ [(x_i), (x_i)) we know that {ϕ_(x_j)^t(x_j) |(x_j)≤ t<(x_j)} is a basis of X_t. This implies that ϕ_(x_i)^t(x_i) is linearly independent to {ϕ_(x_j)^t(x_j) |(x_j)≤ t<(x_j), j≠ i} and ϕ_(x_i)^t (x_i^new)=a_iiϕ_(x_i)^t (x_i) + ∑_{j |(x_j)<(i)} a_jiϕ_(x_j)^t(x_j)≠ 0. We have now shown that (x_i^new)=(x_i) and (x_i^new)=(x_i) for all i. We need to show that the set {ϕ_(x_i^new)^t(x_i) |(x_i^new)≤ t<(x_i^new)} is a basis of X_t. Fix a t and let S={i|(x_i)≤ t<(x_i)}. set A_t to be the matrix A restricted to the columns and rows with indices in S. Without loss of generality, rearrange the order of the indices in S and the corresponding rows and columns within A_S such that b_j≤ b_i whenever j≤ i. Our assumptions on the entries a_ji imply that A_t is an upper triangular matrix with non-zero diagonal entries. This implies A_t is always invertible. As vectors in X_t we have x_i^new=A_t x_i for each i∈ S. Since {x_i| i∈ S} is a basis of X_t and A_t is invertible we also have {x_i^new| i∈ S} is a basis for X_t. Note that if (x_i)>(x_j) then ϕ_(x_j)^(x_i)(x_j)=0. This means that more than one basis transformation matrix can create the same new basis. Here we are only considering basis transformations which retain the same indexing with respect to some interval decomposition. It would be possible to generalise to allow for permutations of the indexing of the intervals. However in the context of vineyard modules this is unnecessary and a potential source of confusion. We now want to understand how the matrices of ϵ-morphisms change when we transform the basis. This will be analogous to matrix theory but some care needs to be made. We will use to denote the identity matrix. Consider an ϵ-morphism α:X→Y where B_X is a basis for X and B_Y^old is a basis for Y such that |(x_i)-(y_i)|<ϵ and |(x_i)-(y_i)|<ϵ and all intervals are of length greater than 2ϵ. If _B_X^B_Y^old(α) is a basis transformation matrix for Y and B_Y^new= _B_X^B_Y^old(α)(B_Y^old) is the corresponding transformed basis then _B_X^B_Y^new(α)=. Fix i. Since _B_X^B_Y^old(α) is a basis transformation we know that whenever _B_X^B_Y^old(α)(j,i)≠0 we have (y_j^old)≤(y_i^old). This means we can rewrite each of the ψ_(y_j^old)^(x_i)+ϵ as the composition of ψ_(y_i^old)^(x_i)+ϵ and ψ_(y_j^old)^(y_i^old). α_(x_i)(x_i) =∑_(y_j^old)≤(x_i)+ϵ_B_X^B_Y^old(α)(j,i)ψ_(y_j^old)^(x_i)+ϵ(y_j^old) =ψ_(y_i^old)^(x_i)+ϵ(∑_(y_j^old)≤(x_i)+ϵ_B_X^B_Y^old(α)(j,i)ψ_(y_j^old)^(y_i^old)(y_j^old)) =ψ_(y_i^new)^(x_i)+ϵ(y_j^new). Note that (y_i^new)=(y_i^old) by definition. Slightly more complication but of high importance later is the case where _B_X^B_Y^old(α)e_lk^μ is a basis transformation, where e_lk^μ is the elementary matrix with e_lk^μ (i,j)= 1 if i=j μ if (i,j)=(k,l) 0 otherwise. The function A↦ Ae^μ_lk corresponds to the standard elementary column operation of adding μ times column l to column k. Consider an ϵ-morphism α:X→Y where B_X is a basis for X and B_Y^old is a basis for Y such that |(x_i)-(y_i^old)|<ϵ and |(x_i)-(y_i^old)|<ϵ and all intervals are of length greater than 2ϵ. Further assume that (x_l)+ϵ < (y_k^old). If _B_X^B_Y^old(α)e_lk^-λ is a basis transformation for Y and B_Y^new is the basis for Y and after this basis transformation. Then _B_X^B_Y^new(α)=e^λ_lk. First consider i≠ k. 
Under the basis transformation _B_X^B_Y^old(α)e_lk^-λ we have y_i^new=∑_j(_B_X^B_Y^old(α)e_lk^-λ)(j,i)ψ_(y_j^old)^(y_i^old)(y_j^old)=∑_j _B_X^B_Y^old(α)(j,i)ψ_(y_j^old)^(y_i^old)(y_j^old). With the new basis we have α_(x_i)(x_i) =∑_(y_j^old)≤(x_i)+ϵ_B_X^B_Y^old(α)(j,i)ψ_(y_j^old)^(x_i)+ϵ(y_j^old) =ψ_(y_i^old)^(x_i)+ϵ(∑_(y_j^old)≤(x_i)+ϵ_B_X^B_Y^old(α)(j,i)ψ_(y_j^old)^(y_i^old)(y_j^old)) =ψ_(y_i^new)^(x_i)+ϵ(y_i^new) Note that (y_i^new)=(y_i^old) by definition. As the (j,l) entry of _B_X^B_Y^old(α) and _B_X^B_Y^old(α)e_lk^-λ agree, if _B_X^B_Y^old(α)(j,l)≠ 0 then (y_j)≤(y_l^old). If _B_X^B_Y^old(α)(j,k)-λ_B_X^B_Y^old(α)(j,l)≠ 0 then (y_j^old)≤(y_k^old) as by assumption as _B_X^B_Y^old(α)e_lk^-λ is a basis transformation for Y. We can use these facts to rewrite the summations in the following calculation. α_(x_k)(x_k) =∑_(y_j^old)≤(x_k)+ϵ_B_X^B_Y^old(α)(j,k)ψ_(y_j^old)^(x_k)+ϵ(y_j^old) =∑_(y_j^old)≤(x_k)+ϵ ( _B_X^B_Y^old(α)(j,k)-λ_B_X^B_Y^old(α)(j,l)) +λ_B_X^B_Y^old(α)(j,l))ψ_(y_j^old)^(x_k)+ϵ(y_j^old) =ψ_(y_k^old)^(x_k)+ϵ(∑_(y_j^old)≤(y_k^old) (_B_X^B_Y^old(α)(j,k)-λ_B_X^B_Y^old(α)(j,l))ψ_(y_j^old)^(y_k^old)(y_j^old)) + ψ_(y_l^old)^(x_k)+ϵλ(∑_(y_j^old)≤(y_l^old)_B_X^B_Y^old(α)(j,l)ψ_(y_j^old)^(y_l^old)(y_j^old)) =ψ_(y_k^new)^(x_k)+ϵ(y_k^new) + λψ_(y_l^new)^(x_k)+ϵ(y_l^new) From our assumptions about the lengths of intervals and the pairing of critical values we know that (x_i)+ϵ < (y_i^new) for all i. We also assumed that (x_l)+ϵ < (y_k^new). Since the {y_j^new} form a basis we can conclude that _B_X^B_Y^new(α)=e_lk^λ. To make the bookkeeping easier later we will want to have the same number of basis elements throughout the time period of a vineyard. It will be helpful to generalise our notion of basis to allow for extra zero elements. To do this we will introduce the definition of an extended basis and transformation of an extended basis. Given a persistence module X we say that an extended basis of X is a multiset B' consisting of the union of a basis B of X and an indexed set of zero elements. Note that within an extended basis the order of the indices of the zero and non-zero elements may be mixed up. When we wish to pull out the basis contained in an extended basis we will be restricting to appropriate subset of indices. The notions of the matrix of a morphism and basis transformations naturally extend to extended basis. To extend the definition of the matrix of a morphism we merely add in rows and columns of zeros for the indices of the extended basis which are zero. To extend the notion of a basis transformation we also add rows and columns for the zero elements of the different extended basis. If we restrict the extended basis transformation matrix to the indices of the contained bases then we will have a (non-extended) basis transformation matrix. § SIMPLIFYING THE MATRIX FOR AN Ε-INTERLEAVING This section is devoted to understanding when _B_X^B_Y(α) is a basis transformation matrix for Y when α:X→Y is part of an interleaving of sufficiently close persistence modules. Firstly we will establish a useful lemma for calculations. Let B_X and B_Y be bases for persistence modules X=(X_t, ϕ_s^t) and Y=(Y_t, ψ_s^t). Let α: X→Y and β:Y→X form an ϵ-interleaving. Then for each i we have β_(x_i)+ϵ(α_(x_i)(x_i))= ∑_j,k_B_X^B_Y(α)(j,i) _B_Y^B_X(β)(k,j) ϕ_(x_k)^(x_i)+2ϵ(x_k). 
β_(x_i)+ϵ(α_(x_i)(x_i)) =β_(x_i)+ϵ(∑_j _B_X^B_Y(α)(j,i) ψ_(y_j)^(x_i)+ϵ(y_j)) =∑_j _B_X^B_Y(α)(j,i) β_(x_i)+ϵ( ψ_(y_j)^(x_i)+ϵ(y_j)) =∑_j _B_X^B_Y(α)(j,i) ϕ_(y_j)+ϵ^(x_i)+2ϵ(β_(y_j)(y_j)) =∑_j _B_X^B_Y(α)(j,i) ϕ_(y_j)+ϵ^(x_i)+2ϵ(∑_k _B_Y^B_X(β)(k,j) ϕ_(x_k)^(y_j)+ϵ) = ∑_j,k_B_X^B_Y(α)(j,i) _B_Y^B_X(β)(k,j) ϕ_(x_k)^(x_i)+2ϵ(x_k) We want to relate the ϵ-morphisms within an interleaving (for sufficiently small ϵ) to basis transformation matrices. The main consideration is how the natural ordering amoungst the intervals changes. There is a natural partial order on ^2 with (b_1,d_1)≤ (b_2,d_2) whenever b_1≤ b_2 and d_1≤ d_2. This partial order induces a partial order on the set of intervals within a barcode and from this we have a natural partial order on the basis elements associated to each of the intervals. Let x_i, x_j be basis elements of persistence module X. We say x_j ≤ x_i if (x_i)≤(x_j) and (x_j)≤(x_i). We start with the (boring) case where the order of the critical values do not change and later we will consider what can happen when critical values coincide. Here the partial order stays the same, even with some ϵ wiggle room. Let X and Y be persistence modules where all critical values are distinct and the difference between pairs of critical values within a persistence module is greater than 2ϵ, and α: X→Y and β: Y→X form an ϵ-interleaving. This implies there must be the same number of intervals X and Y and we can pair them up so that the births and deaths vary by at most ϵ. For any choice of basis B_X={x_i} for X and B^old_Y={y_i} for Y such that |(x_i)-(y_i)|<ϵ and |(x_i)-(y_i)|<ϵ for all i, we have _B_X^B^old_Y(α) is a basis transformation matrix for Y. Let B_Y^new= _B_X^B^old_Y(α)B_Y be the new basis for Y. Then both _B_X^B_Y^new(α) and _B_Y^new^B_X(β) are the identity matrix. Suppose that _B_X^B^old_Y(α)(j,i)≠ 0. We know (y_j) ≤(x_i)+ϵ. Combined with our assumption that |(x_j)-(y_j)|<ϵ we have (y_j) ≤(y_i)+2ϵ. Our assumption that every pair of critical values is at least 2ϵ apart strengthens (y_j) ≤(y_i)+2ϵ to (y_j) ≤(y_i). The same argument can be applied to conclude that (y_j) ≤(y_i) for all (j,i) with _B_X^B^old_Y(α)(j,i)≠ 0. By Lemma <ref> β_(x_i)+ϵ(α_(x_i)(x_i))= ∑_j,k_B_X^B^old_Y(α)(j,i) _B^old_Y^B_X(β)(k,j) ϕ_(x_k)^(x_i)+2ϵ(x_k). Since α and β form an ϵ-interleaving we have β_(x_i)+ϵ(α_(x_i)(x_i))= ϕ_(x_i)^(x_i)+2ϵ(x_i). As the distances between every pair of critical values within X are greater than 2ϵ we know that {ϕ_(x_k)^(x_i)+2ϵ(x_k)} forms a basis for X_(x_i)+2ϵ and thus ∑_j_B_X^B^old_Y(α)(j,i) _B^old_Y^B_X(β)(i,j)=1. Since the order of the critical values in X and Y are the same, we know that for j≠ i that at least one of _B_X^B^old_Y(α)(j,i)=0 or _B^old_Y^B_X(β)(i,j)=0. This implies that _B_X^B^old_Y(α)(i,i)_B^old_Y^B_X(β)(i,i)=1 and hence _B_X^B^old_Y(α)(i,i)≠ 0. We have now shown that _B_X^B_Y^old(α) is a basis transformation matrix for Y. By Lemma <ref> _B_X^B_Y^new(α) is the identity. Substituting this into the equation in Lemma <ref> we see for each i that ϕ_(x_i)^(x_i)+2ϵ(x_i)=β_(x_i)+ϵ(α_(x_i)(x_i))= ∑_k _B_Y^new^B_X(β)(k,i) ϕ_(x_k)^(x_i)+2ϵ(x_k). Again using that, for each i, we know {ϕ_(x_k)^(x_i)+2ϵ(x_k)} forms a basis of X_(x_i)+2ϵ, and that no critical values occur in ((x_i), (x_i)+2ϵ], we can conclude that _B_Y^new^B_X(β) is the identity matrix. There are many different cases of segments to consider separately, which are illustrated in Table <ref> and Table <ref>. Our simplifying assumptions do reduce the number of cases to consider. 
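The constraints established in this section can be tabulated explicitly. Given the intervals of X and Y and the interleaving parameter ϵ, the sketch below marks which entries of the matrix of an ϵ-morphism α: X→Y are allowed to be nonzero (the birth of y_j is at most the birth of x_i plus ϵ, which is less than the death of y_j, which is at most the death of x_i plus ϵ), and checks the necessary support condition for being a basis transformation matrix for Y; the barcodes are made up so that the hypotheses of the proposition above hold, and the code assumes the intervals of X and Y have already been paired up.

import numpy as np

def allowed_support(intervals_X, intervals_Y, eps):
    # S[j, i] is True iff entry (j, i) of the matrix of an eps-morphism X -> Y may be
    # nonzero, i.e. b_Y[j] <= b_X[i] + eps < d_Y[j] <= d_X[i] + eps
    S = np.zeros((len(intervals_Y), len(intervals_X)), dtype=bool)
    for i, (bx, dx) in enumerate(intervals_X):
        for j, (by, dy) in enumerate(intervals_Y):
            S[j, i] = by <= bx + eps < dy <= dx + eps
    return S

def basis_transformation_support_ok(S, intervals_Y):
    # necessary support condition for a basis transformation matrix for Y: the diagonal is
    # allowed and every allowed (j, i) has b_Y[j] <= b_Y[i] and d_Y[j] <= d_Y[i]
    n = len(intervals_Y)
    if not all(S[i, i] for i in range(n)):
        return False
    return all(not S[j, i] or (intervals_Y[j][0] <= intervals_Y[i][0]
                               and intervals_Y[j][1] <= intervals_Y[i][1])
               for i in range(n) for j in range(n))

eps = 0.05
intervals_X = [(0.00, 1.00), (0.30, 1.50)]   # made-up barcode for X
intervals_Y = [(0.02, 0.98), (0.31, 1.52)]   # matching barcode for Y, critical values within eps

S = allowed_support(intervals_X, intervals_Y, eps)
print(S.astype(int))                                    # [[1 1], [0 1]]
print(basis_transformation_support_ok(S, intervals_Y))  # True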
For sufficiently close time values where we have the same number of intervals, there is either no change in the ordering of the critical values, or a single change between all critical values being distinct and all being distinct except for a single coinciding pair. In the following table we present the different options. We will want to consider the effect of fixing the basis in X and allowing the basis of Y to vary. This means that the roles of X and Y are not symmetric. The indexing throughout this section will use v_k and v_l as the two vines where a potential change in the order of birth and death times occurs, and only these intervals are depicted in the table.

[Table: The different possible cases of critical values coinciding in X or Y such that whenever we are given an ϵ-interleaving α:X→Y and β:Y→X and a basis B_X of X then we can find a basis of Y so that the matrices of the interleaving maps are the identity. The table lists cases 1-10, each illustrated by schematic barcodes of the two relevant intervals in X and in Y; the pictures are not reproduced here.]

Let X and Y be persistence modules such that the changes in the order of the critical values fit one of cases 1-10 in Table <ref>, with the depicted vines involved in the change in the order of critical values being γ_k and γ_l. Further assume that all pairwise differences of critical values are at least 2ϵ except for the following: * |(x_k)-(x_l)| and |(y_k)-(y_l)| in cases 1, 2, and 3, * |(x_k)-(x_l)| and |(y_k)-(y_l)| in cases 4, 5 and 6 * |(x_k)-(x_l)| and |(y_k)-(y_l)| in cases 7, 8, 9, and 10. Suppose that α: X→Y and β: Y→X form an ϵ-interleaving. This implies there must be the same number of intervals in X and Y and we have paired them up so that the births and deaths vary by at most ϵ. For any choice of basis B_X={x_i} for X and B^old_Y={y^old_i} for Y such that |(x_i)-(y^old_i)|<ϵ and |(x_i)-(y^old_i)|<ϵ for all i, we have _B_X^B^old_Y(α) is a basis transformation matrix for Y. Let B_Y^new= _B_X^B^old_Y(α)B^old_Y be the new basis for Y. Then both _B_X^B_Y^new(α) and _B_Y^new^B_X(β) are the identity matrix. We will show that _B_X^B^old_Y(α)(j,i)≠ 0 implies (y^old_j)≤(y^old_i)<(y^old_j)≤(y^old_j). To do this we will split into different options for (j,i). Suppose that (j,i) is neither (k,l) nor (l,k). If _B_X^B^old_Y(α)(j,i)≠ 0 then by definition (y_j^old)≤(x_i) +ϵ <(y_j^old) ≤(x_i)+ϵ. Our pairing of intervals tells us that |(x_i)-(y^old_i)|<ϵ and |(x_i)-(y^old_i)|<ϵ. Together these inequalities imply (y_j^old) ≤(y^old_i)+2ϵ and (y^old_j) ≤(y^old_i)+2ϵ. We have assumed that |(y^old_j)-(y^old_i)|>2ϵ and |(y^old_j)-(y^old_i)|>2ϵ. These strengthen (y_j^old) ≤(y^old_i)+2ϵ to (y_j^old) ≤(y^old_i) and (y^old_j) ≤(y^old_i)+2ϵ to (y^old_j) ≤(y^old_i). Thus _B_X^B^old_Y(α)(j,i)≠ 0 implies (y^old_j)≤(y^old_i)<(y^old_j)≤(y^old_j). Now consider (j,i)=(l,k). In all cases we have (y^old_l)≤(y^old_k) and (y^old_l)≤(y^old_k), so whether _B_X^B_Y^old(α)(k,l) is non-zero or not causes no obstruction for _B_X^B_Y^old(α) being a basis transformation matrix for Y. Finally consider (j,i)=(k,l). Here _B_X^B_Y^old(α)(k,l) is always zero. The reasoning in each case is as follows.
In cases 1, 2 and 3 we have (y^old_k)>(y^old_l)+2ϵ so (y^old_k)>(x^old_l)+ϵ. In cases 4, 5 and 6 we have (y^old_k)>(y^old_l)+2ϵ so (y^old_k)>(x^old_l)+ϵ. In cases 7 and 8 we have (y^old_k)=(y^old_l) which implies (x^old_k)+ϵ >(y^old_l). In cases 9 and 10 we have (x^old_k)=(x^old_l) which implies (x^old_k)+ϵ >(y^old_l). Having covered all the cases we can state that _B_X^B^old_Y(α)(j,i)≠ 0 implies (y^old_j)≤(y^old_i) and (y^old_j)≤(y^old_j) for all (j,i).

From Lemma <ref>, β_(x_i)+ϵ(α_(x_i)(x_i))= ∑_j,k_B_X^B^old_Y(α)(j,i) _B^old_Y^B_X(β)(k,j) ϕ_(x_k)^(x_i)+2ϵ(x_k). Since β_(x_i)+ϵ(α_(x_i)(x_i))=ϕ_(x_i)^(x_i)+2ϵ(x_i) and the {ϕ_(x_k)^(x_i)+2ϵ(x_k)|(x_k)≤(x_i)+2ϵ <(x_k)} form a basis for X_(x_i)+2ϵ, we know that ∑_j_B_X^B^old_Y(α)(j,i) _B^old_Y^B_X(β)(k,j) =1. We thus have shown that _B_X^B^old_Y(α) is a basis transformation matrix for Y. Furthermore, by Lemma <ref> we automatically have that _B_X^B^new_Y(α) is the identity matrix.

We now wish to show that _B_Y^new^B_X(β) is also the identity matrix. Substituting _B_X^B_Y^new(α)= into the equation in Lemma <ref> we see for each i that ϕ_(x_i)^(x_i)+2ϵ(x_i)=β_(x_i)+ϵ(α_(x_i)(x_i))= ∑_j _B_Y^new^B_X(β)(j,i) ϕ_(x_j)^(x_i)+2ϵ(x_j). For each i, we know {ϕ_(x_j)^(x_i)+2ϵ(x_j)| (x_j)≤(x_i)+2ϵ<(x_j)} forms a basis of X_(x_i)+2ϵ. This implies that _B_Y^new^B_X(β)(i,i)=1 and if _B_Y^new^B_X(β)(j,i)≠ 0, for some j≠ i, then (x_i)≤(x_j)+2ϵ. By definition _B_Y^new^B_X(β)(j,i)≠ 0 also implies that (y_i^new)+ϵ<(x_j). As |(y_i^new)-(x_i)|<ϵ we conclude that (x_j)∈ ((x_i), (x_i)+2ϵ). Given our assumptions the only case where this could occur is case 10 with j=k, and here (y_j)=(y_j). However this implies (y_i^new)+ϵ<(x_j) as |(y_i^new)-(x_i)|<ϵ. This is a contradiction. We thus have shown that _B_Y^new^B_X(β)=.

The remaining two cases are the ones which stop the automatic decomposition of vineyard modules into a sum of vine modules. These are illustrated in Table <ref>.

The cases when, for a fixed basis of X, we can't guarantee to find a basis of Y so that the matrices of the interleaving maps are the identity. If the first of the two intervals corresponds to basis elements x_k and y_k and the second interval to x_l and y_l then we have x_l≤ x_k but y_l≰ y_k. [Cases 11 and 12: each depicts, for X and for Y, the two intervals corresponding to these basis elements, with a dashed line at the height of the coinciding critical values; the TikZ interval diagrams are omitted here.]

Let X and Y be persistence modules with bases B_X and B_Y and suppose that α:X→Y and β:Y→X form an ϵ-interleaving. Suppose that x_l≤ x_k but y_l≰ y_k and that all other elements of the preorder remain the same. Further suppose that either (a) (x_k)=(x_l) and |(y_k)-(y_l)|<ϵ and that all other pairwise differences of critical values are at least 2ϵ, or (b) (x_k)=(x_l) and |(y_k)-(y_l)|<ϵ and that all other pairwise differences of critical values are at least 2ϵ.
Let λ=_B_X^B_Y(α)(l,k)/_B_X^B_Y(α)(l,l) and e_lk^λ be the elementary matrix with λ in the (l,k) entry. Then _B_X^B_Y(α)e_lk^-λ is a basis transformation matrix for Y. Furthermore under the new basis B_Y^new we have _B_X^B_Y^new(α)=e^λ_lk and _B_Y^new^B_X(β)= e^-λ_lk. We will prove for (a) (case 11 in Table <ref>) and omit the proof for (b) (case 12 in Table <ref>) as it is highly analogous where we only need to switch the roles of births and deaths. For i≠ k, (_B_X^B^old_Y(α)e_lk^-λ )(j,i)= _B_X^B^old_Y(α)(j,i). If _B_X^B^old_Y(α)(j,i)≠ 0 then (y_j^old)≤(x_i) +ϵ <(y_j^old) ≤(x_i)+ϵ. Our pairing of intervals tells us that |(x_i)-(y^old_i)|<ϵ and |(x_i)-(y^old_i)|<ϵ. Together these inequalities imply (y_j^old) ≤(y^old_i)+2ϵ, (y^old_j) ≤(y^old_i)+2ϵ and (y^old_i) < (y_j^old). By construction of e^-λ_lk we have (_B_X^B^old_Y(α)e_lk^-λ )(j,k) =_B_X^B^old_Y(α)(j,k)-λ_B_X^B^old_Y(α)(j,l) =_B_X^B^old_Y(α)(j,k)-(_B_X^B_Y(α)(l,k)/_B_X^B_Y(α)(l,l)) _B_X^B^old_Y(α)(j,l) In particular (_B_X^B^old_Y(α)e_lk^-λ )(l,k)=0. Suppose that i=k but j≠ l. We have assumed that |(y^old_j)-(y^old_i)|>2ϵ and |(y^old_j)-(y^old_i)|>2ϵ. These strengthen (y_j^old) ≤(y^old_i)+2ϵ to (y_j^old) ≤(y^old_i) and (y^old_j) ≤(y^old_i)+2ϵ to (y^old_j) ≤(y^old_i). Thus _B_X^B^old_Y(α)(j,i)≠ 0 implies (y^old_j)≤(y^old_i)<(y^old_j)≤(y^old_j). The same reasoning in Proposition <ref> applies to show that _B_X^B^old_Y(α)(i,i)≠ 0 for all i. We thus have shown that (_B_X^B^old_Y(α)e_lk^-λ ) is a basis transformation matrix for Y. By Lemma <ref> we know that if for B_Y^new=(_B_X^B_Y^old(α)e_lk^-λ)(B_Y^old) we have _B_X^B_Y^new(α)=e^λ_lk. We now wish to show that _B_Y^new^B_X(β)=e^-λ_lk. Substituting _B_X^B_Y^new(α)=e^λ_lk into the equation in Lemma <ref> we see for each i≠ k that ϕ_(x_i)^(x_i)+2ϵ(x_i)=β_(x_i)+ϵ(α_(x_i)(x_i))= ∑_j _B_Y^new^B_X(β)(j,i) ϕ_(x_j)^(x_i)+2ϵ(x_j) and {ϕ_(x_j)^(x_i)+2ϵ(x_j)| (x_j)≤(x_i)+2ϵ<(x_j)} forms a basis of X_(x_i)+2ϵ. Immediately this implies that _B_Y^new^B_X(β)(i,i)=1. If _B_Y^new^B_X(β)(j,i)≠ 0, for some j≠ i, then both (x_i)≤(x_j)+2ϵ (by equation (<ref>)) and (y_i)>(y_j)+ϵ (by definition of matrix of an epsilon morphism) which this contradicts our assumptions so _B_Y^new^B_X(β)(j,i)= 0 for all j≠ i. For i=k Lemma <ref> combined with the above says ϕ_(x_k)^(x_k)+2ϵ(x_k)=β_(x_k)+ϵ(α_(x_k)(x_k)) = ∑_j (_B_Y^new^B_X(β)(j,k) +λ_B_Y^new^B_X(β)(j,l)) ϕ_(x_j)^(x_k)+2ϵ(x_j) =λϕ_(x_l)^(x_k)+2ϵ(x_l)+ ∑_j (_B_Y^new^B_X(β)(j,k) ϕ_(x_j)^(x_k)+2ϵ(x_j) This implies that _B_Y^new^B_X(β)(k,k)=1 and _B_Y^new^B_X(β)(k,l)=-λ. The remaining _B_Y^new^B_X(β)(j,k)=0 by the same contradiction argument above where i≠ k. There are in fact two more cases we need to consider which is when the number of intervals changes. We will have a different number of basis elements in X and Y. In terms of vineyards these correspond to the situation where a vine moves in or out of the diagonal. We cannot expect the matrices of our interleaving maps to be the identity but we can get the next best thing which is the projection map onto the common set for intervals. This will happen because the interleaving maps will naturally split into a direct sum of morphisms - one over the common intervals and one for the interval present in only one of the persistence modules. Let X̂, Y be persistence modules such that all the critical values are distinct and the pairwise distance between any pair of critical values within the same persistence module is greater than 2ϵ. Let N denote the number of intervals in Y. 
Set X=X̂⊕I[b,d) where d-b<2ϵ and the distance from b or d to any critical value of A is at least 2ϵ. Suppose that α:X→Y and β:Y→X form an ϵ-interleaving. Any basis B_X of X will partition into a basis for X̂ (which we will denote B_X̂) plus one other element. Without loss of generality order the basis elements of X̂ so those in X appear first. Choose an extended basis B_Y of Y consisting of a basis B̂_Y of Y alongside a single 0 element appearing last in index order. Then _B_X^B_Y(α) is an extended basis transformation matrix for B_Y. Under the new extended basis B_Y^new we have _B_X^B_Y^new(α)=diag(1,1, …, 1,0)=_B_Y^new^B_X(β) . We also have the restriction of _B_Y^B_X(β) to the first N columns and N rows is a basis transformation matrix for B_X̂, and the block matrix of _B_Y^B_X(β) with the 1 by 1 matrix with 1 is a a basis transformation matrix for B_X. Under this new basis _B^new_X^B_Y(α)=diag(1,1, …, 1,0)=_B_Y^B^new_X(β) . Our assumption that the distance from the end points of Î to critical value of A is at least 2ϵ, alongside the length of Î smaller that 2ϵ implies that for every interval in B, either Î is contained in that interval or it is disjoint to it. This implies that _B_X^B_Y(α)(N+1, i)=0= _B_X^B_Y(α)(i, N+1) for all i. As the N+1 element in the extended basis B_Y is a zero element we have definition _B_Y^B_X(β)(N+1, i)=0=_B_Y^B_X(β)(i, N+1) for all i When we restrict to first N elements of B_X and B_Y then we are in the same case as in Proposition <ref> and the same argument can be applied here to complete the proof. § VINE AND MATRIX REPRESENTATIONS OF VINEYARD MODULES In order to explore this decomposition further we will need find nice ways to represent vineyard modules. For this we will define a vine, basis and matrix representation. Let V=(V_t, α_s^t, β_t^s) be a vineyard modules over time interval [s_0,s_1]. A vine and matrix representation of V=(V_t, α_s^t, β_t^s) consists of a set of vines {γ_1, γ_2, …γ_N} each of which is defined over a connected subset of [s_0,s_1], alongside families of matrices {M(α)^s→ t| s_1≤ s<t≤ s_1} and {M(β)^t→ s| s_1≤ s<t≤ s_1}, such that there exists family of extended bases {B_t} (where B_t is a basis for V_t and respects the order of the vines) such that M(α)^s→ t=_B_s^B_t(α^s→ t) and M(β)^t→ s=_B_t^B_s(β^t→ s). We call {B_t} an associated family of bases for that representation. Notably this vine and matrix representation is not unique as it depends on the choice of bases. Given a vine and matrix representation there may also be many potential associated family of bases. However, we do at least know that if the vine and matrix representations agree then the vineyard modules are isomorphic. Let V=(V_t, α_V^s→ t, β_V^t→ s) and W=(W_t,α_W^s→ t, β_W^t→ s) be vineyard modules over the same vineyard with vines {γ_i}. Suppose that ({γ_i}, { M(α)^s→ t},{M(β)^t→ s}) is a vine and matrix representation of both V and W. Then V and W are isomorphic as vineyard modules. There must exist bases B_t^V of V_t and B_t^W of W_t such that _B^V_s^B^V_t(α_V^s→ t)=M(α)^s→ t=_B^W_s^B^W_t(α_W^s→ t) and _B^V_t^B^V_s(β_V^t→ s)=M(β_W)^t→ s=_B_t^B_s(β^t→ s) for all s<t. Set ρ_t: V_t →W_t by _B_t^V^B_t^W(ρ)=π_t where π_t is the diagonal matrix with 1 at the (i,i) entry with t∈(γ_i) and 0 otherwise. Observe that trivially we have that ρ_t commutes appropriately with all the interleaving and transition maps and determines a morphism ρ: V→W. This vineyard module morphism ρ is also clearly invertible with a symmetric construction of the inverse. 
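To make the bookkeeping in a vine and matrix representation concrete, the following is a minimal computational sketch (ours, not part of the paper's development). It assumes time has been sampled on a finite grid, stores one matrix per ordered pair of sample times for each of the two families, and checks the sufficient condition stated above: two vineyard modules sharing a vine and matrix representation are isomorphic. All names are illustrative.

# Minimal sketch (not from the paper): a container for a vine and matrix
# representation, assuming time is sampled on a finite grid and matrices are
# stored as numpy arrays indexed by pairs of sample times (s, t) with s < t.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple
import numpy as np

@dataclass
class VineMatrixRep:
    vines: List[Tuple[float, float]]                      # support (start, end) of each vine gamma_i
    M_alpha: Dict[Tuple[float, float], np.ndarray] = field(default_factory=dict)
    M_beta: Dict[Tuple[float, float], np.ndarray] = field(default_factory=dict)

    def same_as(self, other: "VineMatrixRep", tol: float = 1e-9) -> bool:
        # If two vineyard modules admit the same vine and matrix representation,
        # they are isomorphic (cf. the proposition above); this only checks
        # equality of the stored data on the common grid.
        if self.vines != other.vines:
            return False
        if self.M_alpha.keys() != other.M_alpha.keys() or self.M_beta.keys() != other.M_beta.keys():
            return False
        return all(np.allclose(self.M_alpha[k], other.M_alpha[k], atol=tol) and
                   np.allclose(self.M_beta[k], other.M_beta[k], atol=tol)
                   for k in self.M_alpha)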
Given the vine and matrix representations of a finite number of vineyard modules there is an obvious construction of a vine and matrix representation of the vineyard module of their direct sum via block matrices. Being able to write the matrices as block matrices description provides an easy sufficient condition for when a vineyard module decomposes. Let X=(X_t, α_s^t, β_t^s) be a vineyard module with vine and matrix representation ({γ_i}, {M(α^s→ t)}, {M(β^t→ s)}). Suppose that for all s_0≤ s<t ≤ s_1 both M(α^s→ t) and M(β^t→ s) satisfy block diagonal matrix with block index sets S_1, S_2, … S_m. Then we can construct vineyard modules X_1, X_2, …, X_m with X≅⊕_j=1^mX_j where X_j is has vine and matrix representations ({γ_i| i∈ S_j}, {π_S_j(M(α^s→ t))}, {π_S_j( M(β^t→ s))}). Here π_S_j(A) is the restriction of matrix A to the coordinates in S_j. Finding necessary conditions for decomposition in terms of the vine and matrix representation is much harder. Depending on the choice of associated bases, we can not expect that a vineyard module which is the direct sum of vine modules will necessarily have matrices that will split up into a block diagonal form. We will need to find ways to transform the bases of the persistence modules over the different t so that the matrices are of a nice form. We now wish to use these basis transformations to simplify the matrices of the interleaving maps within a vineyard module. The plan is to fix the basis at t_0 and then transforms the bases in a forward or backward direction. This is complicated by the vines within a vineyard module having different supports. Let π_S denote the projection matrix onto the coordinates in set S. That is, the diagonal matrix with 1 for each index in set S and 0 otherwise. Given the structure of the vineyard modules we only need to proscribe how to change the basis over smaller segments. To construct these segments we need to consider the locations where birth and or death values coincide, or a new interval appears/disappears (philosophically its own birth and death values coincide). We define these time values as critical. A vineyard module segment is the restriction of a vineyard module to a time interval [T_0, T_1] such that there are no critical times in (T_0, T_1) and one of the three conditions hold: * neither T_0 nor T_1 are critical times and the |T_0-T_1| is bounded above by ϵ/4 where ϵ is the smallest distance between the distinct birth and death values within T_0 or within T_1, * one of T_0 or T_1 is critical (label this T_i) and |T_0-T_1| is bounded above by ϵ/4 where ϵ is the smallest distance between the distinct birth and death values in T_i. Note that our simplifying assumptions guarantee that the number of times that the endpoints of the vines {γ_i} are not all distinct is finite. In the case where an interval appears or disappears we consider the limiting value as one of the distinct birth/death values. The partition of a vineyard module into segments in dependent only on the set of vines and not on the interleaving maps. We wish to simplify the matrix representative of the transition maps within a vineyard module by progressively changing the basis of the persistence modules going forward or going backwards. There will be time values where we cannot guarantee that these simplifications result in diagonal matrices. This leads to the definition of forwards and backwards incompatibility. Let V be a vineyard module. 
We say that V is forwards incompatible at s by vines (γ_k, γ_l) if γ_l(s)≤γ_k(s) but γ_l(t)≰γ_k(t) for all t∈ (s, s+δ) for δ>0 sufficiently small. We say that s is forwards compatible is it is not forwards incompatible. The forwards incompatible cases are shown in Table <ref> with X=V_t and Y=V_s for t>s sufficiently close, for s forwards incompatible. The definition of backwards compatible and backwards incompatible is completely symmetric - traversing the vineyard in the opposite direction. We say V is backwards incompatible at t by vines (γ_k, γ_l) if γ_l(t)≤γ_k(t) but γ_l(s)≰γ_k(s) for all s∈ (t-δ, t) for δ>0 sufficiently small. We say that t is backwards compatible is it is not backwards incompatible. Note that for a segment [T_m ,T_m+1] the only potentially forwards incompatible value is T_m. Let V=(V_t, {α^s→ t}, {β^t→ s}) be a vineyard module segment over [T_m, T_m+1]. And {B_t^old} an initial choice of basis for each V_t. Let A:=_B_T_m^new^B_T_m+1^old(α^T_m → T_m+1) and à the matrix A with 1 added to any non-zero diagonal element. We say {B^new_t} is a forwards simplified family of bases if * B^new_T_m=B^old_T_m, * B^new_T_m+1=Ã(B_T_m+1^old) and * _B^new_s^B^new_t(α^s→ t)=π_S_s∩ S_t for all t>s sufficiently close, when T_m is forwards compatible, and if T_m forwards incompatible by vines (γ_k, γ_l) and λ=A(l,k)/A(l,l) then * B^new_T_m=B^old_T_m * B^new_T_m+1=(A e^-λ_lk)(B_T_m+1^old) * _B^new_s^B^new_t(α^s→ t)=π_S_s∩ S_t for all T_m<s<t with s,t sufficiently close, and * _B^new_s^B^new_t(α^T_m→ t)=e^λ_lkπ_S_T_m for all t>T_m sufficiently close. The definition of backwards simplified is symmetric. Note that for a segment [T_m ,T_m+1] the only potentially backwards incompatible value is T_m+1. Let V=(V_t, {α^s→ t}, {β^t→ s}) be a vineyard module segment over [T_m, T_m+1]. And {B_t^old} an initial choice of basis for each V_t. Let A=_B_T_m+1^new^B_T_m^old(β^T_m+1→ T_m) and à the matrix A with 1 added to any non-zero diagonal element. We say {B^new_t} is a backwards simplified family of bases if * B^new_T_m+1=B^old_T_m+1, * B^new_T_m=Ã(B_T_m^old) and * _B^new_t^B^new_s(β^t→ s)=π_S_s∩ S_t for all t>s sufficiently close, when T_m+1 is backwards compatible, and if T_m+1 backwards incompatible by vines (γ_k, γ_l) and λ=A(l,k)/A(l,l) then * B^new_T_m+1=B^old_T_m+1, * B^new_T_m=(A e^-λ_lk)(B_T_m^old) * _B^new_t^B^new_s(β^t→ s)=π_S_s∩ S_t for all s<t<T_m+1 with s,t sufficiently close, and * _B^new_T_m+1^B^new_t(β^T_m+1→ t)=e^λ_lkπ_S_T_m+1 for all t<T_m+1 sufficiently close. Given a vineyard module we can partition it into segments {[T_m, T_m+1]}_m=1^M-1. We wish to forward simplify progressively over the segments from [T_0, T_1] through to [T_M-1, T_M]. We then can backwards simplify back again starting with [T_M-1, T_M] and progressively back to [T_0, T_1]. The final family of bases will be call forwards and then backwards simplified. Given the symmetry in the definitions of forward and backward simplification it will be sufficient to show it is always possible to forward simplify a segment. Let V=({V_t}, {α^s→ t}, {β^t→ s}) be a vineyard module segment over [T_m, T_m+1]. Then we can forward simplify V. We can split the proof into the different cases depending on whether T_m is critical and forward compatible, T_m is critical and forwards incompatible, T_m+1 is critical, or neither T_m nor T_m+1 is critical. 
We will omit the vineyard parameter from the transition maps within the persistence modules (denoting all by ϕ) as we already have an overwhelming abundance of indices and which persistence module the transition module is within can always be inferred from context using the location of the input. Denote by {B_t^old} the choice of basis for each V_t before forwards simplifying. Let A=_B_T_m^new^B_T_m+1^old(α^T_m → T_m+1) and let à be the matrix A with 1 added to any non-zero diagonal element. §.§.§ Case where neither T_m nor T_m+1 is critical: If neither T_m nor T_m+1 are critical then for all t∈ [T_m, T_m+1] the critical values are all distinct. Observe that S_t is the same for all t∈ [T_m, T_m+1]. Set B^new_T_m=B^old_T_m. Both V_T_m and V_t are persistence modules whose critical values are distinct and the difference between pairs of critical values within a persistence module is greater than 4|T_m-t|. Furthermore, α^T_m → t and β^t→ T_m form an |t-T_m| interleaving. This means that we can apply Proposition <ref> to say that if we can set B_t^new to be _B_T_m^new^B^old_t(α^T_m→ t)B_t^old as the new basis for V_t then both _B_T_m^new^B_t^new(α^T_m→ t) and _B_t^new^B_T_m^new(β^t→ T_m) are the identity when restricted to the vines in S_T_m. In particular for t=T_m we have B^new_T_m+1=A(B_T_m+1^old). Since the support of the vines is the same throughout the segment A(B_T_m+1^old)=Ã(B_T_m+1^old). It remains to show that for s< t that _B_s^new^B_t^new(α^s→ t)=π_T_m=_B_t^new^B_s^new(β^t→ s). Denote the basis elements in B_t^new by {x_i^t}. It is sufficient to show that α^s→ t_(x_i^s)(x_i^s)=ϕ_(x_i^t)^(x_i^s)+|s-t|(x_i^t). Let s,t∈(T_m, T_m+1] with s<t. Let x_i^s be a non-zero basis element in B_s^new. Diagram chasing we can show that ϕ_(x_i^s)+|s-t|^(x_i^s)+|s-T_m|+|t-T_m|(α^s→ t_(x_i^s)(x_i^s)) =α^T_m → t_(x_i^s)+|s-T_m|(β^s→ s_n_(x_i^s)(x_i^s)) =α^T_m → t_(x_i^s)+|s-T_m|(ϕ_(x_i^s_n)^(x_i^s)+|s-s_i|(x_i^s_n)) =ϕ_(x_i^T_m)+|t-s_n|^(x_i^s)+|s-T_m|+|t-T_m|(α_(x_i^T_m)^T_m→ t(x_i^T_m)) =ϕ_(x_i^T_m)+|t-T_m|^(x_i^s)+|s-T_m|+|t-T_m|(ϕ_(x_i^t)^(x_i^T_m)+|t-T_m|(x_i^t)) =ϕ_(x_i^s)+|s-t|^(x_i^s)+|s-T_m|+|t-T_m|(ϕ_(x_i^t)^(x_i^s)+|s-t|(x_i^t)) By assumption there are no critical heights of V_t within the interval [(x_i^s)+|s-t|,(x_i^s)+|s-T_m|+|t-T_m|]⊂ ((x_i^t), (x_i^t) + δ) and so we can infer that α^s→ t_(x_i^s)(x_i^s)=ϕ_(x_i^t)^(x_i^s)+|s-t|(x_i^t). Since this holds for all i∈ S_T_m we conclude that _B_s^new^B_t^new(α^s→ t)=π_S_T_m. §.§.§ Case where T_m+1 is critical: Observe that S_t is the same for all t∈ [T_m, T_m+1) and that S_T_m+1∩ S_t=S_T_m+1 for all t∈ [T_m, T_m+1]. This is because the only potential change in support can be from the disappearance of a vine at time T_m+1. Set B_T_m^new=B_T_m^old. By Proposition <ref> (if an interval disappears at T_m+1) or Proposition <ref> (otherwise) we can set B_T_m+1^new to be A(B_T_m+1^old) (noting this is the same as Ã(B_T_m+1^old)), and that under this new basis _B_T_m^new^B_T_m+1^new(α^T_m→ T_m+1)=π_S_T_m∩ S_T_m+1=_B_T_m+1^new^B_T_m^new(β^T_m+1→ T_M). Define the function f:[T_m, T_m+1]→ [0, ∞) by f(t) as the smallest distance between any pair of critical values in V_t. Note that f is 2-Lipschitz as the vineyard is assumed to be 1-Lipschitz, f(T_m+1)=0 and f(t)>0 for t≠ T_m+1. In particular this implies 0<f(t)<2|T_m+1-t| and for any t∈ [T_m, T_m+1) we have t+f(t)/4∈ [T_m, T_m+1). Construct the strictly increasing sequence {s_n}⊂ [T_m, T_m+1) with s_0:=T_m and s_n:=s_n-1+ f(s_n-1)/4. 
As {s_n} is a bounded increasing sequence it must converge to some limit which we will denote L∈[T_m , T_m+1]. Suppose that L<T_m+1 which by assumption implies f(L)>0. Choose k such that s_k>L-f(L)/4. Since |f(s_k)-f(L)|<2|L-s_k| we have f(s_k)>f(L)/2. This implies that s_k+1=s_k+f(s_k)/4> L-f(L)/16+f(L)/8=L+f(L)/16>L which is a contradiction as {s_n} is increasing. We conclude that lim_n→∞ s_n= T_m+1. Thus every s∈ [T_m, T_m+1) will satisfy s∈ [s_n, s_s+1) for some n. We will consider s<t to be sufficiently close if they lie in the same or adjacent subintervals. We can define B^new_t for t∈ (s_n , s_n+1] inductively over n, using the same arguments in to case where neither T_m nor T_m-1 are critical as we can note that [s_n, s_n+1] is satisfies the definition of segment by construction. This implies that by the previous case that _B_s^new^B_t^new(α^s→ t)=π_T_m=_B_t^new^B_s^new(β^t→ s) for s<t and both in [s_n, s_n+1]. Now suppose that s∈(s_n-1, s_n] and t∈(s_n, s_n+1]. We have already shown α^s→ s_n_(x_i^s)(x_i^s)=ϕ_(x_i^s_n)^(x_i^s)+|s-s_n|(x_i^s_n) and α^s_n→ t_(x_i^s_n)(x_i^s_n)=ϕ_(x_i^t)^(x_i^s_n)+|t-s_n|(x_i^t). As the interleaving maps commute we combine to say α^s→ t_(x_i^s)(x_i^s)=α^s_n→ t_(x_i^s)+|s_n-s|(α^s→ s_n_(x_i^s)(x_i^s))= ϕ_(x_i^t)^(x_i^s)+|s-t|(x_i^t). Note that by construction of our sequence {s_n} we have (x_i^t)>(x_i^s)+|s-t|. As this holds for all vines γ_i with i∈ S_t we conclude _B_s_n^new^B_t^new(α^s_n→ t)=π_S_t for s<t<T_m+1 sufficiently close. We want to show that _B_s^new^B_T_m+1^new(α^s→ T_m+1)=π_S_T_m for all s∈ [T_m, T_m+1]. We prove this inductively over n for s∈ (s_n-1, s_n] with the base case of n=0 the singleton {T_m} true by construction. Let γ_i∈ S_T_m+1 and thus (γ_i^t)-(γ_i^t)>|T_m-T_m+1 for all t∈ [T_m, T_m+1]. As the interleaving maps commute we know α_(x_i^s_n)^s_n → T_m+1(x_i^s_n)=α_(x_i^s_n)+|s-s_n|^s→ T_m+1(α_(x_i^s_n)^s_n → s(x_i^s_n)) and thus ϕ_(x_i^T_m+1)^(x_i^s_n)+|T_m+1-s_n|(x_i^T_m+1) =α_(x_i^s_n)+|s-s_n|^s→ T_m+1(ϕ_(x_i^s)^(x_i^s_n)+|s-s_n|(x_i^s)) =∑_j _B_s^new^B_T_m+1^new(α^s→ T_m+1)(j,i) ϕ_(x_j^T_m+1)^(x_i^s_n)+|T_m+1-s_n|(x_j^T_m+1) As no critical values in V_T_m+1 occur in the height range of ((x_i^T_m+1), (x_i^T_m+1)+|T_m-T_m+1|) we infer that _B_s_n^new^B_T_m+1^new(α^s_n→ T_m+1)=π_S_T_m+1. §.§.§ Case where T_m is critical and forwards compatible: Observe that S_t is the same for all t∈ (T_m, T_m+1] and that S_T_m∩ S_t=S_T_m for all t∈ [T_m, T_[m+1]. Set B_T_m^new=B_T_m^old and by Proposition <ref> or Proposition <ref> (depending on the type of critical behaiviour) we can set B_T_m+1^new to be _B_T_m^new^B^old_T_m+1(α^T_m→ T_m+1)B_T_m+1^old as the new basis for V_T_m+1, and under this new basis _B_T_m^new^B_T_m+1^new(α^T_m→ T_m+1)=π_S_T_m∩ S_T_m+1=_B_T_m+1^new^B_T_m^new(β^T_m+1→ T_m). We now use the same process as in the case where T_m is critical but in the reverse direction. Define the sequence {s_n} inductively by s_0=T_m+1 and s_n=s_n-1-f(s_n)/4. This sequence is bounded and strictly decreasing. We can show it limits to T_m analogously to the above case. We can then inductive define the new bases for the persistence module. For t∈ [s_n+1, s_n) we apply Proposition <ref> with X=V_s_n and Y=V_t and, slightly confusingly, α=β^s_n→ t and β=α^t → s_n. The calculations showing that the matrices of the various interleaving maps are all π_S_s∩ S_t is highly analogous and thus we will omit them here. §.§.§ Case where T_m is critical and forwards incompatible: Observe that S_t is the same for all t∈ [T_m, T_m+1]. 
Let (γ_k, γ_l) denote the vines that make T_m forwards incompatible. Set λ = _B_T_m+1^new^B_T_m^old(β^T_m+1→ T_m)(l,k)/_B_T_m+1^new^B_T_m^old(β^T_m+1→ T_m)(l,l). Set B_T_m^new=B_T_m^old. By Proposition <ref> we know _B_T_m^new^B_T_m+1^old(α^T_m→ T_m+1)e_lk^-λ is a basis transformation matrix for V_T_m+1 and, furthermore, that under the new basis B_T_m+1^new we have _B_T_m^new^B_T_m+1^new(α^T_m→ T_m+1)=e^λ_lk and _B_T_m+1^new^B_T_m^new(β^T_m+1→ T_m)= e^-λ_lk. We now use the same process as in the case where T_m is critical and forward compatible. We use the same sequence {s_n} inductively defined by s_0=T_m+1 and s_n=s_n-1-f(s_n)/4 which again limits to T_m. We then inductively over define the new bases for the persistence module for t∈ [s_n+1, s_n) using Proposition <ref>. The same arguments show that _B_s^new^B_t^new(α^s→ t)=π_T_m=_B_t^new^B_s^new(β^t→ s) for s<t sufficiently close and both in [T_m, T_m+1). We want to show that _B_s^new^B_T_m+1^new(α^s→ T_m+1)=π_S_T_me^λ_lk for all s∈ [T_m, T_m+1]. We prove this inductively over n for s∈ (s_n-1, s_n] with the base case of n=0 the singleton {T_m} true by construction. Let γ_i∈ S_T_m+1 and thus (γ_i^t)-(γ_i^t)>|T_m-T_m+1 for all t∈ [T_m, T_m+1]. If i≠ k then as the interleaving maps commute we know α_(x_i^s_n)^s_n → T_m+1(x_i^s_n)=α_(x_i^s_n)+|s-s_n|^s→ T_m+1(α_(x_i^s_n)^s_n → s(x_i^s_n)) and thus ϕ_(x_i^T_m+1)^(x_i^s_n)+|T_m+1-s_n|(x_i^T_m+1) =α_(x_i^s_n)+|s-s_n|^s→ T_m+1(ϕ_(x_i^s)^(x_i^s_n)+|s-s_n|(x_i^s)) =∑_j _B_s^new^B_T_m+1^new(α^s→ T_m+1)(j,i) ϕ_(x_j^T_m+1)^(x_i^s_n)+|T_m+1-s_n|(x_j^T_m+1). For i=k, we instead get ϕ_(x_k^T_m+1)^(x_k^s_n)+|T_m+1-s_n|(x_k^T_m+1) +λϕ_(x_l^T_m+1)^(x_l^s_n)+|T_m+1-s_n|(x_l^T_m+1) =α_(x_i^s_n)+|s-s_n|^s→ T_m+1(ϕ_(x_i^s)^(x_i^s_n)+|s-s_n|(x_i^s)) =∑_j _B_s^new^B_T_m+1^new(α^s→ T_m+1)(j,i) ϕ_(x_j^T_m+1)^(x_i^s_n)+|T_m+1-s_n|(x_j^T_m+1) As no critical values in V_T_m+1 occur in the height range of ((x_i^T_m+1), (x_i^T_m+1)+|T_m-T_m+1|) we infer that _B_s_n^new^B_T_m+1^new(α^s_n→ T_m+1)=π_S_T_m+1e^λ_lk. It would be possible to develop algorithms for computing forward and backwards simplified vine and matrix representations given an input vine and matrix representation over a sufficiently dense discretisation, but this is outside the scope of this paper. § VINEYARD AND VECTOR REPRESENTATION If we require that the associated family of basis for a vine and matrix representation has been forwards and then backwards simplified then we know that the matrices must be in a very restricted form. Assuming that s<t are sufficient close we know that the matrices are diagonal for almost all pairs s,t with the entries 0 or 1 in a manner determined by the underlying vineyard. The only matrices that are an exception to this are where t is not backwards compatible and s<t. Here we can have an additional non-zero entry, but only at the (l,k) entry where t is backwards incompatible by vines (γ_k,γ_l). Let λ(t) denote this (l,k) entry. Given a vineyard module V with and underlying V. We can summarise all the information in the vine and matrix representation (for forwards and then backwards simplified associated bases) by the sequence λ:=(λ(t_1),λ(t_2),…λ(t_K)) where t_1<t_2<… <t_K are the times where the vineyard V is backwards incompatible. We call the pair (V, λ) a vineyard and vector representation of vineyard module V. By Proposition <ref> we know that whenever the vineyard and vector representations of V and W agree then V and W must be isomorphic. However, we can not in general expect uniqueness of this representation. 
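As a small illustration of how the vector part of a vineyard and vector representation can be read off in practice, the following sketch (our own, with hypothetical helper names) assumes the associated family of bases has already been forwards and then backwards simplified, so that every stored matrix is a 0/1 projection except possibly for a single extra entry at position (l,k) at each backwards-incompatible time.

# Illustrative sketch (hypothetical helper, not from the paper): reading off the
# vector part of a vineyard and vector representation from simplified matrices.
import numpy as np

def extract_lambda_vector(simplified_matrices, incompatible_times):
    # simplified_matrices: dict mapping a backwards-incompatible time t to the
    # matrix (under the simplified bases) of the transition map for the pair of
    # nearby times straddling t; incompatible_times: list of (t, k, l) with the
    # vines (gamma_k, gamma_l) responsible for the incompatibility.
    # Returns the sequence lambda(t_1), ..., lambda(t_K).
    lam = []
    for t, k, l in sorted(incompatible_times):
        M = np.asarray(simplified_matrices[t])
        lam.append(M[l, k])          # the only entry allowed off the diagonal
    return lam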
There is one important case where we have a unique vineyard and vector representation which is where the vineyard module is trivial. By trivial we mean it is isomorphic to the direct sum of vineyard modules. Notably this provides a necessary and sufficient condition for a vineyard module to be trivial. Let V be a vineyard modules with simplified representation λ:=(λ(t_1),λ(t_2),…λ(t_K)). Then V is isomorphic to the direct sum of vine modules if and only if λ(t_k)=0 for all k. Note that for the direct sum of vine modules λ(t_k)=0 for all k. We can then apply Proposition <ref> to say that if λ(t_k)=0 for all k then V is isomorphic to a direct sum of vine modules. We know will wish to prove the other direction and will be assuming that V is isomorphic to a direct sum of vine modules. We now need to set up substantial notation. Let {[T_m, T_m+1}]_m=0^N-1 be a segmentation of the underlying vineyard into segments. To reduce the number of indices we will write (γ_j^m) for (γ_j^T_m) and (γ_j^m) for (γ_j^T_m). For each m where (γ_j^m)≤(γ_i^m)<(γ_j^m)≤(γ_i^m), let τ_m(j,i) be the largest n≤ m where the intervals corresponding to γ_i and γ_j are disjoint at T_n (with value -∞ if not disjoint at any previous time). Let {γ_i} denote the vines in the underlying vineyard of V. Suppose that ρ:V→W is an isomorphism where W=⊕I[γ_i] is the direct sum of vine modules equipped with the standard basis. By construction the forward and backwards simplification of W leaves the basis elements unchanged. Denote the basis of W_T_m by {B_m^W}. Let B̂_m denote the transformed bases of V_T_m after forwards simplifying, and B_m be the resulting bases of V_T_m after forwards and then backwards simplifying. Let M̂_m=_B̂_m^B_m^W(ρ_T_m) and M_m=_B_m^B_m^W(ρ_T_m) denote the corresponding basis transformation matrices. If M̂_m(j,i)≠ 0 then (γ_j^n)≤(γ_i^n)<(γ_j^n)≤(γ_i^n) for all n∈ (τ_m(j,i), m). We will prove this claim by induction. The base case holds as ρ_T_0:V_T_0→W_T_0 is a morphism so M̂_0(j,i)≠ 0 implies (γ_j^0)≤(γ_i^0)<(γ_j^0)≤(γ_i^0). Suppose that M̂_m+1(j,i)≠ 0. This implies that (γ_j^m+1)≤(γ_i^m+1)<(γ_j^m+1)≤(γ_i^m+1). If γ_i and γ_j are disjoint at T_m we are done (as m=τ_m+1(j,i)) so suppose that τ_m+1(j,i)<m. Note by definition this implies τ_m+1(j,i)=τ_m(j,i). We now have to consider the different local cases. Set ϵ=T_m+1-T_m. If T_m is forward compatible with M̂_m+1(j,i)≠ 0 and m<τ_m+1(j,i) we know (γ_j^m+1)>(γ_i^m)+ϵ by considering the cases in Table <ref> and the restriction on ϵ in our definition of segment. Since M̂_m+1(p,i) ϕ_(v_p^m+1)^(w_i^m)+ϵ(v_p^m+1)=ρ_m+1(α_X^T_m → T_m+1(w_i^m))=α_V^T_m → T_m+1(ρ_m(w_i^m))=∑_p M̂_m(p,i) ϕ_(v_p^m+1)^(w_i^m)+ϵ(v_p^m+1) we conclude that M̂_m(j,i)=M̂_m+1(j,i)≠ 0. Our inductive assumption then implies (γ_j^n)≤(γ_i^n)<(γ_j^n)≤(γ_i^n) for all n∈ (τ_m+1(j,i), m). Now suppose that T_m is forwards incompatible with respect to vines (γ_k, γ_l) and let λ be such that _B̂_m^V^B̂_m+1^V(α^V)=e^λ_lkπ_S where S is the set of vines whose support contains T_m. Note that by construction we have _B̂_m^W^B̂_m+1^W(α^W)=π_S. Since ρ commutes with the interleaving maps, and no deaths occur within 2ϵ of any births, we know that M̂_m+1=e^-λ_lkM̂_m as matrices. For i≠ k we have M̂_m+1(j,i)=M̂_m+1(j,i) and so M̂_m+1(j,i) ≠ 0 implies M̂_m+1(j,i)≠ 0. Thus we can apply the inductive hypothesis to say.(γ_j^n)≤(γ_i^n)<(γ_j^n)≤(γ_i^n) for all n∈ (τ_m+1(j,i), m). For i=k and j=l we know M̂_m+1(l,k)= 0 (as ρ_m+1 is a morphism) so there is nothing to prove here. 
We know M̂_m+1(l,k)= M̂_m(l,k)-λM̂_m(l,l). As ρ_m+1 is a morphism we know that M̂_m+1(l,k)=0 and thus λ =M̂_m(l,k)/M̂_m(l,l). Finally consider i=k and j≠ l and suppose that M̂_m+1(j,k)≠ 0. If M̂_m(j,k)≠ 0 then the inductive hypothesis can be used, so suppose further that M̂_m(j,k)=0. We have M̂_m+1(j,k)= M̂_m(j,k)- M̂_m(j,l)M̂_m(l,k)/M̂_m(l,l) which, with are current suppositions, implies that both M̂_m(j,l)≠ 0 and M̂_m(l,k)≠ 0. By the inductive hypothesis with M̂_m(j,l)≠ 0 and M̂_m(l,k)≠ 0 we know that (γ_j^n)≤(γ_l^n)<(γ_j^n)≤(γ_l^n) for all n∈ (τ_m(j,l), m) and (γ_l^n)≤(γ_k^n)<(γ_l^n)≤(γ_k^n) for all n∈ (τ_m(l,k), m). We can show that τ_m(j,k)>τ_m(j,l) and τ_m(j,k)>τ_m(l,k) by sandwiching of intervals. If n>τ_m(j,k) then (γ_l^n)≤(γ_k^n)<(γ_l^n)≤(γ_k^n) and (γ_j^n)≤(γ_l^n)<(γ_j^n)≤(γ_l^n). Combined these imply (γ_j^n)≤(γ_k^n)<(γ_j^n)≤(γ_k^n). Having covered all the cases we have finished proving by induction that if M̂_m(j,i)≠ 0 then (γ_j^n)≤(γ_i^n)<(γ_j^n)≤(γ_i^n) for all n∈ (τ_m(j,i), m). If _B_m^V^B_m^W(ρ_T_m)(j,i)≠ 0 then (γ_j^T_n)≤(γ_i^T_n)<(γ_j^T_n)≤(γ_i^T_n) for all n∈ (τ_m(j,i), m). This claim can also be proved by induction. The base case holds as M_N=M̂_N by definition of backwards simplification, and using Claim <ref>. Assume the inductive hypothesis for m and we wish to show it also holds for m-1. Let ϵ=T_m-T_m-1. Suppose that T_m is backwards compatible. By the construction of the backwards simplified basis we have ∑_j M_mψ_(w_j^m-1)^(v_i^m)+ϵ(w_j^m-1)=β_W^T_m→ T_m-1(ρ_T_m(v_i^m))=ρ_T_m-1(β_V^T_m→ T_m-1(v_i^m))=∑_j M_m-1ψ_(w_j^m-1)^(v_i^m)+ϵ(w_j^m-1) For (j,i) with (γ_j^m-1)≤(γ_i^m-1)<(γ_j^m-1)≤(γ_i^m-1) and (γ_j^m-1)>(γ_i^m-1)+ϵ we also have τ_m-1(j,i)=τ_m(j,i). By comparing coefficients we infer M_m-1(j,i)=M_m(j,i). We then can use the inductive assumption to say if _B_m-1^V^B_m-1^W(ρ_T_m-1)(j,i)≠ 0 then (γ_j^T_n)≤(γ_i^T_n)<(γ_j^T_n)≤(γ_i^T_n) for all n∈ (τ_m-1(j,i), m-1). If there is a pair (l,k) with (γ_l^m-1)≤(γ_k^m-1)<(γ_l^m-1)≤(γ_k^m-1) but (γ_l^m-1)≤(γ_k^m-1)+ϵ then we must be in the case where (γ_l^m)=(γ_k^m) (see case 10 in Table <ref> with X=V_T_m and Y=V_T_m-1). Here we will need to use the construction of the backwards simplified basis. For the sake of clarity we will assume for the moment that all the vines have T_m and T_m-1 in their support - so we have a bases rather than extended bases and transformation matrices of these matrices are invertible. Extending the argument to extended bases is left as an exercise. Denote by β_V: V_T_m→V_T_m-1 the morphism within V. Let A=_B_m^B̂_m() be the matrix corresponding to the change of basis. Since T_m-1 is forwards compatible and T_m is backwards compatible we know β_V(v_i^m)=v_i^m-1 and β_V(v̂_i^m)=v̂_i^m. We thus have β_V(v_i^m)=β_V(∑_j A(j,i)ψ_(γ_j^m)^(γ_i^m) (v̂_j^m))=∑_j A(j,i) ψ_(γ_j^m-1)^(γ_i^m)+ϵ(v̂_j^m-1) and since A(l,k)=0 this implies _B_m^V^B̂_m-1^V(β_V)=A. When backwards simplifying the basis B_m-1^V is constructed by computing the matrix _B_m^B̂_m-1(β_V) and then using this as a basis transformation matrix and applying it to B̂_m-1. In short, B_m-1=A(B̂_m-1). As all the basis transformation maps commute (when the domain and codomain bases match appropriately) we can show that M_m-1=M̂_m-1 A and M_m= M̂_m A as matrices. Combining all these equations together we have M_m-M_m-1=(M̂_m-M̂_m-1) A. We also know that for (j,i)≠ (l,k) that both M_m(j,i)=M_m-1(j,i) and M̂_m(j,i)=M̂_m-1(j,i). If M_m-1(l,k)≠ 0 then, since M_m(l,k)=0, we have M_m-M_m-1≠ 0. 
This implies so (M̂_m-M̂_m-1)A≠ 0 and since A is invertible this implies M̂_m-M̂_m-1≠ 0. Since the only entry where M̂_m and M̂_m-1 can differ is (l,k) this implies that M̂_m-1(l,k)≠M̂_m(l,k)=0. We thus can use Claim <ref> to say that (γ_j^T_n)≤(γ_i^T_n)<(γ_j^T_n)≤(γ_i^T_n) for all n∈ (τ_m(l,k),m-1). Now suppose that T_m backwards incompatible with respect to (γ_k,γ_l). This means T_m=t_n for some n that is it is a critical time for our vector representation. By our inductive assumption this implies _B_m^V^B_m-1^V(β_T_m)(l,k)=0. Let λ(=λ(t_n)) be such that _B̂_m^V^B̂_m-1^V(α^V)=e^λ_lkπ_S where S is the set of vines whose support contains T_m. Note that by construction we have _B̂_m^W^B̂_m-1^W(α^W)=π_S. Since ρ commutes with the interleaving maps, and no deaths occur within 2ϵ of any births, we know that M̂_m-1=e^-λ_lkM̂_m as matrices. We know M̂_m-1(l,k)= 0 (as ρ_m+1 is a morphism) and M̂_m-1(l,k)= M̂_m(l,k)-λM̂_m(l,l). As M̂_m(l,k)=0 and M̂_m(l,l)≠ 0 we conclude that λ = 0. This now implies M̂_m-1=M̂_m as matrices and we can we can apply the inductive assumption to finish this case. Notably in the process of proving Claim <ref> we proved that the vector in our vineyard and vector representation is the zero vector. § AN INDECOMPOSABLE VINEYARD MODULE WITH TWO VINES In this section we present the simplest example of a vineyard module with two vines which is not decomposable into two vine modules. For the interests of clarity we restrict to _2 as the field for homology calculations but this example will hold for general fields. The underlying space is a lying in the plane which we split into four sets: A={z<1}, B={z=1}, C={1<z<2} and D={z=2}. We have K=A∪ B∪ C∪ D and f_t:K→ continuous which respect to t∈ [0,10] and for each t we have f_t is constant on each of A, B, C, D. f_t(A)=21- t, f_t(B)=14-t, f_t(C)=15+t, f_t(D)=t. Note that all sublevel sets are closed as f_t(B)≤ f_t(A) and f_t(C) ≤ f_t(B), f_t(D) for all t. We have two times where critical heights coincide which is at t=3 (with f_3(A)=f_3(C)) and at t=7 (with f_7(B)=f_7(D)). The continuously changing sublevel set filtration defines a vineyard module where the persistence modules V_t is the one for the 1-homology dimension persistent homology of the filtration by f_t, and the interleaving maps are defined by the natural inclusion maps f_t^-1(-∞, h]⊂ f_s^-1(∞, h+|s-t|] for h∈, and s,t∈ [0,10]. We can depict the underlying vineyard via its barcode representations at periodic locations in Figure <ref>. Over the interval [0, 3) there is only one choice of basis for V_t, namely x_1^t=[B] (with ([B])=f_t(B) and ([B])=f_t(A)) and x_2^t=[D+B] (with ([D+B])=f_t([D]) and ([D+B])=f_t(C)). Over the interval (7, 10] there is only one choice of basis for V_t, namely x_1^t=[B] (with ([B])=f_t(B) and ([B])=f_t(A)) and x_2^t=[D] (with ([D])=f_t([D]) and ([D])=f_t(C)). In the middle section, for t∈ [3,7] there are two different possible choices of basis; {[B], [D+B]} or {[B], [D]}. When we forward simplify we get basis over this middle range corresponding to {[B], [D+B]} and then when we then backwards simplify we get instead the basis corresponding to {[B], [D]}. After forwards and backwards simplifying we have for small δ, β^3→ 3-δ(x_2^3)=x_2^3-δ+x_1^3-δ and β^3→ 3-δ(x_1^3)=x_1^3-δ. In matrix form: _B_3^B_3-δ(β)=[ 1 1; 0 1; ] implying the vector in the vine and vineyard vector representation is (1) and the vineyard module is not isomorphic to the direct sum of vine modules. 
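The crossing times in this example can be checked numerically. The following short sketch (illustrative only; the function and variable names are ours) evaluates the four level values f_t(A), f_t(B), f_t(C), f_t(D) over t in [0,10] and recovers the two times t=3 and t=7 at which critical values coincide.

# A small check of the example above (sketch only): the four level values as
# functions of t and the two times at which critical values coincide.
import numpy as np

def levels(t):
    return {"A": 21 - t, "B": 14 - t, "C": 15 + t, "D": t}

ts = np.linspace(0, 10, 1001)
cross_AC = ts[np.argmin(np.abs((21 - ts) - (15 + ts)))]   # f_t(A) = f_t(C)
cross_BD = ts[np.argmin(np.abs((14 - ts) - ts))]          # f_t(B) = f_t(D)
print(cross_AC, cross_BD)   # 3.0 and 7.0, matching t = 3 and t = 7

# At any t the persistence module V_t has two bars: one born at f_t(B) dying at
# f_t(A), and one born at f_t(D) dying at f_t(C) -- the two vines described above.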
§ FUTURE WORK

Given the framework of forwards and backwards simplification we have a tractable description of vineyard modules that can, at the very least, determine whether a vineyard module is trivial. The next natural question is whether we can use the forwards and backwards simplified representation to explore the decomposition of a vineyard module into a direct sum of indecomposable vineyard modules. As a partway step, can it determine the partition of the vines within the vineyard into those lying within the same indecomposable summand? Other future directions include:

* Removing the various simplifying assumptions on our vineyard modules - allowing for countably many vines or allowing for higher multiplicity of intervals within a persistence module.
* How could we extend this approach to continuous persistence-module-valued functions over S^1? Here we can still define forward simplification locally but will have the potential of holonomy. What about more general persistence bundles (<cit.>)?
* If we consider special cases of vineyard modules, do we have nice decompositions, or do these decompositions have nice geometric interpretations? For example, what happens when our input is a point cloud, and we construct the vineyard where the time parameter corresponds to a bandwidth and the persistence modules are built from height filtrations of kernel density estimates? Do these representations relate to topological simplification (such as in <cit.>)?
* Can we enumerate or construct all the isomorphism classes of vineyard modules of a given vineyard?
* Can we find conditions for when a vineyard and vector representation is realisable?
* Can we describe the space of simplified representations of vineyard modules that are isomorphic?
http://arxiv.org/abs/2307.04443v1
20230710095228
Search-time Efficient Device Constraints-Aware Neural Architecture Search
[ "Oshin Dutta", "Tanu Kanvar", "Sumeet Agarwal" ]
cs.CV
[ "cs.CV", "cs.LG" ]
Indian Institute of Technology {oshin.dutta,sumeet}@ee.iitd.ac.in, [email protected] Search-time Efficient Device Constraints-Aware Neural Architecture Search Oshin Dutta Tanu Kanvar Sumeet Agarwal ========================================================================= Edge computing aims to enable edge devices, such as IoT devices, to process data locally instead of relying on the cloud. However, deep learning techniques like computer vision and natural language processing can be computationally expensive and memory-intensive. Creating manual architectures specialized for each device is infeasible due to their varying memory and computational constraints. To address these concerns, we automate the construction of task-specific deep learning architectures optimized for device constraints through Neural Architecture Search (NAS). We present DCA-NAS, a principled method of fast neural network architecture search that incorporates edge-device constraints such as model size and floating-point operations. It incorporates weight sharing and channel bottleneck techniques to speed up the search time. Based on our experiments, we see that DCA-NAS outperforms manual architectures for similar sized models and is comparable to popular mobile architectures on various image classification datasets like CIFAR-10, CIFAR-100, and Imagenet-1k. Experiments with search spaces—DARTS and NAS-Bench-201 show the generalization capabilities of DCA-NAS. On further evaluating our approach on Hardware-NAS-Bench, device-specific architectures with low inference latency and state-of-the-art performance were discovered. § INTRODUCTION In recent years, there has been significant progress in developing Deep Neural Network (DNN) architectures <cit.> for edge and mobile devices.However, designing DNN architectures for specific hardware constraints and tasks is a time-consuming and computationally expensive process <cit.>. To address this, Neural Architecture Search (NAS)  <cit.> has become popular as it discovers optimal architectures given a task and network operations. Despite its success, traditional NAS techniques cannot guarantee optimal architecture for specific devices with hardware constraints such as storage memory and maximum supported FLOPs. To address this concern, researchers have developed hardware-aware algorithms <cit.> that find optimal device architectures with low resource training overhead and search time. These methods often use inference latency <cit.>, FLOPs <cit.> or a combination of hardware metrics <cit.> as constraints scaled by a tunable factor. However, the time to tune the scaling factor is often not considered within the NAS search time and can be ten times the reported search time. To address these issues, we propose the Device Constraints-Aware NAS (DCA-NAS), a principled differentiable NAS method that introduces total allowable model size or floating-point operations (FLOPs) as constraints within the optimization problem, with minimal hyper-parameter tuning. Unlike inference latency which is task dependent, FLOPs and memory are specified with a given hardware and thus are appropriate for our generic method. The approach is adaptable to other hardware metrics such as energy consumption or inference latency using additional metric-measuring functions. 
The paper makes the following significant contributions:

* It introduces a fast method that uses weight sharing among operations in the search space and a channel bottleneck, along with a differentiable resource constraint, for continuous exploration of the search space.
* A training pipeline that allows a user to input device memory or FLOPs and search for an optimal architecture with minimal hyper-parameter tuning.
* Our extensive experimentation on vision datasets - CIFAR-10, CIFAR-100, TinyImagenet, Imagenet-1k - and inference-latency comparisons of trained models on Hardware-NAS-Bench demonstrate the efficiency of our method. The generalization of our method to different search spaces is shown with experiments on DARTS and NAS-Bench.

§ RELATED WORK

Neural Architecture Search. Popular approaches <cit.> designed architectures for high performance on specific tasks or datasets with the traditional deep learning perspective that bigger is better, resulting in computationally and memory-intensive inference on edge devices. Network pruning <cit.>, channel removal <cit.> and weight/activation quantization <cit.> can compress architectures, but require pre-training and hyperparameter tuning, and often lack transferability. Neural Architecture Search (NAS) methods such as Reinforcement Learning <cit.>, Evolutionary Learning <cit.> and Differentiable Neural Architecture Search (DNAS) <cit.> can automatically search for architectures without user intervention, and can transfer across similar tasks. DNAS with surrogate metrics <cit.> has also been used to explore the architecture search space. However, architectures found by DNAS methods are not optimized for deployment on edge devices, and smaller models obtained by reducing layers or channels are often sub-optimal.

Hardware-aware Neural Architecture Search. Certain NAS methods optimize <cit.> for constraints such as latency, inference speed <cit.>, FLOPs <cit.> and memory usage <cit.>. Some use a separate DNN to predict constraint metrics and evolutionary search to obtain hardware-aware optimal models <cit.>, while others consider real-time latencies of edge devices or provide specific architectures for specific devices <cit.>. However, these methods require significant search time and tuning of scaling factors controlling the trade-off between the performance and the constraint, and do not always account for optimal architectures. In contrast, we use a differentiable hardware-aware objective function with generic hardware metrics, and do not require a tunable scaling factor. Certain methods <cit.> train a supernet first and then search for a smaller architecture, but this is only efficient when there are more than fifteen different edge devices with different limitations or deployment scenarios <cit.>, as training the supernet takes huge resources - 32 V100s taking about 1,200 GPU hours. A search stage followed by evaluation, as done in our approach, is more efficient when the number of possible edge devices is less than fifteen.
Notation:

* α_o^i, j : the architecture parameter for operation o between a pair of nodes (i,j).
* b(o) : the number of learnable parameters or the FLOPs required by the operation o.
* w : the learnable weights of the operations.
* K_d : the resource constraint of the device given as input to the algorithm.
* K_d^' : the constraint metric derived from the look-up graph.
* λ : the Lagrange multiplier for solving the constrained optimization that incorporates model size or FLOPs as constraints.

§.§ Gradient-based NAS Objective Function

Popular DNAS techniques <cit.> have two stages, the search phase and the evaluation phase. During the search phase, given a task or a dataset, the techniques search for a network of cells, which are directed acyclic graphs with N nodes. The edges of the graph are network layers, whose operations are to be selected from a pre-defined set 𝒪 containing operations such as 3x3 separable convolution and identity operations with trainable weights w_o. The search is made differentiable by making the choice of a particular operation a softmax over the architecture weights α of all operations. Thus, the intermediate output z_j at node j is given by,

z_j=∑_o ∈𝒪exp{α_o^i, j}/∑_o^'∈𝒪exp{α_o^'^i, j}· o(w_o^i,j,𝐳_i)

§.§ DCA-NAS formulation

Previous DNAS approaches <cit.> did not focus on searching architectures specifically for inference on resource-constrained devices. In contrast, we formulate the DNAS objective function as a constrained optimization problem by incorporating device resource constraints (memory or FLOPs) in the search objective function. The constrained bi-level optimization problem is written as,

[ min _α ℒ_val (w^*(α), α); s.t. w^*(α)=argmin_wℒ_train (w, α); s.t. k_s(α) ≤ K_d ]

where the training dataset is split into train and val to optimize w and α simultaneously in each iteration, subject to the constraint that the architecture's number of parameters or FLOPs k_s must be less than or equal to the device resource constraint K_d. The following equation calculates the architecture's number of parameters or FLOPs during search given the number of cells c_n. Our method can also be adapted to use other metrics such as latency and energy consumption with additional metric-measuring functions.

k_s(α)= c_n∑_(i,j)∈ N∑_o ∈𝒪exp{α_o^i, j} * b(o)/∑_o^'∈𝒪exp{α_o^'^i, j}

§.§.§ Tackling the difference in search and evaluation networks

The size of the architecture in the search phase k_s is different from the architecture size in the evaluation phase due to the softmax weighting factor in equation <ref> (a demonstration can be found in the appendix). To address this, we introduce a tighter bound on the search constraint K_d^', which is less than the device resource constraint K_d. A lookup graph (LUG) needs to be made for each dataset by varying K_d^' within appropriate bounds and running the algorithm until convergence each time to obtain the corresponding device resource constraint K_d. The computation time of the LUG can be reduced by running the searches in parallel. Thus, on incorporating the tighter constraint obtained by looking up the graph for the given device resource constraint K_d, along with the trainable Lagrange multiplier λ, in Equation <ref>, the objective function is re-written as,

[ ℒ =ℒ_val (w^*(α), α) +λ (k_s(α)-LUG(K_d)); s.t. w^*(α)=argmin_wℒ_train (w, α) ]

§.§ Techniques to reduce search time

Channel Bottleneck. We use convolutional layers with 1x1 kernels to reduce the depth of output channels of operations in the search space to save computation time and memory overhead.
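As an illustration of how the resource-aware objective above can be computed, the sketch below (our own PyTorch-style paraphrase of Equations <ref> and <ref>, not the authors' released implementation) evaluates the softmax-weighted resource estimate k_s(α) and the Lagrangian loss; alphas, op_cost, lug_bound and lam are assumed inputs rather than names from the paper.

# Illustrative sketch of the constrained search objective (our paraphrase).
# alphas[(i, j)] holds the architecture logits for edge (i, j), op_cost is a
# 1-D tensor with the parameter count (or FLOPs) b(o) of each operation, and
# lug_bound is the tighter bound LUG(K_d) read off the lookup graph.
import torch

def expected_resource(alphas, op_cost, num_cells):
    # k_s(alpha): softmax-weighted resource usage of the search-phase network.
    total = 0.0
    for logits in alphas.values():                 # one tensor of logits per edge
        weights = torch.softmax(logits, dim=-1)
        total = total + (weights * op_cost).sum()
    return num_cells * total

def search_loss(val_loss, alphas, op_cost, num_cells, lug_bound, lam):
    # L = L_val + lambda * (k_s(alpha) - LUG(K_d)); lam is a trainable scalar.
    return val_loss + lam * (expected_resource(alphas, op_cost, num_cells) - lug_bound)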
Derived Cell and Weight sharing. During architecture search, only one cell with trainable α is used to optimize architecture parameters. The target network for inference is built by stacking cells with architectures derived from highly weighted operations. This can be done during search by deriving the other cell architectures from the first at each iteration <cit.>. The arrangement of the cells for search is given in the appendix. This derived cell saves computation and memory overhead. A weight sharing strategy <cit.> among same operations with the same originating node i to all nodes i<j<N has been applied within a cell. This is motivated by the observation that non-parametric operations operating on the representation of a node produce the same feature map irrespective of the output node and thereby extended to parametric operations. Thus, Equation <ref> may be re-written to the following, z_j=∑_o ∈𝒪exp{α_o^i, j}/∑_o^'∈𝒪exp{α_o^'^i, j}· o(w_o^i,𝐳_i) § EXPERIMENTAL RESULTS Our approach is evaluated on two search spaces- DARTS and NAS-Bench with vision datasets- CIFAR10, TinyImagenet, Imagenet-16-20 and Imagenet-1k. The details of the search space and implementation is given in the appendix §.§ Results on DARTS search space §.§.§ Transferability- learning of coarse features during search. We transfer the architecture searched on CIFAR-10 to train and evaluate the model weights on TinyImagenet in Table <ref> and ImageNet-1k in Table <ref>. This transferred model yields higher performance than manually designed architectures <cit.> for the target dataset. It is observed that performance of the transferred model is comparable to the architecture searched on the target dataset itself which can be attributed to the architecture learning coarse features than objects during search. §.§.§ Performance versus Device-Constraints trade-off DCA-NAS discovers 2 to 4% better-performing architectures than manual designs with a memory constraint of 3.5 million parameters on CIFAR-10 and similar performance on TinyImagenet as in Table <ref>. On Imagenet-1k, DCA-NAS yields models with similar performance to other NAS methods <cit.> with a constraint of 5.5 million parameters (taken to yield similar sized models as other NAS methods) as in Table <ref>. We vary the input device resource constraint and plot the performance of the searched models against the number of parameters in Figure <ref>. As observed, DCA-NAS searched models can yield 15x lower sized models than manual architectures like PyramidNet-272 <cit.> with at most 1% reduction in accuracy on CIFAR-10. On TinyImagenet, DCA-NAS yields models similar in performance but 6x smaller in size than the manual Resnet variant. In comparison to ProxylessNAS <cit.> for Imagenet-1k, DCA-NAS yields 32% smaller model in terms of model parameters for similar accuracy. In comparison to DNAS methods <cit.> for each of the three datasets, we observe that the performance of the DCA-NAS searched models is retained to a certain extent as resources are further limited after which the model performance degrades. DCA-NAS model of similar size has the advantage of better performance (by 1%) and being automatically searched over MobileNet-v2 <cit.>, a manually designed network on Imagenet-1k. §.§.§ Search time comparison For evaluation on TinyImagenet in Table <ref>, the architecture searched on CIFAR-10 with DCA-NAS yields model in the lowest search time which indicates the search-time efficiency of the transferability property. 
Our method requires about 4x lower search cost than SGAS <cit.>, which performs best among the other transferred architectures, and 16x lower search time than the other resource-constrained approach <cit.> for similar performance, as seen in Table <ref>. Moreover, ProxylessNAS <cit.> takes about 4x more search time than DCA-NAS, whereas PC-DARTS takes about 2x more search time with no capability to constrain model size.
§.§ Results on NAS-Bench-201 search space
§.§.§ Performance and Latency comparisons on different devices
We report the mean obtained by averaging over five runs with different random seeds. Figure <ref> compares the performance of models searched with DCA-NAS and PC-DARTS by varying the latency constraints. It shows that, unlike PC-DARTS, DCA-NAS can search for more efficient models which have lower inference latency for similar test accuracy. Moreover, we observe that models with similar performance have lower latency when tested on Pixel 3 than on Raspberry Pi 4 due to the faster RAM of the Pixel 3. DCA-NAS takes the lowest search time among all the NAS methods due to the addition of the search-time-efficient techniques, while being on par in terms of performance across all datasets.
§ ABLATION STUDY
Effectiveness of various algorithmic augmentations for faster search: We analyze the effectiveness of the algorithmic augmentations mentioned previously <ref> to reduce search cost in our study. We sequentially add weight sharing, channel bottleneck, and derived cells to the baseline DARTS <cit.> method and measure search time and accuracy. Weight sharing, channel bottleneck, and derived cells were observed to significantly reduce search memory overhead, enabling us to use larger batch sizes and reducing overall search cost, as seen in Figure <ref>. Adding the resource constraint in the final DCA-NAS method increases search cost only negligibly while maintaining performance.
Stability of the approach: We test stability by running the search algorithm independently five times with different initial seeds and the same constraints and hyperparameters. The architectures found during each run have similar performance when re-trained and evaluated, as shown in Fig. <ref>. Smaller models have lower performance due to restrictions in model complexity compared to larger models.
§ CONCLUSION
We present DCA-NAS, a device-constraints-aware neural architecture search framework which discovers architectures optimized for the memory and computational constraints of an edge device in a time-efficient manner. It does so by incorporating a constraint on the number of parameters or floating point operations (FLOPs) in the objective function with the help of a Lagrange multiplier. DCA-NAS in essence searches for a Pareto-optimal solution given the edge device memory or FLOPs constraint. Moreover, it enables architecture search with a search cost 4 to 17 times lower than previous state-of-the-art hardware-aware NAS approaches. DCA-NAS can discover models about 10 to 15 times smaller than manually designed architectures for similar performance. In comparison to DARTS and its other NAS variants, DCA-NAS can discover models up to 3x smaller in size with similar performance. This hardware-aware approach can be generalized to any future updates to differentiable neural architecture search and, with some adaptation, possibly to training-free NAS methods.
§ ACKNOWLEDGEMENT
We thank the anonymous reviewers; Profs.
Surendra Prasad and Brejesh Lall of IIT Delhi; and colleagues at Cadence India for their valuable feedback and inputs. This research is supported by funding from Cadence India; the first author is also supported by a fellowship from the Ministry of Education, India.
Appendix
========
§ DERIVING CELL ARCHITECTURES
The searched cells are stacked to form the network whose weights are trained and evaluated. The number of layers of this network during the evaluation phase is varied from 4 to 20. It can be seen that models searched with DARTS using only 2 cells perform as well as those from an 8-cell search when the target model has more than 10 layers. Hence, in our experiments, instead of training architecture parameters for all 8 cells, we train only 2 cells: one normal cell and one reduction cell. The architectures of the other 6 cells stacked to form the network during search are derived from either the normal or the reduction cell, as shown in Figure <ref>.
§ CALCULATION OF SEARCH-STAGE ARCHITECTURE SIZE
The size of the architecture in the search phase, k_s, differs from the architecture size in the evaluation phase due to the softmax weighting factor in equation <ref> (demonstrated in Figure <ref>). To address this, we introduce a tighter bound on the search constraint, K_d^', which is less than the device resource constraint K_d. A lookup graph (LUG) needs to be built for each dataset by varying K_d^' within appropriate bounds and running the algorithm until convergence each time to obtain the corresponding device resource constraint K_d. The computation time of the LUG can be reduced by running the searches in parallel.
§ ALGORITHM
The practical implementation of our resource-constrained gradient descent-based approach is illustrated in Algorithm <ref>.
§ IMPLEMENTATION DETAILS
The experiments with the smaller vision datasets (MNIST, FashionMNIST, CIFAR-10, Imagenet-16-120 and TinyImagenet) were run on a single Tesla V100 GPU. Training and evaluation on Imagenet-1k was performed on a cluster containing eight V100 GPUs. The super-net used for search with the smaller vision datasets (all except Imagenet-1k) consists of 8 cells, with 6 normal cells and 2 reduction cells, and an initial number of channels set to 16. Each cell has 6 nodes, with the first 2 nodes in cell k serving as input nodes. The super-net is trained for 50 epochs with a batch size of 512, and optimized using SGD with a momentum of 0.9 and weight decay of 3e-4. The learning rate is initially set to 0.2 and gradually reduced to zero using a cosine scheduler. Architecture parameters α are optimized using the Adam optimizer, with a learning rate of 6e-4, a momentum of (0.5, 0.999), and a weight decay of 1e-3. The search is run 5 times, and the architecture with the highest validation accuracy is chosen. For evaluation, the target-net has 20 cells, with 18 normal cells and 2 reduction cells, and an initial number of channels set to 36. The target-net is trained for 600 epochs with a batch size of 96, optimized using SGD with a momentum of 0.9, weight decay of 3e-4, and gradient clipping of 5. The initial learning rate is set to 0.025 and gradually reduced to zero using a cosine scheduler. Additional settings include a cutout length of 16, a dropout rate of 0.2, and the use of an auxiliary head. For Imagenet-1k, we reduce the input size from 224 × 224 to 28 × 28 using three convolution layers with a stride of 2. The super-net for search has 8 cells starting with 16 channels, and the target-net for evaluation has 14 cells starting with 48 channels.
Both search and evaluation use a batch size of 1,024. In search, we train for 50 epochs with a learning rate of 0.5 (annealed down to zero using a cosine scheduler), and a learning rate of 6e-3 for the architecture parameters. In evaluation, we train for 250 epochs using the SGD optimizer with a momentum of 0.9 and a weight decay of 3e-5, and adopt an auxiliary head and the label smoothing technique.
§ MODEL PERFORMANCE BY VARYING FLOPS CONSTRAINT ON CIFAR10, TINYIMAGENET AND IMAGENET-1K
Instead of constraining the number of model parameters, we also experiment with FLOPs as the constraint in our objective function. As shown in Figure <ref>, our method DCA-NAS retains performance up to a certain FLOPs constraint, after which it degrades. In comparison to manual architectures, our NAS approach yields models which require far fewer FLOPs and hence would have lower latency.
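As a companion to the algorithm outlined in the appendix, the sketch below illustrates one plausible form of the alternating search loop: the architecture parameters α (and the multiplier λ) are updated on the validation split against the penalized objective, and the operation weights w on the training split. The first-order approximation, the projected dual-ascent step for λ, and all helper names (data loaders, optimizers, `supernet`, `expected_resource` from the earlier sketch) are assumptions for illustration; Algorithm <ref> remains the authoritative description.

```python
import torch

lam = torch.tensor(0.0)  # Lagrange multiplier, kept non-negative

for train_batch, val_batch in zip(train_loader, val_loader):
    # (1) architecture step on the validation split (first-order approximation)
    arch_opt.zero_grad()
    k_s = expected_resource(supernet.alpha, op_costs, num_cells)
    loss_arch = supernet.loss(val_batch) + lam * (k_s - constraint)
    loss_arch.backward()
    arch_opt.step()

    # dual-ascent style update keeps the resource constraint active
    with torch.no_grad():
        lam = torch.clamp(lam + dual_lr * (k_s.detach() - constraint), min=0.0)

    # (2) weight step on the training split
    weight_opt.zero_grad()
    supernet.loss(train_batch).backward()
    weight_opt.step()
```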
http://arxiv.org/abs/2307.04199v1
20230709150835
Mid-infrared spectroscopy with a broadly tunable thin-film lithium niobate optical parametric oscillator
[ "Alexander Y. Hwang", "Hubert S. Stokowski", "Taewon Park", "Marc Jankowski", "Timothy P. McKenna", "Carsten Langrock", "Jatadhari Mishra", "Vahid Ansari", "Martin M. Fejer", "Amir H. Safavi-Naeini" ]
physics.optics
[ "physics.optics", "quant-ph" ]
APS/123-QED 1E.L. Ginzton Laboratory, Stanford University, Stanford, CA, 94305, USA 2NTT Research, Inc., Physics & Informatics Laboratories, Sunnyvale, CA, 94085 Mid-infrared spectroscopy, an important and widespread technique for sensing molecules, has encountered barriers stemming from sources either limited in tuning range or excessively bulky for practical field use. We present a compact, efficient, and broadly tunable optical parametric oscillator (OPO) device surmounting these challenges. Leveraging a dispersion-engineered singly-resonant OPO implemented in thin-film lithium niobate-on-sapphire, we achieve broad and controlled tuning over an octave, from 1.5–3.3 µm by combining laser and temperature tuning. The device generates >25 mW of mid-infrared light at 3.2 µm, offering a power conversion efficiency of 15% (45% quantum efficiency). We demonstrate the tuning and performance of the device by successfully measuring the spectra of methane and ammonia, verifying our approach's relevance for gas sensing. Our device signifies an important advance in nonlinear photonics miniaturization and brings practical field applications of high-speed and broadband mid-infrared spectroscopy closer to reality. Mid-infrared spectroscopy with a broadly-tunable thin-film lithium niobate optical parametric oscillator Amir H. Safavi-Naeini1 August 12, 2023 ======================================================================================================== § INTRODUCTION A fundamental technique for sensing is mid-infrared (MIR) spectroscopy, which exploits molecules' strong and distinct absorption responses in the 2–20 µm spectral region. High-sensitivity and high-resolution MIR spectroscopy with coherent sources has rich applications, e.g., in gas <cit.>, chemical reaction <cit.>, and biological <cit.> sensing. Further advancing broadband, field-deployable MIR sources would enable a multitude of applications in areas such as rapid portable health monitoring and wide-coverage greenhouse gas detection. However, currently-available sources still suffer from significant limitations. For instance, compact quantum- and interband- cascade lasers have dramatically improved their output power and efficiency, making them prominent sources for MIR spectroscopy <cit.>. However, material-defined gain bandwidths restrict tuning to hundreds of cm^-1 <cit.>, limiting potential multi-species detection. Meanwhile, optical parametric oscillator (OPO) sources allow efficient conversion of low-noise, wavelength-agile near-IR lasers over extremely broad tuning ranges (often thousands of cm^-1) <cit.>. However, their conventional use of bulk optics creates large footprints, high threshold powers, high cost, and demanding stabilization requirements. These factors limit widespread field applications of OPOs, despite many laboratory spectroscopic studies <cit.>. Because of the limitations of bulk systems, OPO miniaturization has been actively pursued. Well-established systems include integrated weakly-confining waveguide cavities <cit.>, polished crystals <cit.>, and whispering-gallery resonators <cit.>. Moreover, recent nanofabrication breakthroughs have led to on-chip planar nanophotonic circuits in strongly nonlinear materials such as lithium niobate (LN). Sub-wavelength transverse mode confinement in these architectures allows enhanced nonlinear efficiency <cit.>, dispersion engineering for ultrabroadband operation <cit.>, and capability for complex nonlinear photonic circuits <cit.>. 
As a result, the first on-chip OPOs integrated with highly-scalable, small-footprint, nanophotonic circuits have recently been developed <cit.>. Despite these rapid advances, recent nanophotonic integrated OPOs thus far have limited capability for MIR spectroscopy. One reason for this is that established nonlinear integrated photonic platforms utilize a silica undercladding that becomes strongly absorptive past 3 µm <cit.>, limiting MIR performance. Another crucial reason is that engineering nanophotonic OPOs with sufficiently stable and precise tuning over fine spectroscopic lines is challenging. Bulk OPO-based spectroscopy systems usually achieve ideal tuning behavior by engineering the cavity in a singly-resonant configuration with a resonant signal wave and non-resonant, freely-tunable MIR idler wave <cit.>. Developing such wavelength-selective behavior within a high-quality-factor nanophotonic cavity is difficult. This has led previous integrated OPOs to simultaneously resonate signal and idler beams in either doubly- <cit.> or triply-resonant <cit.> configurations, creating complex tuning dynamics undesirable for spectroscopy. Here we demonstrate an efficient, broadly-tunable, continuous-wave integrated MIR OPO and use it for gas spectroscopy. This single-wavelength MIR source complements broadband integrated MIR frequency comb sources <cit.> that can exhibit more complex dynamics, difficult calibration/stabilization, low efficiency, and limited resolution. Pumped with continuous-wave light at λ_p=1 µm, a single dispersion-engineered device exhibits broad tuning over an octave from 1.5–3.3 µm. By engineering a wavelength-selective, high-quality-factor cavity, we realize pump-enhanced singly-resonant MIR OPO operation. The OPO's reliable tuning behavior allows us to measure the spectra of methane and ammonia, demonstrating the spectrosopic potential of OPOs within a fully-chip-integrated platform. We discuss clear paths towards further enhancing the current OPO for widespread, practical use by improving overall system efficiency, near-degenerate performance, and gap-free tuning range. § RESULTS §.§ Device concept and operation Fig. <ref>a illustrates our OPO design concept. An optical cavity incorporates a χ^(2) nonlinear crystal that provides parametric amplification between λ_p = 1 µm pump light and generated signal/idler light at λ_s = 1.5 µm and λ_i = 3 µm (Fig. <ref>a.i). We design the cavity to be strongly resonant for λ_s, weakly resonant for λ_p, and non-resonant for λ_i, classifying it as a pump-enhanced singly-resonant OPO (SRO) <cit.>. This design allows the MIR idler to freely tune for spectroscopy. An effective, simple SRO fine tuning method <cit.> sweeps λ_p while λ_s clamps on a strong cavity resonance, so λ_i tunes freely by energy conservation, e.g., over molecular absorption peaks (Fig. <ref>a.ii). Tuning the temperature and pump wavelength broadly adjusts the OPO output over 1.5–3.3 µm (Fig. <ref>a.iii), which overlaps fundamental vibrational transitions of dozens of small molecules (e.g. CO_2, CH_4, H_2O, and NH_3) important for spectroscopic monitoring. We implement the integrated OPO device (Fig. <ref>b) in a photonic circuit composed of etched LN-on-sapphire ridge waveguides. Deeply-etched LN-on-sapphire photonics, with substrate transparency up to 4.5 µm, have enabled dispersion-engineered broadband MIR generation up to 4 µm <cit.>. We fabricate 15 OPOs with different design parameters on a 12×12 mm LN-on-sapphire chip (Fig. 
<ref>b.i), then focus on the optimal device for the experiment. Periodically poling one of the LN waveguides (Fig. <ref>b.ii) compensates for phase-velocity mismatch and allows broadband parametric gain. We choose the parametric gain waveguide geometry (878 nm LN film, 600 nm etch, and 1.95 µm top width) to enable strong fundamental transverse electric mode confinement at pump/signal/idler wavelengths (Ext Fig. <ref>a) and large parametric gain from modal overlap. Moreover, choosing this geometry produces ultrabroadband gain at degeneracy resulting from near-zero signal/idler group velocity dispersion (GVD) (Sec. <ref> and Methods). The pump-enhanced SRO cavity combines waveguide bends with two crucial engineered elements: the output coupler and intracavity coupler. The output coupler (Fig. <ref>b.iii) is a directional coupler designed for ∼100% transfer of MIR light out of the cavity while only extracting ∼1% of telecom light. The intracavity coupler is an adiabatic coupler designed for broadband, ∼100% transfer of telecom-wavelength light to enable strong signal resonances. To verify the strong cavity modes at λ_s, we sweep resonances with a tunable telecom laser (Ext. Fig. <ref>a), revealing sharp, low-loss signal modes with total quality factor Q_tot = 1.3-1.6 × 10^6 (Fig. <ref>c). This corresponds to ≈12% round-trip loss in the 22.3 mm-length cavity. Extracted intrinsic/extrinsic Q-factors for the undercoupled cavity are Q_i = 1.35-1.7 × 10^6 and Q_ex≈ 20 × 10^6, respectively. High Q-factors extend over our telecom laser's whole tuning range (1500–1640 nm, Ext. Fig. <ref>b). Meanwhile, the 1-µm pump only weakly resonates, with cavity finesse F=2.5–3.5, corresponding to ≈11% total power recirculation and 2× intracavity power enhancement (Ext. Fig. <ref>). To operate the device, we couple continuous-wave pump light onto the chip using a lensed fiber with (33 ± 2)% coupling efficiency (Methods). When parametric gain provided by the pump exceeds round-trip signal loss, the device oscillates, generating signal and idler photons. We collect output light with a multimode fiber (∼3% MIR chip-to-fiber collection efficiency, Methods) and use the idler beam for MIR spectroscopy (Fig. <ref>b). We attribute the few-percent chip-to-fiber collection efficiency to the roughly-cleaved output fiber facet and mismatch between high-NA LN waveguide and NA = 0.2 fiber. Chip-fiber and fiber-chip coupling efficiencies could be improved dramatically to 80–90% using cladding mode-matching waveguides <cit.> and/or placing a high-NA lens on the output (>70–80% efficiency measured on a different chip/setup). §.§ Power characterization Utilizing the characterization setup in Fig. <ref>a, we tune the device to 170 °C and pump near λ_p=1.051 µm to obtain clean non-degenerate parametric oscillation at λ_s=1.56 µm, λ_i=3.21 µm. Because of weak pump resonances (Fig. <ref>b, top), intracavity pump intensity and hence generated signal/idler output (Fig. <ref>b, bottom) varies periodically with λ_p. Because the pump resonance is weak, tuning to a specific λ_p leads to stable continuous-wave oscillation for >10–15 minutes without any cavity or laser stabilization (Ext Fig. <ref>). We then scan λ_p for different pump powers and record maximum generated signal/idler power. We observe clear pump depletion but do not precisely quantify it due to background pump light scattering into the multimode collection fiber. The device begins oscillating with 80 ± 6 mW on-chip threshold pump power (Fig. <ref>c). 
Above threshold, the generated signal/idler powers monotonically increase with pump power. With ∼200 mW on-chip pump power, the device produces a maximum of 29 ± 3 mW on-chip power at 3.2 µm. This power level has been used for portable sensor systems <cit.> and exceeds the typical required power for shot-noise-limited MIR detection (∼0.1 mW) <cit.>. The on-chip power conversion efficiency of signal/idler also increases monotonically within the range of pump power sweep (Fig. <ref>d). We measure a maximum of (15 ± 2)% on-chip power conversion efficiency (45% quantum efficiency) from pump to MIR idler. An ideal OPO produces nearly 100% quantum efficient conversion (≈ 33% power conversion at these wavelengths) <cit.>. Our device's deviation is likely caused by modal/radiative scattering of pump light in waveguide tapers (see Methods), MIR losses from e.g. surface-adsorbed molecules <cit.>, and inefficient MIR light transfer in the output coupler. The measured dependence of the emitted MIR light on input pump power aligns well with numerical modeling of a weakly pump-enhanced SRO (Fig. <ref>c-d, solid lines), verifying that the device behaves as designed. In our modeling (Methods) we assume the measured values of total Q-factor (1.6 million), pump recirculation (11%, Methods), and normalized efficiency (41 %/(W·cm^2)). The numerically-modeled on-chip idler output powers are scaled by 0.46 to account for the effective MIR extraction efficiency, and intracavity signal powers are scaled by 0.013 to account for the intended small (∼1%) signal extraction from the cavity. §.§ Tunability §.§.§ Coarse tunability We tune our OPO's output wavelength over an octave of bandwidth using a combination of temperature and pump wavelength (Fig. <ref>a,b). At the higher temperatures of 100–200 °C we access the “far-from-degenerate” regime with widely-separated signal and idler (λ_s=1.5–1.7 µm, λ_i=3–3.3 µm) (Fig. <ref>a). We observe sufficiently reliable tuning for spectroscopy at these operating temperatures and clean output spectra (Fig. <ref>b). In this regime, we measure MIR output wavelengths up to 3.315 µm at 200 °C, limited by the temperature control range and pump amplifier bandwidth. The high operation temperature is only due to phase matching in this device; future devices can extend deeper into the MIR at lower temperatures by lithographically defining a different poling period. In our device, lower temperatures from 70–90 °C access the “near-degenerate” regime (1.7 µm < λ_s, λ_i < 2.7 µm), exhibiting broad bandwidths and tunability but also some complex multimoded behavior. From 80–100 °C, the OPO sometimes oscillates simultaneously in the near-degenerate and far-from-degenerate regimes. Pump wavelength tuning at a fixed temperature tunes the device reliably and rapidly over a large range (Fig. <ref>a). From 80–200 °C, the >2.8 µm idler tunes roughly linearly with pump wavelength. The fitted tuning slope dλ_i/dλ_p≈ -2 at higher temperatures and increases to -4.2 at 100 °C. This equates to 100–200 nm MIR wavelength tuning at a given temperature with 50 nm of pump tuning. As we further decrease the temperature, wavelength tunability rapidly increases as the device begins oscillating at near-degenerate signal/idler waveguide modes with near-zero GVD. At 70 °C, we operate the device in the anomalous dispersion regime, resulting in a U-shaped tuning curve that spans over 800 nm (Fig. <ref>a) and agrees with simulations (Ext. Fig. <ref>a). 
Near degeneracy, the accessible gain bandwidth broadens from cancellation of odd-order dispersion, allowing oscillation at multiple different signal/idler pairs (Fig. <ref>b). At 80 °C, the device operates near the signal/idler zero-GVD point, resulting in broadband OPO output spanning 1.3 µm, a bandwidth approaching a full octave, at a single temperature (Fig. <ref>c,d). The increasingly broadband near-degenerate OPO as we increase λ_p and approach zero-GVD at 2λ_p agrees well with simulation (Fig. <ref>c) <cit.>. From λ_p≈1075–1090 nm, the single-device, single-temperature, OPO output spans 1.7–2.7 µm. This 65 THz-spanning ultrabroadband gain bandwidth matches that of state-of-the-art pulsed-pump dispersion-engineered thin-film-LN parametric amplifiers <cit.>. To fully harness the broadband OPO operation, future devices could employ an on-chip wavelength control element (e.g. <cit.>) rapidly tunable using LN's electro-optic effect and selective of a particular oscillating mode. Full near-degenerate dispersion-engineering details are described in Methods. §.§.§ Fine tunability We finely tune the SRO's MIR emission wavelength with sufficient control for use in spectroscopy. At a fixed temperature, we tune the pump laser wavelength. For small changes of λ_p, λ_s stays approximately constant without excessive mode hops while the MIR λ_i tunes by energy conservation (Fig. <ref>e). The vertical gaps visible in these fine tuning curves are caused by weak pump resonance enhancement, not signal mode hops. Typical gap-free tuning range is 60–80 pm at 3184 nm (1.8–2.4 GHz), reflecting that the OPO is activated for around one-third of the pump cavity FSR (5.5 GHz, Fig. <ref>a). Eliminating the weak pump resonance in an optimized fully-singly-resonant cavity design will allow broader gap-free tuning range. Despite this discontinuous tuning at a fixed temperature, adjusting the chip temperature by only 1 °C results in nearly uniform MIR wavelength coverage as different signal modes are selected to oscillate. The signal mode hops when λ_p is detuned sufficiently large amounts (Methods, Ext. Fig. <ref>a). §.§ Proof-of-concept spectroscopy §.§.§ OPO spectroscopy of methane We direct part of the output idler light to a low-pressure (20 Torr) methane gas cell (Fig. <ref>a) to measure its absorption spectrum. Tuning the device to 151 °C with λ_p ≈ 1041  nm shifts the OPO idler output to a cluster of methane absorption lines at 3184 nm. To sweep the generated MIR output over the methane lines, we sweep λ_p in a narrow range (Fig. <ref>a). During this measurement, the OPO output signal wavelength λ_s stays nearly constant, while the idler wavelength λ_i increases with time (Fig. <ref>a). The portion of the MIR light passing through the methane cell couples into a photodiode, generating the voltage signal V_1(t). The reference beam of MIR light couples into a second photodiode, generating a reference signal V_2(t). We plot an example trace of the unprocessed relative gas cell transmission scan V_1(t)/V_2(t) in Fig. <ref>b, which scans over two absorption peaks. To calibrate the wavelength axis λ_i(t) of the swept MIR beam, we measure λ_p(t) and λ_s(t) and infer λ_i(t) by energy conservation. This method allows us to use precise and more readily available near-IR wavelength measurement tools to infer MIR emission properties. We measure λ_p(t) with a wavemeter in the input path (Fig. <ref>b). 
Meanwhile, λ_s(t) is measured by beating a portion of the generated signal beam against a reference laser on a fast photodiode. The beatnote is read in an RF spectrum analyzer, from which we extract λ_s(t) (Fig. <ref>b). The observed small (5 pm) redshift of λ_s(t) is much smaller than the cavity free spectral range (∼45 pm), indicating that the signal mode does not mode hop during this scan, but only shifts slightly, likely due to heating at higher OPO power. We tune the device to four different absorption transitions of methane near 3184 nm and collect spectra (Fig. <ref>c). After background subtraction (see Methods), collected experimental spectra agree well with HITRAN reference curves <cit.>. The clean, stable MIR OPO output easily resolves the low-pressure, Doppler-broadened methane peaks with linewidths down to 10 pm/300 MHz/.01 cm^-1. This spectral resolution highlights an advantageous aspect of the widely-tunable single-wavelength integrated source compared to an integrated frequency comb, which in integrated incarnations have few GHz–100 GHz resolution limited by the cavity free spectral range <cit.>. §.§.§ Resonant DFG spectroscopy of ammonia In addition to operating as an OPO, the broadband operation and singly-resonant nature of our device makes it attractive as a source of MIR light generated by difference frequency generation (DFG). Here, we pump the OPO cavity below threshold, now leaving λ_p(t) constant over time (Fig. <ref>d). We instead seed the device with a scanning telecom-band laser λ_s(t). Injected seed builds up strongly when λ_s(t) matches a signal cavity resonance and generates a bright MIR idler beam by DFG. Hence, λ_i(t) consists of discretely-spaced MIR peaks (Fig. <ref>d). In our device's SRO cavity, peaks at λ_i will be equally spaced in frequency at the signal cavity FSR (≈5.6 GHz) over the entire gain bandwidth. By contrast, in a doubly- or triply-resonant device, generated MIR peaks would be much more sparse because of the requirement of simultaneous signal and idler resonance. The wide availability of rapidly tunable telecom-band lasers, including on-chip and LN-integrated devices <cit.>, makes this resonant DFG technique highly accessible. We demonstrate the broadband resonant DFG spectroscopy by detecting atmospheric pressure ammonia, which exhibits broad lineshapes with 0.5–10 nm peak widths. As in the methane experiment, the MIR output splits into a gas cell and reference path. We measure λ_p with a wavemeter, assume λ_s(t) sweeps linearly with time, then infer λ_i(t) by energy conservation. Fig. <ref>e shows a typical trace of discrete MIR peaks at detector 2 vs. λ_i(t), where λ_p = 1043 nm and λ_s(t) sweeps from 1535 to 1620 nm. Generated equally-spaced MIR lines (Fig. <ref>e, inset) are strong over an ∼100 nm bandwidth (equates to sweeping λ_s only 30 nm), and the >10 µW MIR output can be detected directly by a DC-coupled photodiode. By dividing the signal path's discrete peak heights by those from the reference path, we obtain broadband spectra of ammonia with 5 GHz resolution (Fig. <ref>f). The presented data consists of two scans, each with high signal-to-noise ratio over 100 nm MIR bandwidth. Adjusting temperature and λ_p tunes the center wavelength of the two scans exactly as the OPO is coarsely tuned (Fig. <ref>b). We resolve ammonia's narrower features with ∼0.5 nm peak width alongside broader 10 nm peaks in agreement with the HITRAN database. 
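Since the MIR wavelength axis in both measurements is reconstructed from near-IR readings rather than measured directly, a small worked example of the energy-conservation step may be useful. The helper below and the numerical operating point are illustrative, using approximate values quoted earlier in the text.

```python
def idler_wavelength_nm(pump_nm, signal_nm):
    """Idler wavelength from energy conservation: 1/lambda_i = 1/lambda_p - 1/lambda_s."""
    return 1.0 / (1.0 / pump_nm - 1.0 / signal_nm)

# Roughly the operating point of the methane measurement (lambda_p ~ 1041 nm,
# lambda_s ~ 1547.5 nm) gives an idler near the 3184-nm CH4 lines.
print(idler_wavelength_nm(1041.0, 1547.5))  # ~3.18e3 nm
```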
§ DISCUSSION In summary, we have designed and implemented an integrated nanophotonic OPO and demonstrated operation for MIR spectroscopy. Such a device inherits the useful advantages of bulk OPOs as MIR spectroscopic light sources (widely-available near-IR laser pumps, high efficiency, broad tunability, and high resolution) while adding the benefits of nanophotonic integration (reduced footprint, better stability, lower threshold powers, broadband operation via dispersion engineering, and integration capability). The key enabling advance here is the fabrication of high-quality factor, wavelength-selective cavities built from the MIR-compatible LN-on-sapphire platform. With the miniaturization of such a useful MIR spectroscopic technology onto a fully-chip-integrated platform, a plethora of applications can be envisaged, from deployable gas monitoring systems to portable, real-time MIR biosensors. Our work outlines a clear path for improving the device sufficiently to realize powerful and deployable sensors. As highlighted in the text, including an electro-optically-tunable wavelength-selective intracavity etalon would allow precise, rapid, and low-power control over the broad demonstrated gain bandwidths. In addition, further gains in efficiency are important and within reach. These will come from improvements in input fiber-to-chip and output chip-to-detector coupling efficiencies. Simulations show that utilizing cladding mode matching waveguides and/or free space optics would raise edge coupling efficiencies to >70%. Moreover, the simulated normalized efficiency is ∼7× larger than the experimentally-obtained value (43 %/(W·cm^2)), likely due to fabrication imperfections preventing coherent nonlinear enhancement over the full waveguide length. By improving waveguide fabrication we expect threshold powers as low as ∼10 mW, within the output range of heterogeneously integrated lasers near 1 µm <cit.> and thus potentially enabling full pump-OPO on-chip integration. § METHODS §.§ Coupler, bend, and taper design details §.§.§ Intracavity coupler The intracavity coupler (Ext. Fig. <ref>b) is an adiabatic coupler designed to weakly couple pump light at 1 µm and strongly couple signal light at >1.5 µm. The coupler was designed using local coupled mode theory simulations of the slow transfer of light from the waveguide emerging from the resonator bend to the poled section waveguide. We utilize a symmetric adiabatic waveguide coupler where two neighboring waveguides of width 0.7/1.0 µm are tapered to widths 1.0/0.7 µm width, respectively, over 1 mm length. In order to maximize the adiabatic transition near the degenerate point where both waveguides have equal width (0.85 µm), the coupler is divided into three sections: two 150 µm-length fast-tapered couplers at the beginning and end of the coupler and a slowly-varying 700 µm section in the middle of the coupler. The slowly-varying middle section accounts for 20% of the total waveguide width change, while the fast-varying sections account for the remaining 80%. §.§.§ Output and diagnostic couplers The output coupler (Ext. Fig. <ref>c) is a simple directional coupler consisting of two identical 2 µm-width waveguides separated by a 0.9 µm coupling gap over a length of 50 µm. The diagnostic coupler (Ext. Fig. <ref>) uses the same coupling geometry as the output coupler in order to obtain ∼1% coupling of telecom light in/out of the cavity for measuring the cavity resonances (Fig. <ref>c). 
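To make the taper geometry easier to picture, the following sketch generates a piecewise-linear width profile for the two arms of the adiabatic intracavity coupler described above. It assumes that the 80% of the width change carried by the two fast sections is split evenly between them, which the text does not specify, and is only an illustrative reconstruction rather than the design code used by the authors.

```python
import numpy as np

def intracavity_coupler_widths(n_points=1001):
    """Approximate top-width profile (um) of the two coupler arms vs. position z (um)."""
    dw = 1.0 - 0.7                                   # total width change of each arm
    z_knots = np.array([0.0, 150.0, 850.0, 1000.0])  # fast / slow / fast sections
    frac = np.array([0.0, 0.40, 0.60, 1.00])         # assumed even split of the 80%
    z = np.linspace(0.0, 1000.0, n_points)
    w_a = 0.7 + dw * np.interp(z, z_knots, frac)     # arm tapering 0.7 -> 1.0 um
    w_b = 1.7 - w_a                                  # complementary arm, 1.0 -> 0.7 um
    return z, w_a, w_b
```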
§.§.§ Resonator bends Following the poled section and the output coupler, the waveguides are tapered to 1 µm width to ensure that the resonator is single-moded at telecom wavelengths for cleaner mode structure. This also effectively filters out MIR light >3 µm that cannot be well-confined in the smaller waveguide. The waveguide bends are Euler bends <cit.>. §.§.§ Waveguide tapers Pump light incident onto the chip edge couples into a 1.7 µm-width waveguide, then tapers down to 0.7 µm as it reaches the adiabatic coupler. After the adiabatic coupler, the light is confined in a 1 µm-width waveguide before it tapers up to the 1.95 µm-width periodically-poled 9.3 mm gain section. At each of these tapers, fundamental TE mode pump light can be scattered into other modes or free space, but the exact loss rate cannot be extracted from the current chip. §.§ Device fabrication Device fabrication starts with a commercial MgO-doped, x-cut LN film on a c-cut sapphire substrate (NGK Inc.). The LN film is thinned using an ion mill, then poling electrodes (poling period Λ=6.72 µm) are patterned using electron-beam lithography and Cr metal liftoff. The LN is poled using high-voltage pulses (∼900 V), then poled domains are monitored using second-harmonic-generation microscopy. Electrodes are stripped using Cr etchant. Waveguides are patterned with electron-beam lithography (JEOL 6300FS 100 kV) and HSQ FOX-16 resist followed by argon ion mill etching (Intlvac). Finally, the chip is laser stealth diced to create clean edge facets for light in/out coupling. §.§ Measurement setup and calibrations §.§.§ General setup A block diagram of the general setup used for measurements is shown in Ext. Fig. <ref>. Our pump light source is a tunable external cavity diode laser (Toptica DL Pro). The laser can tune coarsely from 1010–1100 nm with 0.1 nm resolution, and finely using a voltage-controlled piezo within 40 GHz. The laser output is fiber coupled, then routed through a 99:1 splitter where the 1% tap is sent to a near-IR wavemeter (Bristol Instruments Model 621). Light from the 99% port is amplilfied in a ytterbium-doped fiber amplifier with 1040–1090 nm operating bandwidth (Civil Laser), then sent to a variable optical attenuator (OZ Optics). To calibrate power sent to the chip, 1% of the light is tapped off to a powermeter (Newport), then the rest is sent to a 1/1.5 µm wavelength division multiplexer, then into a lensed Hi1060 single mode fiber that couples light to the OPO device. The amount of intracavity pump light is measured by collecting the light exiting the bottom bus waveguide on the left of Ext. Fig. <ref> with a lensed multimode silica fiber. The light is then sent through a fiber collimator, and a short-pass filter to remove 1.5-µm output generated by the OPO, and focused onto an InGaAs detector. The output at telecom and MIR wavelengths is collected using a flat-cleaved MIR-compatible multimode fiber (Zinc fluoride glass, La Verre Fluoré). The output light is split into several paths. To detect the MIR light, we focus with a CaF lens (Thorlabs) then through a ZnSe OD1 ND filter (Thorlabs) to avoid saturating the detector. Finally, the light passes through a longpass filter (Ge) so that only MIR light reaches the MCT detector (Thorlabs PDAVJ5). To detect telecom, we use a 1350 nm longpass filter (Thorlabs) before focusing light onto an InGaAs detector (which does not detect MIR photons). 
Finally, a portion of the output light is coupled into an InF MIR-compatible multimode fiber (Thorlabs) sent into a Yokogawa Optical Spectrum Analyzer (AQ6376). §.§.§ Pump fiber-to-chip coupling efficiency We couple pump light with wavelength 1046–1056 nm in/out of a straight waveguide using two lensed SMF fibers. By dividing the power collected from the output lensed fiber by the power sent to the input lensed fiber, we infer the pump power coupling efficiency per edge of η̃_p, fiber-to-chip = (33 ± 2) %. We assume here that pump propagation loss is negligible. The uncertainty in the pump fiber-chip coupling comes from ripples in the waveguide throughput observed as λ_p is tuned from 1046–1056 nm. We attribute the ripples to excitation of higher-order pump modes in the multimoded LN waveguide, because a simultaneous measurement of parametric gain (which depends only on fundamental TE0 mode power) during the same wavelength scan does not follow the same ripples. §.§.§ Nonlinear efficiency The normalized efficiency of the periodically-poled gain section is calculated by measuring optical parametric amplification (OPA) on a straight waveguide adjacent to the OPO that was poled using the same electrodes (Ext. Fig. <ref>c). In this experiment we couple both 1 µm pump and 1.5 µm signal onto the straight waveguide. We modulate the pump with a 10 kHz square-wave using an acousto-optic modulator (Aerodiode). The periodic pump modulation periodically provides gain to the signal wave, which we measure with a lock-in amplifier. The relationship between measured signal gain and nonlinear efficiency can be derived starting from the coupled wave equations: ∂_z A_p = -ω_p/ω_i√(η_DFG) A_s A_i ∂_z A_s = ω_s/ω_i√(η_DFG) A_p A_i^* ∂_z A_i = √(η_DFG) A_p A_s^*, where A_p,s,i(z) are the power-normalized pump (ω_p), signal (ω_s), and idler (ω_i) amplitudes with units of √(W) and η_DFG is defined as the normalized efficiency and with units of %/(W·cm^2). For optical parametric amplification with an undepleted pump, these equations can be reduced to: ∂_z a_s = γ a_i^* ∂_z a_i = γ a_s^*, where a_s,i are photon flux-normalized signal and idler amplitudes and γ = -i√(ω_s/ω_i)√(η_DFG)A_p(0). The solution to this system is well-known <cit.>: a_s(z) = cosh(|γ|z) a_s(0) -i sinh(|γ|z) a_i^*(0) a_i^*(z) = isinh(|γ|z) a_s(0) + cosh(|γ|z) a_i^*(0), so with zero intial idler input (a_i=0) and fixed poling length L_pol, the telecom amplitude experiences the power gain: signal power gain ≡|a_s(L_pol)|^2-|a_s(0)|^2/|a_s(0)|^2 = cosh^2(|γ|L_pol) - 1 ≈η_DFG(ω_s/ω_i)P_p(0)L_pol^2 in the low gain limit. At a fixed λ_p and pump power, we can sweep λ_s and monitor the OPA gain (Ext. Fig. <ref>d). The OPA gain is maximized when phase-matching is optimized. We then track the maximal phase-matched OPA gain as a function of P_p(0) (Ext. Fig. <ref>e). Fitting signal power gain as a function of pump power (using Eq. <ref>) for known L_pol = 0.93 cm allows extraction of the nonlinear efficiency η_DFG = 43.5 %/(W·cm^2). The simulated normalized efficiency is around 300 %/(W·cm^2). §.§.§ MIR collection efficiency for power sweep We calibrate the relationship between MIR on-chip power and detected voltage in the MIR MCT detector by simultaneously comparing OPA and difference-frequency-generation (DFG) processes in a straight nonlinear waveguide. For this calibration, we send a modulated pump (10 kHz square wave) along with a CW telecom signal wave onto a straight periodically-poled waveguide. 
The modulated pump produces parametric gain modulation in the telecom signal according to Eq. <ref>. Because of photon number conservation, the amplification of telecom photons is also accompanied by generation of the same number of MIR photons by DFG. The expected amount of detected MIR idler power is then: P_i,det = η̃_i,chip-to-det P_i(L_pol) = η̃_i,chip-to-detη_DFGP_p(0)P_s(0)L_pol^2. Dividing Eqs. <ref> and <ref> we can solve for the MIR chip-to-detector collection efficiency: η̃_i,chip-to-det = P_i,det/signal power gain· (ω_i/ω_s)· P_s(0). Using this method eliminates the contribution of any uncertainties in nonlinear efficiency η_DFG and on-chip pump power. The uncertainties are dominated instead by P_i,det and P_s(0). With an on-chip pump power of 47.9 mW and on-chip signal power of P_s(0) = 0.8 ± 0.06 mW, we measure a telecom signal gain = 3.28 %. We also measure DFG idler of P_i,det=17.8 ± 1.1 nW. Combining these values leads to a MIR chip-to-detector collection efficiency of η̃_i,chip-to-det = (0.16 ± 0.016)%. Dividing out the attenuation from the OD1 filter and 50:50 beamsplitter, this means that the MIR chip-to-fiber collection efficiency is around 3%. §.§.§ Telecom collection efficiency for power sweep For the telecom calibration, we first in-couple and out-couple 1550 nm telecom light onto a straight waveguide using Hi1060 lensed fibers. By measuring the power sent into the input fiber and collected from the output fiber, we extract a telecom fiber-to-chip power coupling efficiency of (30%± 2) %. Then we switch the output telecom detection chain to that shown in Ext. Fig. <ref>. By comparing the collected telecom power measured directly before the InGaAs telecom detector to the known on-chip power, we extract the telecom chip-to-detector collection efficiency of (2 ± 0.2)%, which is on the same order as that for MIR (previous section). We also calibrate the detector conversion efficiency to be 6.2 V/mW, allowing for measured voltage to be converted to on-chip power. §.§ Pump resonance characterization Using the detection setup shown in Ext. Fig. <ref>a (more detail described in Ext. Fig. <ref>), pump resonances are monitored for pump powers below the OPO threshold and are plotted in Ext. Fig. <ref>b. To fit these curves, we develop a simple model for the pump cavity. In steady-state, the intracavity pump field A_p,cav obeys the equation: A_p,cav = √(T_p)A_p,in + √(R_p)√(1-ℓ_p)A_p,cave^-ikL where R_p is the pump power coupling ratio across the waveguide gap inside the intracavity coupler, T_p = 1-R_p is the pump power transmission ratio of light that stays on the same waveguide through the coupler, A_p,in is the pump amplitude on the input waveguide, ℓ_p is the round-trip power loss of the pump within the cavity excluding the intracavity coupler region, k is the propagation constant of pump, and L is the round-trip cavity length. Hence the intracavity pump power buildup is found to be of the common Airy function form: |A_p,cav/A_p,in|^2 = B/1+(2F/π)^2sin^2(kL/2) where B = T_p/(1-√(R_p (1-ℓ_p)))^2 is a constant scaling factor that represents the pump power buildup on-resonance and (2F/π)^2 = 4√(R_p (1-ℓ_p))/(1-√(R_p (1-ℓ_p)))^2. Here, F represents the cavity finesse. The output power |A_p,out|^2 will be directly proportional to the intracavity power. The pump resonance curve shape is entirely determined by F, while any collection/detection efficiencies can be incorporated into the scaling factor B. Since we only want to determine F, we fit the resonances measured in Ext. Fig. 
<ref>b to Eq. <ref> along with a constant offset factor that comes from stray light coupling into the multimode collection fiber. The resultant curve fits are plotted in Ext. Fig. <ref>b along with the fitted value of finesse F. The unfitted peaks do not contribute to OPO and thus represent TM modes. The fitted value of finesse varies from 2.5–3.5 (Ext. Fig. <ref>c). The variation in finesse arises because the pump cavity spectrum is sensitive to small temperature variations. From the value of finesse, Eq. <ref> can be solved for the total pump power recirculation, ζ = R_p(1-ℓ_p). For the fitted values of finesse, ζ ranges between 8–14%. Given ζ, we can estimate the pump power buildup inside the cavity on resonance using Eq. <ref> and assuming T_p = 1-R_p. R_p is only known if we assume a value of ℓ_p. We can reasonably assume ℓ_p is small and similar to the round-trip loss at telecom wavelengths (12% for measured Q_tot = 1.5×10^6). In this regime, the pump losses are dominated by the intracavity coupler, and R_p≈ R_p(1-ℓ_p) (Ext. Fig. <ref>d). With this result in Eq. <ref>, from Ext. Fig. <ref>d we find that the pump power buildup on resonance ranges from 1.75–2.15 for the fitted values of total power recirculation ζ = 8-14%. §.§ Power sweep modeling To model the power out vs. power in data presented in Fig. <ref>b,c, we solve numerically Eq. <ref> for field evolution through the periodically poled gain section for many round trips until the device reaches steady state. To implement this, for the first round trip we initialize pump, signal, and idler amplitudes inside the cavity at z=0 (beginning of the poled region) as: A^(k=1)(z=0) ≡[ A^(1)_p(z=0); A^(1)_s(z=0); A^(1)_i(z=0) ] = [ √(P_p,in(1-R_p)); A_s,0; 0 ] where the superscript k=1 denotes the first round-trip, A_s,0 is a small value representing the random subthreshold signal field fluctuations, and the factor of (1-R_p) within A^(1)_p(0) comes because the pump power injected onto the chip P_p,in needs to be multiplied by (1-R_p) to represent the pump power that enters the periodically poled section (see Ext. Fig. <ref>a). Next, the three waves are propagated through the periodically poled region by numerically solving Eq. <ref>, yielding A^(k=1)(z=L_pol), where η_DFG is assumed to be 40 %/(W·cm^2) (see Sec. <ref>) and L_pol = 0.93 cm. For successive round-trips, k>1 and the inital amplitudes at z=0 are: A^(k)(0) = [ √(P_p,in(1-R_p)) + A^(kk-1)_p(L_pol) √(1-ℓ_p)√(R_p); A^(kk-1)_s(L_pol) √(1-ξ_s); 0 ] where (1-ℓ_p)R_p is the round-trip pump power recirculation, chosen to be 11% (see Sec. <ref>), and ξ_s ≈ 12% is the round-trip signal power loss based on measured Q_tot (Fig. 1c). The idler is explicitly assumed not to resonate. We run the simulation for N_RT = 5000 round trips, which allows the system to reach steady-state. The outputs of the simulation used for Fig. <ref>b,c are steady-state idler output power |A^(N_RT)_i(L_pol)|^2 and steady-state intracavity signal power |A^(N_RT)_s(L_pol)|^2. §.§ Coarse tuning measurements The obtain the OPO coarse tuning data presented in Fig. <ref>a-c and Ext. Fig. <ref>e, a portion of the output light is coupled into an OSA as shown in Ext. Fig. <ref>. For each temperature, the pump wavelength is tuned coarsely in 2–5 nm steps from 1040–1090 nm. At each coarse wavelength step, the pump wavelength is finely tuned until oscillation occurs, then wide-spanning OSA scans are taken. The noise floor of the scans is around -70 dBm. The peaks of the OSA scan are then extracted and plotted as in Fig. 
<ref>a. For temperatures above 100 °C, the device has clean, non-degenerate output at 1.5 µm and 3 µm, with typical measured power around -40 to -30 dBm after being coupled into the OSA. §.§ Near-degenerate OPO: measurement and simulation For temperatures below 100 °C, the OPO approaches degenerate operation. Because of the broad gain bandwidth near degeneracy, the device can oscillate at different OPO wavelengths with slight perturbations to pump wavelength within a given coarse wavelength step. Moreover, the device can sometimes exhibit multimode oscillation with 2 or more signal/idler mode pairs. To capture all the wavelengths the device oscillates at, we take several OSA scans for each coarse wavelength step. In Fig. <ref>a, Fig. <ref>d, and Ext. Fig. <ref>e we plot the locations of OSA trace peaks with peak power >-45 dBm. Choosing our specific waveguide geometry enables near-degenerate OPO operation near the zero-GVD point. To illustrate this, we simulate the OPO tuning curves for the design geometry: 875 nm LN film thickness, 600 nm etch, and 1.95 µm top width. Simulating the modal effective index for pump wavelengths 1000–1100 nm and signal/idler wavelengths from 1400–3500 nm allows us to plot the phase mismatch Δ k_0 = k(λ_p) - k(λ_s) - k(λ_i) vs. OPO signal wavelength (Ext. Fig. <ref>a-b). Temperature-dependent refractive indices of LN are obtained from Umemura et. al. <cit.> and of sapphire from Thomas et. al. <cit.>. In both simulation temperatures 70 and 80 °C, the Δ k_0(λ_s) curves exhibit upward curvature near degeneracy for λ_p≤1060 nm that gradually flattens as λ_p increases to 1100 nm. The curvature of Δ k_0(λ_s) at degeneracy is directly related to the GVD, which can be seen by Taylor-expanding the phase mismatch around the degenerate frequency (≡Δω = 0): Δ k_0 (Δω) = k(ω_p) - k(ω_p/2 + Δω) - k(ω_p/2 - Δω) ≈Δ k_0(Δω=0) - ( β_2 )_ω_p/2 (Δω)^2 where the GVD β_2 = ∂^2 k /∂ω^2. Hence positive curvature of Δ k_0 indicates anomalous dispersion (β_2 < 0), and as β_2→ 0, the phase mismatch curves should flatten. This is verified in Ext. Fig. <ref>a-b by plotting GVD as a function of wavelength. To quasi-phasematch the process, we include the poling period Λ(T). Incorporating the thermal expansion of LN at temperature T [degree C] and the period at 25 °C Λ_0 = 6.57 µm, results in Λ(T) = Λ_0[1+(1.59×10^-5)(T-25) + (4.9×10^-9)(T-25)^2] <cit.>. With the addition of the periodic poling the total phasematch becomes Δ k = Δ k_0 - G where G = 2π/Λ(T). The signal-wave gain from propagation through the periodically poled region can be found analytically by solving Eq. <ref> in the presence of total phase mismatch Δ k <cit.>: a_s(z)e^iΔ k z/2 = [ cosh(gz) + iΔ k/2gsinh(gz)] a_s(0), where g = |γ|√(1-(Δ k/2|γ|)^2). The simulated gain vs. Δ k results are plotted in Ext. Fig. <ref>a-b for P_pump=600 mW, η_DFG=40 %/(W·cm^2), and L=0.93 cm. Plotting the signal gain experienced for each combination of λ_p and λ_s constructs the OPO tuning color plots. At 70 °C, the Δ k(λ_s) curves with strong upward curvature and hence anomalous dispersion experience the highest gain, leading to a U-shaped tuning curve (Ext. Fig. <ref>a). In contrast, at 80 °C, the Δ k(λ_s) curves with the highest gain have flat curvature and hence near-zero-GVD, leading to a T-shaped tuning curve (Ext. Fig. <ref>b) and ultrabroad OPA gain bandwidth. To match the experimental tuning behavior with simulation, we include a small variation in film thickness across the poled waveguide length. 
Namely, we assume the LN film thickness Y(z) = Y_0 + Δ Y(z) where the nominal film thickness is 875  nm and the spatially-dependent film thickness change Δ Y(z) = -Δ Y_tot/2 + bz + az^2 where the total film thickness variation Δ Y_tot = 4 nm, b = b_0 - aL, a = ϵ b_0 and b_0 = (Δ Y(L) - Δ Y(0))/L. The 4 nm simulated total film thickness variation over 9.3 mm length was chosen as it is the minimum film thickness where simulated results qualitatively match experiment. Moreover, the chosen thickness variation agrees well with thickness measurements performed by the LN-on-sapphire vendor, which indicate around 0.4 nm LN thickness variation per mm length. The factor ϵ describes the curvature of the film thickness variation, as depicted in Ext. Fig. <ref>c. To calculate how film thickness variation profiles affect the signal wave gain, we solve Eq. <ref> numerically in the presence of spatially-dependent phase mismatch Δ k = Δ k_0 - G + Δ k_h(z) where the phase mismatch due to height variations Δ k_h(z) = dΔ k_h/dYΔ Y(z) and the ratio of phase mismatch shift to change in film thickness dΔ k_h/dY = 8.5 cm^-1 / nm is found from simulation. Specifically, we solve: d/dz a_s = γexp[-i∫_0^z Δ k(z')dz' ] a_i^* d/dz a_i^* = γ^* exp[i∫_0^z Δ k(z')dz' ] a_s The resultant signal gain as a function of the constant part of phase mismatch Δ k_0 - G is plotted in Ext. Fig. <ref>d. For linear film thickness variation (ϵ = 0), the gain curve vs. phase mismatch has broadened, reduced in magnitude, and exhibits two major peaks instead of one major peak found when Δ k_h(z) = 0 (Ext. Fig. <ref>a,b). The addition of quadratic film thickness variation (ϵ > 0) makes the gain curve slightly asymmetric, which matches experimental results. Simulated OPO tuning curves along with experimental data for T = 30–87.5 °C, Λ_0 = 6.58 µm, and ϵ=1 are shown in Ext. Fig. <ref>e. The experimental data qualitatively matches simulation. As temperature increases in both simulation and experiment, the observed OPO output tuning curves shift upwards in the plot (towards longer pump wavelengths). 40 °C (labeled with ⋆) and 70 °C (⋆⋆) both present U-shaped tuning curves, while 80 °C (⋆⋆⋆) presents a T-shaped tuning curve. To understand the results, we highlight the simulation results of 40 °C, 70 °C, and 80 °C (Ext. Fig. <ref>f-h). At 40 °C, the U-shaped tuning curve arises from a secondary gain peak amplifying anomalous dispersion regions (Ext. Fig. <ref>f). By 70 °C, a stronger U-shaped tuning curve arises from the main gain peak amplifying the same anomalous dispersion regions (Ext. Fig. <ref>g). Finally, at 80 °C, the U-shape transforms into a T-shape when the main gain peak amplifies regions of near-zero-GVD, leading to broadband OPO in both simulation and experiment (Ext. Fig. <ref>h). §.§ Fine tuning characterization OPO fine tuning data (Fig. <ref>e, Ext. Fig. <ref>a) is obtained by taking ∼10 100-pm-wide pump wavelength piezo scans and recording the wavelength of both pump and generated OPO signal. To do so, the measurement setup in Ext. Fig. <ref> is modified; generated telecom-wavelength OPO output is collected in a single-mode lensed fiber to increase wavelength resolution. Directly connecting this fiber to a rapidly-scanning OSA allows determination of the telecom-band wavelength. The plotted idler wavelength is calculated based on energy conservation. OPO wavelengths collected during a single OPO “cluster" (Fig. <ref>a) are plotted as dots connected by lines. Data is recorded at 11 temperatures between 150.7–151.7 °C. 
The presented fine tuning data (Ext. Fig. <ref>a) exhibits three regimes: (1) Clean tuning from λ_p=1041–1041.5 nm at λ_s≈1547.5 nm, (2) Strong mode hop at λ_p= 1041.6 nm between λ_s≈ 1547.5, 1549.5 nm, and (3) Clean tuning from λ_p=1041.7–1042 nm at λ_s≈1549.5 nm. The signal mode transition from λ_s = 1547.5 nm → 1549.5 nm as λ_p = 1041 nm → 1042 nm comes because changing λ_p shifts the nonlinear gain spectrum. To clarify how the shift in gain spectrum affects which modes oscillate, we measure the amplification experienced by cavity modes (Ext. Fig. <ref>b-e). With no pump laser power, the cavity mode spectrum is obtained by sweeping the wavelength of the tunable telecom laser (Santec TSL-710) input to the chip and detecting the output in an InGaAs photodiode (Newport 1623). The reduced-height peaks in the mode-spectrum structure comes from mode crossings. Then we pump the device below the OPO threshold while scanning the tunable telecom laser and measuring the output at telecom wavelengths. The pump laser amplifies all the telecom cavity modes within its gain bandwidth (Ext. Fig. <ref>c-e). The highest-peaked modes in these plots are those that experienced the highest net gain and thus oscillate when the device is above threshold. When λ_p=1041.33 nm (Ext. Fig. <ref>c), the cavity modes near 1547.5 nm experience the most gain. When λ_p shifts near 1041.6 nm, two groups of modes near 1547.5 and 1549.5 nm compete for gain (Ext. Fig. <ref>d), explaining the mode crossing observed in (Ext. Fig. <ref>a). Finally, when λ_p= 1041.94 nm, the modes at 1549.5 nm have high gain. §.§ Methane OPO spectroscopy As shown in Fig. <ref>a, the measurement setup for methane spectroscopy slightly modifies the general setup shown in Ext. Fig. <ref>. The methane cell (7.5 cm length, 20 Torr, Triad Technologies) is placed before a MCT detector (Thorlabs PDAVJ5), while the reference arm uses an InAsSb detector (Thorlabs PDA07P2). The generated signal wavelength is measured by heterodyning it with blueshifted output from a tunable telecom laser (Santec TSL-710) on a 12 GHz fast photodiode (Newport 1554-B). The wavelength of the reference telecom laser is read directly before and after a given measurement sweep using the wavemeter (Bristol Instruments Model 621). The beatnote is read using an RF spectrum analyzer (Rohde and Schwartz FSW26). In Fig. <ref>c we present methane absorption features for four separate scans. The MIR wavelength axis is obtained by combining the interpolated λ_p and λ_s axes by energy conservation. The presented scans in Fig. 4c are background-corrected to match the theoretical curves from HITRAN by fitting the experimental transmission data to T(λ) = V_1(λ) / V_2(λ) = x_1 e^-α(λ) L + x_2 where the fitting parameter x_1 accounts for transmission/detection efficiency differences in the two paths generating voltages V_1(t) and V_2(t), while x_2 accounts for MIR light hitting detector V_1 that did not couple into the gas cell (the MIR beam diameter was ∼2x the gas cell diameter when viewed with an IR viewing card). The background fitting process did not affect the wavelength axis. As a result, the experimental backround-subtracted data plotted in Fig. <ref>c is (T(λ) - x_2)/x_1. HITRAN data plotted in Fig. <ref>c comes from calculating absorption coefficient α(λ) from the HITRAN database for methane at 20 Torr, then plotting T=exp(-α(λ) L). 
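The background correction described above is a standard two-parameter fit; a minimal sketch of how it could be done with SciPy is shown below, assuming the HITRAN absorption coefficient has already been evaluated on the measured wavelength grid. The function and variable names are illustrative, not the authors' processing code.

```python
import numpy as np
from scipy.optimize import curve_fit

def background_corrected(lam_nm, T_meas, alpha_per_cm, L_cm=7.5):
    """Fit T(lambda) = x1 * exp(-alpha(lambda) * L) + x2 and return (T - x2) / x1.

    lam_nm:       measured MIR wavelength axis
    T_meas:       raw relative transmission V_1 / V_2
    alpha_per_cm: HITRAN absorption coefficient evaluated at lam_nm [1/cm]
    """
    def model(lam, x1, x2):
        # alpha_per_cm is already aligned with lam_nm, so no re-interpolation needed
        return x1 * np.exp(-alpha_per_cm * L_cm) + x2
    (x1, x2), _ = curve_fit(model, lam_nm, T_meas, p0=[1.0, 0.0])
    return (T_meas - x2) / x1
```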
§.§ Resonant DFG spectroscopy For resonant DFG spectroscopy, we use the same setup as for methane OPO spectroscopy but instead use a gas cell of ammonia (1.5 cm length, 740 Torr, purchased from Wavelength References). λ_p is determined with a wavemeter before scanning λ_s from 1540–1620 nm, which is assumed to vary linearly across the scan range. Generated experimental data, consisting of discrete peaks of generated MIR output (see Fig. <ref>e), is processed by fitting Lorentzians to each peak, then calculating the transmission for the k-th peak T(λ_k) based on the ratio of peak areas from the sample path and reference path. ∼10 scans are taken, then averaged, to improve SNR. Experimental absorbance data plotted in Fig. <ref>f is A=-log(T(λ_k)/T_bg) where T_bg = 0.7 to account for the difference in transmission between sample and reference paths. HITRAN data plotted in Fig. <ref>f is obtained by calculating the absorption coefficient α(λ) from the HITRAN database for ammonia at 740 Torr then plotting A = α(λ) L. § ACKNOWLEDGEMENTS We thank NTT Research for their financial and technical support. We thank the United States government for their support through the Department of Energy Grant No. DE-AC02-76SF00515, the Defense Advanced Research Projects Agency (DARPA) LUMOS program (Grant No. HR0011-20-2-0046), the DARPA Young Faculty Award (YFA, Grant No. D19AP00040), the U.S. Department of Energy (Grant No. DE-AC02-76SF00515) and Q-NEXT NQI Center, and the U.S. Air Force Office of Scientific Research MURI grant (Grant No. FA9550-17-1-0002). A.Y.H. acknowledges NSF GRFP, Grant. No. 2146755. H.S.S. acknowledges support from the Urbanek Family Fellowship, and V.A. was partially supported by the Stanford Q-Farm Bloch Fellowship Program and the Max Planck Institute in Erlangen. This work was also performed at the Stanford Nano Shared Facilities (SNSF), supported by the National Science Foundation under award ECCS-2026822. We also acknowledge the Q-NEXT DOE NQI Center and the David and Lucille Packard Fellowship for their support. We thank Leo Hollberg for many useful discussions and lending the cells for the gas spectroscopy experiment. § AUTHOR CONTRIBUTIONS A.Y.H., H.S., C.L., V.A., and A.H.S-N. designed the device. A.Y.H., H.S, and T.P. fabricated the device. A.Y.H, H.S., T.P.M., and T.P. developed fabrication procedures together. A.Y.H. measured the device. A.Y.H. analyzed data with support from H.S., M.J., and J.M. M.M.F. and A.H.S.-N. advised the project and provided experimental/theoretical support. A.Y.H. drafted the manuscript with input from all the authors.
http://arxiv.org/abs/2307.07309v2
20230714123835
On the dynamic asymptotic dimension of étale groupoids
[ "Christian Bönicke" ]
math.DS
[ "math.DS", "math.MG", "math.OA", "22A22, 37B05, 51F30" ]
On the dynamic asymptotic dimension of étale groupoids Christian Bönicke School of Mathematics, Statistics and Physics, Newcastle University, Newcastle upon Tyne NE1 7RU, United Kingdom [email protected] We investigate the dynamic asymptotic dimension for étale groupoids introduced by Guentner, Willett and Yu. In particular, we establish several permanence properties, including estimates for products and unions of groupoids. We also establish invariance of the dynamic asymptotic dimension under Morita equivalence. In the second part of the article, we consider a canonical coarse structure on an étale groupoid, and compare the asymptotic dimension of the resulting coarse space with the dynamic asymptotic dimension of the underlying groupoid. § INTRODUCTION Dynamic asymptotic dimension is a dimension theory for dynamical systems introduced by Guentner, Willett and Yu in <cit.> in the general framework of étale groupoids. Since its inception, the concept has found numerous applications: it provides upper bounds on the nuclear dimension of the resulting groupoid C^*-algebras <cit.>, even in the presence of a twist <cit.>. It is also closely related to the diagonal dimension of sub-C^*-algebras recently introduced in <cit.>. In <cit.> it was shown that the groupoid homology groups of a totally disconnected étale groupoid vanish in all degrees exceeding the dynamic asymptotic dimension of the groupoid. Estimates on the dynamic asymptotic dimension (denoted henceforth by dad(·)) are known for many concrete classes of examples (see e.g. <cit.>). However, the basic theory is still not very well developed in the literature. The aim of this article is to remedy this situation and provide a set of general results to compute the dynamic asymptotic dimension of étale groupoids. To this end we prove several permanence properties of dynamic asymptotic dimension, which are summarised in the theorem below. Let G and H be étale Hausdorff groupoids. * If G and H are Morita equivalent, then dad(G)=dad(H), * dad(G)= max_U∈𝒰dad(G|_U) for any finite open cover 𝒰 of G^0, * dad(G× H)≤dad(G)+dad(H). Let us remark that it is straightforward to obtain multiplicative estimates for the dynamic asymptotic dimension of a product of groupoids. The main difficulty in proving (3) lies in reducing this crude estimate and obtaining an additive estimate instead. Results of this kind have a long history: let us mention the classical result <cit.> in topological dimension theory, the results of Dranishnikov-Bell for Gromov's asymptotic dimension <cit.> and the results in <cit.> for Assouad-Nagata dimension. Our proof is more closely modelled on the beautiful approach presented in <cit.>, who in turn attribute some of the underlying ideas to Kolmogorov and Ostrand. We should mention that Pilgrim in <cit.> independently proved similar results for the transformation group case. However, the most natural context for dynamic asymptotic dimension is the world of groupoids. In the second part of this article, we investigate the relation between the dynamic asymptotic dimension of a groupoid G and its asymptotic dimension, when equipped with a canonical coarse structure denoted by ℰ_G. 
It was recently shown in <cit.> that free actions Γ↷ X of discrete groups on zero-dimensional second countable compact Hausdorff spaces satisfy dad(Γ⋉ X)∈{asdim(Γ),∞}. In fact it is believed, that such a result should be true beyond the zero-dimensional setting. Evidence towards this can be found in the results in <cit.>. The most natural context for the notion of dynamic asymptotic dimension however is the world of étale groupoids, and in this generality nothing seems to be known so far. Hence the second part of this article is concerned with initiating the study of this question more generally. Our main contribution is the following result. Let G be a σ-compact principal étale groupoid with compact and totally disconnected unit space, and let ℰ_G be the canonical coarse structure on G. Then asdim(G,ℰ_G)≤dad(G). If we further assume dad(G)<∞ then G=asdim(G,ℰ_G). Acknowledgement: I am grateful to Anna Duwenig and Rufus Willett for helpful comments on an earlier version of this paper. § PERMANENCE PROPERTIES OF DYNAMIC ASYMPTOTIC DIMENSION Given a groupoid G, we will denote its set of units by G^0 and let r,s G→ G^0 denote the range and source map of G, respectively. Throughout this text we will only deal with locally compact Hausdorff groupoids that are étale, meaning that the range and source maps are local homeomorphisms. Note, that open subgroupoids of étale groupoids are automatically étale again. Given a groupoid G and a subset A⊆ G^0, the restriction of G to A is the subgroupoid G|_A{g∈ G| s(g),r(g)∈ A} of G. Given an étale groupoid G and a subset K⊆ G, we will write ⟨ K⟩ for the subgroupoid of G generated by K. If K is open, then this subgroupoid is automatically open. With this in mind we can recall the definition of dynamic asymptotic dimension given in <cit.>. Let G be an étale Hausdorff groupoid and d∈ℕ. We say that G has dynamic asymptotic dimension at most d, if for every open relatively compact subset K⊆ G, there exists a cover of s(K)∪ r(K) by d+1 open sets U_0,…, U_d such that for each 0≤ i≤ d, the groupoid ⟨ K∩ G|_U_i⟩ is relatively compact in G. We will first list some elementary results. To make sense of the statement of the following lemma, recall that given a groupoid G with a non-compact unit space, we can form the Alexandrov groupoid by considering G^+ G∪ (G^0)^+, that is we take the Alexandrov or one-point compactification of the unit space adding no further arrows (confer <cit.> for details). The following Lemma below can be proven in an elementary way just from the definition above (see <cit.> for details). Let G be an étale Hausdorff groupoid. Then the following hold: * dad(H)≤dad(G) for all closed étale subgroupoids H⊆ G, * dad(G|_U)≤dad(G) for all open subsets U⊆ G^0, and * dad(G^+)=dad(G). Here is another useful permanence property that can be proven directly. Let G be an étale groupoid such that G=⋃_n∈ G_n for a nested sequence of open subgroupoids G_n⊆ G satisfying G_n⊆ G_n+1. Then dad(G)≤lim infdad(G_n). We may assume that dlim infdad(G_n)<∞ since otherwise there is nothing to show. Let K⊆ G be an open, relatively compact subset. We may assume without loss of generality that s(K)∪ r(K)⊆ K. Since all the G_n are open, compactness of K provides an N'∈ such that K⊆K⊆ G_N'. By definition of the lim inf there exists N≥ N' such that dad(G_N+1)≤ d. Hence there exist open subsets U_0,… ,U_d⊆ G^0_N+1 covering s(K)∪ r(K) such that the closure of the subgroupoid H_i⟨ K∩ G|_U_i⟩ in G_N+1 is compact. 
Note that H_i⊆ G_N and the closure of H_i in G coincides with the closure of H_i in G_N+1 since G_N⊆ G_N+1. Hence dad(G)≤ d as desired. §.§ Morita invariance Besides the obvious notion of isomorphism for étale groupoids, there are various notions of (Morita) equivalence in the literature and most of them are equivalent to each other. Let us recall the formulation that will be most useful for our purposes. Given an étale groupoid G and a surjective local homeomorphism ψ X→ G^0 from another locally compact Hausdorff space X onto G^0, we can define the ampliation, or blow-up of G with respect to ψ as G^ψ={(x,g,y)∈ X× G× X|ψ(x)=r(g), ψ(y)=s(g)}. It is routine to check that G^ψ with the multiplication (x,g,y)(y,h,z)=(x,gh,z) and inverse map (x,g,y)^-1=(y,g,x) is an étale groupoid in its own right when equipped with the relative topology from X× G× X. We will say that two étale groupoids G and H are equivalent if they admit isomorphic blow-ups, i.e. if there exist locally compact Hausdorff spaces X and Y together with surjective local homeomorphisms ψ:X→ G^0 and ϕ:Y→ H^0 such that G^ψ≅ H^φ. We refer the reader to <cit.> for a detailed overview of other notions of (Morita) equivalence and in particular <cit.>, where it is proved that many of the most common notions coincide. To prove that the dynamic asymptotic dimension is invariant under Morita equivalence, we first need another permanence property that is interesting in its own right. Recall, that a continuous homomorphism π:G→ H between two étale groupoids is called locally proper, if its restriction to G|_C is proper for every compact subset C⊆ G^0. Examples include the inclusion map of a closed étale subgroupoid H↪ G, the inclusion map G|_U↪ G for an open subset U⊆ G^0, or the projection map G⋉ X→ G, where G⋉ X is the transformation groupoid associated with a continuous action of an étale groupoid G on a locally compact Hausdorff space X. Let G and H be étale groupoids and π:G→ H a continuous and locally proper homomorphism. Then dad(G)≤dad(H). We may assume, that dad(H)=d<∞, since otherwise there is nothing to prove. Let K⊆ G be an open relatively compact subset. Then π(K) is a relatively compact subset of H. Let C be an open relatively compact subset of H with π(K)⊆ C. By assumption we may find open subsets U_0,…, U_d of H^0 which cover s(C)∪ r(C), such that for each 0≤ i≤ d the the subgroupoid H_i⟨ C∩ H|_U_i⟩ of H is relatively compact. If we let V_iπ^-1(U_i)∩ G^0, then V_i is an open subset of G^0 by continuity of π. Since π is a homomorphism, we clearly have s(K)∪ r(K)⊆⋃_i=0^d V_i. We claim that ⟨ K∩ G|_V_i⟩ is a relatively compact open subgroupoid of G. Note that ⟨ K∩ G|_V_i⟩⊆π^-1(H_i)∩ G_| s(K)∪ r(K). Since π is locally proper, the latter is a compact subgroupoid of G. This completes the proof. Note that an application of this result to the examples mentioned above gives an alternative way to prove items (1) and (2) in Lemma <ref> above. We are now ready to proceed with the proof of the main result of this section: The dynamic asymptotic dimension is invariant under (Morita) equivalence. Let ψ X→ G^0 be a surjective local homeomorphism and let G^ψ={(x,g,y)∈ X× G× X|ψ(x)=r(g), ψ(y)=s(g)} be the ampliation of G with respect to ψ. We are going to show G^ψ=G. It is routine to check that the canonical projection π G^ψ→ G is locally proper. Hence Proposition <ref> immediately yields the inequality G^ψ≤G. For the reverse inequality suppose dG^ψ<∞ and let K⊆ G be an open relatively compact subset. 
Using the assumption that ψ is a local homeomorphism, find C⊆ X open and relatively compact such that s(K)∪ r(K)⊆ψ(C). Then the set L (C× K× C)∩ G^ψ is open and relatively compact in G^ψ. Hence we can find U_0,…, U_d covering s(L)∪ r(L) such that ⟨ L ∩ G^ψ |_U_i⟩ is open and relatively compact in G^ψ. Then the sets V_iψ(U_i) form an open cover of s(K)∪ r(K). Indeed, given u∈ s(K) for example, there exists a g∈ K such that s(g)=u. Since s(K)∪ r(K)⊆ψ(C) there exist x,y∈ C such that ψ(x)=r(g) and ψ(y)=s(g)=u. So (x,g,y)∈ L. In particular, y∈ s(L)∪ r(L) so y∈ U_i for some i. But then u=s(g)=ψ(y)∈ψ(U_i)=V_i. Moreover, when g∈ K such that s(g),r(g)∈ V_i, then there exist x,y∈ U_i such that (x,g,y)∈ U_i× K× U_i⊆ L. So, g∈π(⟨ L ∩ G^ψ |_U_i⟩). It follows that ⟨ K∩ G|_V_i)⊆π(⟨ L ∩ G^ψ |_U_i⟩) and hence ⟨ K∩ G|_V_i) is relatively compact in G. This verifies the inequality G≤G^ψ and completes the proof. §.§ A union theorem This section is dedicated to the following result: Let G be an étale groupoid and 𝒰 a finite open cover. Then dad(G)=max_U∈𝒰dad(G|_U). The main technical observation needed in the proof is isolated in the following Lemma: Let G be an étale groupoid and V⊆ G^0 an open subset. Suppose V=V_0∪ V_1 is the union of two open subsets V_0,V_1⊆ G^0 and that K_0⊆ K_1⊆ K_2⊆ G are open, relatively compact, satisfying K_i=K_i∪ K_i^-1∪ s(K_i)∪ r(K_i) such that H_0⟨ K_0∩ G|_V_0⟩⊆ K_1 and H_1⟨ K_1^3∩ G|_V_1⟩⊆ K_2. Then ⟨ K_0∩ G|_V⟩⊆ K_2^5. Let g∈⟨ K_0∩ G|_V⟩. Then we can write g=g_1⋯ g_m where g_i∈ K_0 and s(g_i),r(g_i)∈ V for all 1≤ i≤ m. We first note that if 1≤ k<l≤ m such that r(g_k),s(g_l)∈ V_1, then g_k⋯ g_l∈ H_1. Armed with this observation we have the following cases: * If r(g_k)∈ V_0 for all 1≤ k≤ m, then g_1⋯ g_m-1∈ H_0 and so g∈ H_0K_0⊆ K_2^5. * Similarly, if s(g_k)∈ V_0 for all 1≤ k≤ m, then g∈ K_0 H_0⊆ K_2^5. * In all other cases, there must exist indices s,t such that g_s is the first element such that r(g_s)∈ V_1 and g_t is the last element such that s(g_t)∈ V_1. In this case g= (g_1⋯ g_s-2)g_s-1(g_s⋯ g_t)g_t+1(g_t+2⋯ g_n) ∈ H_0K_0H_1K_0H_0⊆ K_1^2 K_2 K_1^2⊆ K_2^5. It follows from Lemma <ref> that dad(G|_U)≤dad(G) for all U∈𝒰. Hence we only need to verify the inequality dad(G)≤max_U∈𝒰dad(G|_U). Inductively, it will be enough to show this for a cover by two open sets U_0 and U_1. Moreover, using Morita invariance, we may pass to the groupoid G[𝒰] to assume that the U_0 and U_1 are disjoint, i.e. a clopen cover. Given an open, relatively compact subset K⊆ G the set K∩ G|_U_0 is open and relatively compact in G|_U_0. So by assumption, we may find an open cover V_0,0,…, V_0,d of (s(K)∪ r(K))∩ U_0 such that ⟨ K∩ G|_V_0,i⟩ is relatively compact. Now set K_1 K∪⋃_i=0^d ⟨ K∩ G|_V_0,i⟩. Then K_1 is open and relatively compact in G. Using our assumption on G|_U_1 now, we can find an open cover V_1,0,…, V_1,d of s(K_1)∪ r(K_1) such that ⟨ K_1^3∩ G|_V_1,i⟩ is relatively compact for all 0≤ i≤ d. Set V_i V_0,i∪ V_1,i. Then V_0,…, V_d is an open cover of s(K)∪ r(K) and Lemma <ref> implies that for each 0≤ i≤ d the groupoid ⟨ K∩ G|_V_i⟩ is relatively compact. Note, that we can also combine Theorem <ref> with Proposition <ref> to obtain an estimate for infinite covers. We will leave the details to the reader. §.§ A product theorem The goal of this subsection is to establish an estimate for the dynamic asymptotic dimension of the product of two étale groupoids. Let G and H be étale Hausdorff groupoids. Then G× H≤G+H. 
Note, that one can find a multiplicative estimate for G× H in a straightforward fashion from the definitions. The proof of the additive formula in Theorem <ref> however is more involved. Let 𝒪_c(G) denote the set of open relatively compact subsets K⊆ G such that K=K∪ K^-1∪ s(K)∪ r(K). If G^0 is compact we will additionally require that G^0⊆ K. Let G be an étale groupoid with compact unit space. A d-dimensional control function for G is a map D_G:𝒪_c(G)→𝒪_c(G) such that for every K∈𝒪_c(G) there exists an open cover U_0,…, U_d of G^0 such that the groupoid ⟨ K∩ G|_U_i⟩ is contained in D_G(K) for all 0≤ i≤ d. Note that a d-dimensional control function for G exists if and only if dad(G)≤ d. The following definition and results are inspired by <cit.>. In order to state them let us introduce the following terminology: A collection {U_i| i∈ I} of open subsets of topological space X is called an n-fold open cover if {i∈ I| x∈ U_i} has cardinality at least n∈ℕ. Let k≥ d≥ 0. A (d,k)-dimensional control function for G is a map D_G:𝒪_c(G)⟶𝒪_c(G) such that for every K∈𝒪_c(G) there exists an (k+1-d)-fold open cover U_0,…, U_k of G^0 such that the groupoid ⟨ K∩ G|_U_i⟩ is contained in D_G(K) for all 0≤ i≤ k. We note that a (d,d)-dimensional control function is just a d-dimensional control function in the sense of the previous definition. There is also a version of the dimension control function for continuous homomorphisms: To prove the main technical proposition in this subsection we need the following facts about n-fold covers: Let X be a compact Hausdorff space. * A collection {U_0,…, U_d} of open subsets of X is an n-fold cover of X if and only if {U_i| i∈ F} is a cover of X for every subset F⊆{0,…,d} of cardinality d+2-n. * If {U_0,…, U_d} is an n-fold open cover of X, then there exist open subsets V_i⊆V_i⊆ U_i for all 0≤ i≤ d such that {V_0,…, V_d} is still an n-fold open cover of X. * For the forward direction, let {U_0,…, U_d} be an n-fold open cover of X, and let F⊆{0,…,d} be a subset of cardinality d+2-n. Then F^c has cardinality d+1-(d+2-n)=n-1. Since every x∈ X is contained in at least n of the sets {U_0,…, U_d}, there must exist an i∈ F such that x∈ U_i. Hence {U_i| i∈ F} is a cover of X. Conversely, assume for contradiction that there exists an x∈ X contained in at most n-1 members of the cover. Then F={i| x∈ U_i} has cardinality at most n-1 and it follows that F^c has cardinality at least d+1-(n-1)=d+2-n. By assumption x must then be contained in U_i for some i∈ F^c which contradicts our choice of F. * Using the first part of this Lemma, {U_i| i∈ F} is an open cover for each F⊆{0,…,d} of cardinality d+2-n. Since X is compact, it is normal. Hence for each fixed F, there exist open subsets V_i^F⊆ X such that V_i^F⊆ U_i and such that {V_i^F| i∈ F} still covers X. Let V_i⋃_F∋ iV_i^F. Then V_i is open, V_i⊆ U_i. Moreover, for any finite subset F of cardinality d+2-n the collection {V_i| i∈ F} covers X since each V_i contains the set V_i^F which already form a cover of X. Another application of item (1) concludes the proof. The following proposition shows how to obtain (d,k)-dimensional control functions for every k≥ d starting from a (d,d)-dimensional control function. This is the main technical ingredient needed to prove the main results. Let G be an étale groupoid with compact unit space. If G admits a d-dimensional control function D_G, set D_G^(d) D_G and inductively define functions D_G^(k) for k≥ d by D_G^(k+1)(K)=KD_G^(k)(K^3)K. 
Then D_G^(k) is a (d,k)-dimensional dimension function for all k≥ d. We proceed by induction on k≥ d. Since G admits a d-dimensional control function by assumption, the base case k=d is obvious. Suppose now that D_G^(k) is a (d,k)-dimensional dimension function and let K∈𝒪_c(G) be an open relatively compact subset. Then K^3∈𝒪_c(G) as well. Hence the induction hypothesis provides a (k+1-d)-fold open cover U_0,…, U_k of G^0 such that ⟨ K^3 ∩ G|_U_i⟩⊆ D_G^(k)(K^3) for all 0≤ i≤ k. Let U_i' KU_i be the K-orbit of U_i. Note that U_i⊆ U_i' since G^0⊆ K. Claim: ⟨ K∩ G|_U'_i⟩⊆ KD_G^(k)(K^3)K=D_G^(k+1)(K). Let g∈⟨ K ∩ G|_U_i'⟩. This means that g=g_n⋯ g_1 for g_1,…, g_n∈ K such that s(g_j),r(g_j)∈ U_i'=KU_i. It follows that for each k, there exists a h_j∈ K such that s(h_j)=s(g_j) and r(h_j)∈ U_i. If we set g_j' h_j+1g_jh_j^-1∈ K^3. Note that s(g_j'),r(g_j')∈ U_i and hence it follows that g=h_n+1^-1g_n'⋯ g_1'h_1∈ K ⟨ K^3 ∩ G|_U_i⟩ K⊆ K D^(k)_G(K^3) K=D_G^(k+1)(K). Let V_i⊆V_i⊆ U_i be such that * V_0,…, V_k is still a (k+1-d)-fold cover, and * KV_i⊆ KU_i=U_i'. For a 1-cover this can be found using <cit.>. In the general case we can also follow the proof of this result, but use Lemma <ref> above, when shrinking covers. The additional open set needed at stage k+1 will be the set U_k+1'⋃_S (⋂_j∈ S V_j∖⋃_i∉ SKV_i) where S runs through the subsets of {0,…, k} of cardinality k+1-d. It is clear that U_k+1' is open. We claim that ⟨ K∩ G|_U_k+1'⟩⊆ KD_G^(k)(K^3)K. Suppose that g=g_n⋯ g_1 with g_l∈ K and s(g_l),r(g_l)∈ U_k+1' for all 1≤ l≤ n. Then there exist subsets S_1,…, S_n+1⊆{0,…,k} of cardinality k+1-d such that s(g_l)∈⋂_j∈ S_l V_j∖⋃_i∉ S_lKV_i and r(g_n)∈⋂_j∈ S_n+1 V_j∖⋃_i∉ S_n+1KV_i. Observe, that S_1=… =S_n+1. Indeed, suppose for contradiction that there is some index 1≤ l≤ n such that S_l+1≠ S_l. Then we may assume without loss of generality that there exists an i∈ S_l∖ S_l+1. Since i∈ S_l we have s(g_l)∈ V_i. But then s(g_l+1)=r(g_l)=g_ls(g_l)∈ KV_i⊆KV_i for i∈ S_l+1^c, a contradiction. Since S_l+1=S_l for all 1≤ l≤ n, s(g_l),r(g_l)∈ V_j⊆ U_j for all j∈ S_l+1=S_l and hence g_l∈⟨ K∩ G|_U_j⟩. But then g∈⟨ K∩ G|_U_j⟩⊆ D_G^(k)(K^3)⊆ KD_G^(k)(K^3) K. Finally, we claim U_0',…, U_k+1' is a ((k+1)+1-d)-fold open cover of G^0. We know that V_0,…, V_k is a (k+1-d)-fold by the induction hypothesis. Fix x∈ G^0. If it belongs to k+2-d among the sets U_0',…, U_k' we are done. So let us assume that it belongs exactly to k+1-d of the sets U_0',…, U_k'. To complete the proof we will show that x∈ U_k+1'. To see this note that the assumption together with the fact that V_i⊆ U_i' implies that S={i≤ k| x∈ V_i} has cardinality k+1-d. Moreover, x∈⋂_i∈ SV_i and our hypothesis implies x∉⋃_i∈{0,…,k}∖ SKV_i, which together exactly means that x∈ U_k+1'. We will first prove the result in the case that G^0 and H^0 are compact. We may assume that dad(G) and dad(H) are both finite. Set kdad(G)+dad(H). By Proposition <ref> we may find a (dad(G),k)-dimensional dimension function D_G for G and a (dad(H),k)-dimensional dimension function D_H for H. Now let C⊆ G× H be open and relatively compact. Since increasing C only makes the problem harder, we may assume that C=K× L for open, relatively compact subsets G^0⊆ K⊆ G and H^0⊆ L⊆ H. Now find a (k+1-dad(G))-fold open cover of G^0 such that ⟨ K∩ G|_U_i⟩⊆ D_G(K) for all 0≤ i≤ k and a (k+1-dad(H))-fold open cover V_0,…, V_k of H^0 such that ⟨ L∩ H|_U_i⟩⊆ D_H(L). We claim that the sets U_0× V_0,…, U_k× V_k form an open cover of G^0× H^0. Indeed, let (x,y)∈ G^0× H^0. 
By our choices of the cover (U_i)_i above, the set {i| x∈ U_i} has cardinality at least dad^+1(H) and similarly, the set {i| y∈ V_i} has cardinality at least dad^+1(G). Since both of them are subsets of {0,…, k}, their intersection cannot be empty, which proves our claim. To complete the proof note that ⟨ C∩ (G× H)|_U_i× V_i⟩⊆⟨ K∩ G|_U_i⟩×⟨ L∩ H|_V_i⟩⊆ D_G(K)× D_H(L). Finally, consider the case that G^0 and H^0 are merely locally compact. Note that G^0 and H^0 are open in their respective one-point compactifications and (G^+× H^+)|_G^0× H^0=G× H. Hence Lemma <ref> allows us to compute dad(G× H)≤dad(G^+× H^+)≤dad(G^+)+dad(H^+)=dad(G)+dad(H). In <cit.> the authors prove a multiplicative formula for the nuclear dimension of tensor products of C^*-algebras. Combining Theorem <ref> with the main results in <cit.> yields improved estimates for the nuclear dimension of tensor products of groupoid C^*-algebras: Let (G_1,Σ_1) and (G_2,Σ_2) be two twisted étale groupoids. Then ^+1_nuc(C_r^*(G_1;Σ_1) ⊗ C_r^*(G_2;Σ_2))≤ (dad(G_1)+dad(G_2)+1)((G_1^0)+(G_2^0)+1). We only need to note that C_r^*(G_1,Σ_1)⊗ C_r^*(G_2,Σ_2)≅ C_r^*(G_1× G_2,Σ_1×Σ_2) and apply the results mentioned above. It seems reasonable to expect that there is a general Hurewicz type result for the dynamic asymptotic dimension. To be a little more precise: If π G→ H is a continuous homomorphism between two étale groupoids, we can define the dynamic asymptotic dimension of π as dad(π)sup{dad(π^-1(L))| L⊆ H open subgroupoid s.t. dad(L)=0}. We have the following examples: * If π:G→ H is locally proper, then dad(π)=0. * If π_G:G× H→ G is the projection, then dad(π_G)=dad(H). With these two examples in mind we conjecture that for any continuous groupoid homomorphism π:G→ H we have dad(G)≤dad(π)+dad(H). §.§ Applications to partial actions To illustrate our results, let us discuss a class of étale groupoids arising from partial actions. A partial action of a discrete group Γ on a locally compact Hausdorff space X is a pair θ=((D_γ)_γ∈Γ, (θ_γ)_γ∈Γ) where D_γ⊆ X is an open subset for all γ∈Γ and θ_γ D_γ^-1→ D_γ is a homeomorphism such that D_1=X, θ_1=𝕀_X, and such that θ_γη extends θ_γ∘θ_η (where the latter is defined on θ_η^-1(D_η∩ D_γ^-1)). Associated with such a partial action is the étale groupoid Γ⋉_θ X{(γ,x)∈Γ× X| x∈ D_γ} where the multiplication is defined as (γ,x)(η,θ_γ^-1x)=(γη, x) whenever θ_γ^-1(x)∈ D_η. Let Γ be a countable discrete group in the class of groups described in <cit.> and let θ=((D_γ)_γ∈Γ, (θ_γ)_γ∈Γ) be a free partial action of Γ on a zero-dimensional Hausdorff space X. If D_γ is clopen for all γ∈Γ, then dad(Γ⋉_θ X)≤asdim(Γ). The assumptions imply that the partial action admits a Hausdorff globalisation. In other words, there exists a free global action of Γ on some locally compact Hausdorff space Y containing X as an open subset and such that Γ⋉ X is Morita equivalent to Γ⋉ Y. Note that dim(Y)=0 as well by <cit.>. Hence Proposition <ref> implies that dad(Γ⋉ X)=dad(Γ⋉ Y) and by <cit.>, the latter is bounded above by asdim(Γ). Of course the collection D_γ need not always be closed. Nevertheless, we should expect the same upper bound on the dynamic asymptotic dimension as the following examples shows: Let X be a zero-dimensional metrisable space and θ:U→ V a homeomorphism between two open sets such that θ generates a free partial action of ℤ. Let ⋉_θ X{(n,x)| x∈ D_n} be the associated transformation groupoid. 
Using <cit.> there exist partial actions θ^(k) for k∈ℕ such that each θ^(k) admits a Hausdorff globalisation, and such that ℤ⋉_θ X can be written as an increasing union ℤ⋉_θ X=⋃_k∈ℕ⋉_θ^(k) X. Thus, for each k∈ℕ there exists a zero-dimensional Hausdorff space Y_k, and a (global) action ℤ↷ Y_k such that ℤ⋉ Y_k is Morita equivalent to Z⋉_θ^(k) X. Proposition <ref> implies dad(⋉_θ^(k)X)≤ 1 for all k∈ℕ and hence dad(⋉_θ X)≤ 1 as well by Proposition <ref>. § ASYMPTOTIC DIMENSION In this second part of the article we will compare the dynamic asymptotic dimension of an étale groupoid G with the classical asymptotic dimension of G with respect to a canonical coarse structure on G. Coarse structures on étale groupoids have been studied before by other authors, see for example <cit.>. Let us first specify which coarse structure we want to consider on a σ-compact étale groupoid G: Let ℰ_G be the collection of subsets of {(g,h)∈ G× G| r(g)=r(h)}, such that E∈ℰ_G if there exists an open relatively compact subset K⊆ G, such that E⊆{ (g,h)| g^-1h∈ K}∪Δ_G. Then ℰ is a coarse structure on G. The elements of ℰ_G are called controlled sets. Note, that the coarse structure on G also induces a coarse structure ℰ_G^x on each of the range fibres G^x by intersecting each controlled set E∈ℰ_G with G^x× G^x. Let Γ⋉ X be the transformation groupoid for an action of a countable discrete group Γ on a locally compact space X. Restricting the canonical coarse structure considered above to any range fibre (Γ⋉ X)^x and identifying it with Γ in the canonical way, gives rise to the coarse structure on Γ described by Roe in <cit.>. Let us now recall the definition of asymptotic dimension: If E is a controlled set for a coarse space (X,ℰ), then a family 𝒰={ U_i}_i∈ I of subsets of X is called E-separated if (U_i× U_j)∩ E=∅ for all i≠ j, and E-bounded if U_i× U_i⊆ E for all i∈ I. Moreover, X is said to have asymptotic dimension at most d if d is the smallest number with the following property: For any controlled set E there exists a controlled set F and a cover 𝒰 of X which is F-bounded and admits a decomposition 𝒰=𝒰_0⊔…⊔𝒰_d such that each 𝒰_i is E-separated. Since the asymptotic dimension of a subspace is at most the asymptotic dimension of the ambient space, we have the obvious estimate sup_x∈ G^0asdim(G^x,ℰ_G^x)≤asdim(G,ℰ_G). In the case of a transformation groupoid Γ⋉ X considered in Example <ref> all the range fibres canonically identify with the group Γ itself and hence the asymptotic dimension of (Γ⋉ X,ℰ_Γ⋉ X) coincides with the asymptotic dimension of Γ with respect to the canonical coarse structure. However, the reverse of inequality (<ref>) may fail, because even if asdim(G^x)<∞ for all x∈ G^0 the sets F_x obtained from the definition of asymptotic dimension that are controlling the size of the members of the cover, may grow in an uncontrollable manner as x varies across G^0. Another example where we do get an equality is the following: A graphing on an étale groupoid G is an open relatively compact set Q⊆ G∖ G^0 with Q=Q^-1 that generates G in the sense that G=⋃_n=1^∞ Q^n, where we adopt the convention that Q^0=G^0. We say that G is treeable G admits a graphing such that every g∈ G∖ G^0 has a unique (reduced) factorisation g=g_m⋯ g_1 with g_i∈ Q. Treeable groupoids are in some sense the analogues of free groups in the world of groupoids. Indeed, if S denotes a free generating set for 𝔽_n and we are given an action 𝔽_n↷ X on a compact space, then 𝔽_n⋉ X is a treeable groupoid with Q=S× X. 
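Before the proof that follows, it may help to recall how the covering definition works in the simplest classical case; the following standard example is included only for orientation and is not one of the paper's results. Consider ℤ with the controlled sets E_R={(m,n)| |m-n|≤ R}. Given R, choose N>R, cover ℤ by the intervals U_k=[kN,(k+1)N)∩ℤ, and split them into the two families 𝒰_0={U_2k| k∈ℤ} and 𝒰_1={U_2k+1| k∈ℤ}. Every member has diameter N-1, so the cover is E_N-bounded, while distinct members of the same family lie at distance at least N+1>R, i.e. each family is E_R-separated. Hence asdim(ℤ)≤ 1; and since consecutive integers force any E_1-separated family covering ℤ to contain an unbounded member, asdim(ℤ)=1. The annuli argument for treeable groupoids given next follows the same even/odd pattern, with the length function ℓ induced by the graphing Q playing the role of the absolute value.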
We are now going to show that asdim(G,ℰ_G)=1 for any treeable groupoid G. Let Q be a graphing as in the definition above. This graphing induces a length function ℓ:G→ [0,∞) given as the length of the unique reduced factorisation of g as a product of elements in Q. The function ℓ is continuous since Q is open, and controlled and proper since Q is relatively compact. We denote by B_N{g∈ G|ℓ(g)≤ N}=⋃_n=0^N Q^n. Let K⊆ G be open and relatively compact. Then there exists an N∈ such that K⊆ B_N. We consider the “annuli" A_k:={ g∈ G| kN≤ℓ(g)≤ (k+1)N} If we only consider those annuli indexed by even (resp. odd) numbers A_2k (resp. A_2k+1) then these families are pairwise K-disjoint. However, they are not yet uniformly bounded. Hence we need to further subdivide each annulus. For k≥ 2 we define an equivalence relation on A_k by setting g∼_k h if g and h have the same past up to distance N(k-1) from the origin, i.e. if g=g_1⋯ g_m and h=h_1⋯ h_l are the unique reduced factorisations of G with elements in Q, then g_i=h_i for all i≤ N(k-1). This is clearly an equivalence relation on A_k and we denote the equivalence class of g∈ A_k by [g]_k. Let h∈ [g]_k. If we factorise g=g_1⋯ g_ℓ(g) then h=g_1⋯ g_N(k-1)h_N(k-1)+1⋯ h_ℓ(h) and hence d(g,h)=ℓ(g^-1h)=ℓ(g^-1_ℓ(g)⋯ g_N(k-1)^-1h_N(k-1)+1⋯ h_ℓ(h))≤ℓ(g)+ℓ(h)-2N(k-1)-1≤ 2N(k+1) -2N(k-1)=4N. Hence the diameter of each equivalence class is uniformly bounded by 4N. Since A_0 and A_1 are also bounded we can set 𝒰_0=⋃_k=1^∞{ [g]_2k| g∈ A_2k}∪{A_0} and 𝒰_1=⋃_k=1^∞{ [g]_2k+1| g∈ A_2k+1}∪{A_1}. Then 𝒰_0∪𝒰_1 is a uniformly bounded cover of G. Moreover, each 𝒰_i is N-disjoint. As we have seen, the even and odd annuli are already N-disjoint, so we only have to check for N-disjointness within each A_k for k≥ 2. So fix k≥ 2 and let g,h∈ A_k be such that they do not agree on the first N(k-1) elements in the unique factorisation. Let j∈{1,… ,N(k-1)} be the minimal number such that g_j≠ h_j in the unique factorisations of g and h. Then we compute d(g,h) =ℓ(g)+ℓ(h)-2(j-1) ≥ℓ(g)+ℓ(h) - 2N(k-1) ≥ 2kN-2N(k-1)=2N≥ N. This finishes the proof. The following Proposition gives the first half of Theorem <ref>. Let G be an étale groupoid with compact unit space. Then asdim(G,ℰ_G)≤dad(G). We may assume that ddad(G)<∞ since otherwise there is nothing to prove. Let E be a controlled set for G. Since G^0 is compact, there exists a relatively compact open subset G^0⊆ K⊆ G such that E⊆{ (g,h)| g^-1h∈ K}. Using the assumption, find open subsets U_0,…, U_d covering G^0 such that the subgroupoid H_i⟨ K∩ G|_U_i⟩ of G is relatively compact (and open) for every i=0,…,d. Let F{ (g,h)| g^-1h∈⋃_i=0^d H_i}. Then F is by its very definition a controlled set for G. Let ∼_i denote the equivalence relation on G_U_i given by g∼_i h :⇔ r(g)=r(h) and g^-1h∈ H_i. Let 𝒰_i{ [g]_i| g∈ G_U_i} be the collection of all equivalences classes of the relation ∼_i. Then each 𝒰_i is E-separated. Indeed, suppose [g]_i×[h]_i∩ E≠∅. Then there exist g_0,h_0∈ G_U_i such that g_0∼_i g and h_0∼_i h such that (g_0,h_0)∈ E. In particular, we have g_0^-1h_0∈ K, which implies g_0∼_i h_0 and hence [g]_i=[h]_i. Moreover, the collection 𝒰=⋃_i=0^d 𝒰_i is an F-bounded cover for G: We have to show [g]_i×[g]_i⊆ F for all i=0,…, d and g∈ G_U_i. If (g_1,g_2)∈ [g]_i×[g]_i then g_1∼_i g∼_i g_2 and hence g_1^-1g_2∈ H_i and hence (g_1,g_2)∈ F. We remark that our assumption that G^0 is compact in the previous proposition cannot be relaxed. Consider for example the action of the integers on by translation. This action is free and proper. 
Using this it is not hard to show that dad(⋉)=0. On the other hand, asdim()=1. In what follows we want to prove the converse in the case that G^0 is zero-dimensional. The proof is inspired by the recent article <cit.>. We need a variant of dynamic asymptotic dimension that keeps track of the size of the subgroupoids obtained in the definition. Given open relatively compact subsets K, L⊆ G, we say that an open subgroupoid H⊆ G has (K,L)-dad at most d if there exists a cover of H^0∩ (s(K)∪ r(K)) by open sets U_0,…, U_d such that ⟨ K∩ G|_U_i⟩⊆ L for all 0≤ i≤ d. Similarly, a coarse space (X,ℰ) has (E,F)-asdim at most d if there exists a cover 𝒰 of X which is F-bounded and admits a decomposition 𝒰=𝒰_0⊔…⊔𝒰_d such that each 𝒰_i is E-separated. Let G be an étale groupoid and V⊆ G^0 an open subset such that V=V_0∪⋯∪ V_n for some open subsets V_i⊆ G^0. Suppose further that K_0⊆ K_1⊆…⊆ K_n+1 is an increasing sequence in 𝒪_c(G) such that ⟨ K_i^15∩ G|_V_i⟩⊆ K_i+1 ∀ 0≤ i≤ n. Then ⟨ K_0∩ G|_V⟩⊆ K_n+1^5. This follows inductively from Lemma <ref>. As a simple application of this Lemma, we obtain: Let G be an étale groupoid. Suppose G^0=X_0⊔ X_1⊔⋯⊔ X_n-1 is a clopen partition of G^0. Assume that there exists an increasing sequence K_0⊆ K_1⊆⋯⊆ K_n in 𝒪_c(G) such that each restriction G|_X_i has (K_i^15∩ G|_X_i,K_i+1)-dad at most d. Then G has (K_0,K_n^5)-dynamic asymptotic dimension at most d. The next Lemma is well-known: Let G be a compact principal ample groupoid. Then there exists a clopen fundamental domain Y_*⊆ H^0, i.e. Y_* meets each H-orbit in H^0 exactly once. The key technicalities for the main result of this section are contained in the following Lemma. The main obstacle in generalising the proof presented in <cit.> for group actions to the case of general groupoids is that the decompositions of the fibres G^x obtained from the finite asymptotic dimension assumption are not uniform as x varies. In contrast, if we take a decomposition of Γ=T_0⊔…⊔ T_d as in the definition of asymptotic dimension at most d, then T_i× X gives a decomposition of the transformation groupoid, which works uniformly over X. Let G be a pricipal, ample groupoid and let Y⊆ G^0 be a clopen subset. Assume further, that for given K, L⊆ G compact open, the groupoid H_K(Y)⟨ K∩ G|_Y⟩ is compact. If (K,L)-asdim(G)≤ d, then (K,L)-dad(G|_Y)≤ d. Fix x∈ Y for the moment. We can use the assumption to obtain a partition G^x=T_0^x⊔…⊔T_d^x where each T^x_i further decomposes as T^x_i=_jD^x_i,j such that each D^x_i,j is L-bounded and D^x_i,j_1 and D^x_i,j_2 are K-disjoint for all j_1≠ j_2. Intersecting each component of this partition with H_K(Y) gives a partition of H_K(Y)^x with the same properties. Let D_i,j^xD_i,j^x∩ H_K(Y) and T_i^xT_i,j^x∩ H_K(Y) denote these intersections. Our first goal is to show that we can make this decomposition work uniformly over a neighbourhood V_x of x. Note that since H_K(Y) is compact, only finitely many of the sets D_i,j^x are non-empty and each of them has to be finite. For fixed i,j write D_i,j^x={g_1,…,g_n}. Using continuity of the product map we can find a compact open bisection U_k around each g_k such that U_k^-1U_l⊆ L. We may also assume that the U_k are pairwise disjoint and all have the same range (if that's not the case, replace U_k by U_k∩ r^-1(⋂_l r(U_l))). Let D_x,i,j⋃_l=1^n U_l (we put the x in the subscript to indicate that this set still depends on x, but need not be a subset of G^x). By construction D_x,i,j is compact open and L-bounded. 
By enumerating the set {j| D_i,j^x≠∅} we can successively shrink the sets D_x,i,j to make sure that they remain K-disjoint. Let V_x⋂_i,j r(D_x,i,j). Then V_x is a clopen neighbourhood of x. Replace each D_x,i,j by D_x,i,j∩ r^-1(V_x) and let T_x,i_i D_x,i,j. Then for each y∈ V_x H_K(Y)^y=T_x,0^y⊔…⊔ T_x,d^y and each T_x,i^y further decomposes as T^y_x,i=_jD^y_x,i,j. For y=x we recover the original decomposition found in the beginning, i.e. D^x_x,i,j=D_i,j^x. Using that Y is compact we can thus partition Y by finitely many sets V_1,… V_p such that for each k∈{1,…, p} there exists a partition H_K(Y)∩ G^V_k=T_k,0⊔… T_k,d such that each partition further decomposes as a K-disjoint union T_k,i=_jD_k,i,j, of L-bounded sets. Let D_i,j=_k=1^p D_k,i,j and T_i=_k=1^p T_k,i. Pick a clopen subset Y_*⊆ Y that meets each H_K(Y)-orbit exactly once and set U_i{s(g)| g∈ H_K(Y)∩ T_i, r(g)∈ Y_*, s(g)∈ Y}. Then U_0,…, U_d are clearly clopen subsets of Y. We have to show that ⟨ K∩ G|_U_i⟩ is contained in L. To this end write an arbitrary g∈⟨ K∩ G|_U_i⟩ as g=g_1g_2⋯ g_n with g_k∈ K and s(g_k),r(g_k)∈ U_i. It follows that for each 1≤ k≤ n there exist h_k,h'_k∈ T_i with r(h_k),r(h_k')∈ Y_* and s(g_k)=s(h_k) and r(g_k)=s(h'_k). Note that the set {r(h_k),r(h'_k)| 1≤ k≤ n} is contained in a single H_K(Y)-orbit. Since Y_* meets each H_K(Y)-orbit exactly once we must have r(h_1)=r(h'_1)=…=r(h_k)=r(h'_k). Hence h_k^-1h'_kg_k∈Iso(G)=G^0, so using principality of G we get g_k=(h'_k)^-1h_k. Moreover, since g_k∈ K, the elements h_k and h_k' are in the same D_i,j. So we can write g=(h'_1)^-1h_1(h'_2)^-1h_2⋯ (h'_n)^-1h_n. Note further, that since s(h_k(h'_k+1)^-1)=r(h'_k+1)=r(h_k)=r(h_k(h'_k+1)^-1) we can use principality again to conclude that h_k=h'_k+1. In particular, there exists a unique j such that h_k,h'_k∈ D_i,j for all 1≤ k≤ n. Putting these two facts together we obtain g=(h'_1)^-1h_n∈ D_i,j^-1D_i,j⊆ L as desired. We can now proceed with the proof of the second half of Theorem <ref>. Let G be a principal étale groupoid with σ-compact and totally disconnected unit space. If dad(G)<∞, then dad(G)≤asdim(G,ℰ_G). If G^0 is compact, equality holds. We first prove the result under the additional assumption that G^0 is compact. Let dad(G)≤ D and d=asdim(G). Let K be a compact open subset of G. We want to show dad(G)≤ d. Inductively find an increasing sequence (K_i)_i of compact opens such that K∪ G^0⊆ K_0 and such that G has (K_i^15, K_i+1)-asdim at most d for every i. Now use the assumption dad(G)≤ D to find clopen subsets X_0,… X_D such that H_i⟨ K_D∩ G|_X_i⟩ is a compact open subgroupoid. Apply Lemma <ref> to each X_i to see that (K_i^15,K_i+1)-dad(G|_X_i)≤ d. Hence Lemma <ref> implies that (K_0,K_D^5)-dad(G)≤ d. As we started with an arbitrary K, this implies that dad(G)≤ d as desired. If G^0 is just locally compact and σ-compact, then there exists a nested sequence of compact open sets W_n⊆ G^0 covering G^0. Let G_n G|_W_n be the restriction of G to W_n. Then (G_n)_n is a nested sequence of compact open subgroupoids of G and hence dad(G_n)≤dad(G)<∞. By the first part of this proof, dad(G_n)≤asdim(G_n,ℰ_G_n)≤asdim(G,ℰ_G) and hence the result follows from Proposition <ref>. Let us remark that the assumption dad(G)<∞ in the previous theorem cannot be dropped: Consider for example the free group 𝔽_2. Since 𝔽_2 is residually finite, it admits a decreasing sequence N_1⊇ N_2⊇… of finite index normal subgroups such that ⋂_k∈ℕ N_k={e}. For each k∈ℕ there is a canonical surjective group homomorphism 𝔽_2/N_k+1→𝔽_2/N_k. 
Let X be the inverse limit of the sequence 𝔽_2/N_1 ←𝔽_2/N_2 ←⋯, which as a topological space is a Cantor set. The group 𝔽_2 acts from the left on each quotient 𝔽_2/N_k and this induces a free and minimal action on X. The space X also admits a unique Borel probability measure μ, induced by the uniform probability measures on the (finite) quotients 𝔽_2/N_k. It follows that the transformation groupoid 𝔽_2⋉ X is non-amenable, and hence in particular it cannot have finite dynamic asymptotic dimension by <cit.>. On the other hand we have asdim(𝔽_2 ⋉ X)=asdim(𝔽_2)=1.
http://arxiv.org/abs/2307.07422v1
20230708172155
Can LLMs be Good Financial Advisors?: An Initial Study in Personal Decision Making for Optimized Outcomes
[ "Kausik Lakkaraju", "Sai Krishna Revanth Vuruma", "Vishal Pallagani", "Bharath Muppasani", "Biplav Srivastava" ]
cs.CL
[ "cs.CL" ]
Increasingly powerful Large Language Model (LLM) based chatbots, like ChatGPT and Bard, are becoming available to users that have the potential to revolutionize the quality of decision-making achieved by the public. In this context, we set out to investigate how such systems perform in the personal finance domain, where financial inclusion has been an overarching stated aim of banks for decades. We asked 13 questions representing banking products in personal finance: bank account, credit card and certificate of deposits and their inter-product interactions, and decisions related to high-value purchases, payment of bank dues, and investment advice, and in different dialects and languages (English, African American Vernacular English, and Telugu). We find that although the outputs of the chatbots are fluent and plausible, there are still critical gaps in providing accurate and reliable financial information using LLM-based chatbots. § INTRODUCTION Consider a freshman that has just started making personal financial decisions. They open a bank account to save up money and get their first credit card. They are given some seed money by their family and they also start earning by working on campus. The student is encouraged by their support system to start thinking about saving into products like Certificate of Deposits (CDs) that earn higher interest. As the student makes a series of decisions in their academic and subsequent professional life, they need to make sound financial decisions and may look for resources online to assist them. An optimal decision needs to consider how the banking products interact with each other along with the changing needs of the student. For users like this student, increasingly powerful LLM-based chatbots that have the potential to revolutionize the quality of decision for personal finance are becoming available. LLMs have demonstrated tremendous potential across diverse domains <cit.>, such as natural language processing <cit.> and protein structure <cit.>, and have been claimed to show sparks of artificial general intelligence <cit.>. These models have been implemented in several applications, ranging from mental health assistants <cit.> to financial advisement <cit.>. In the finance domain, LLMs have been used to develop applications such as fraud detection, risk management, and financial forecasting <cit.>. They have been used to analyze financial data, predict stock prices, and generate automated reports. However, with the advent of recent models such as OpenAI's ChatGPT, Google's Bard, and BloombergGPT <cit.>, a comparative chatbot study is needed to evaluate their ability to be financial advisors. In this paper, we present an initial study of ChatGPT and Bard in providing personal decision-making for optimized outcomes. It is widely known that LLMs based systems have unique limitations. For example, they may struggle with common-sense reasoning tasks <cit.>, encounter challenges when handling symbols <cit.>, and are susceptible to hallucinations <cit.>. With this work, we make the following contributions: * identify a personal financial planning scenario involving a series of tasks (plans) and optimization of decisions. * show how leading LLM-based chatbots perform in them and analyze their behavior. 
* lay out challenges that future chatbots in this area should overcome to provide trusted financial recommendations. We thus highlight the potential and limitations of current LLM-based systems - ChatGPT and Bard - in their role as financial advisors. We included all the queries posed and responses from both ChatGPT and Bard in our GitHub repository[https://github.com/ai4society/LLM-CaseStudies/tree/main/Finance] along with a few snapshots of the actual conversations. § PERSONAL FINANCE USE CASE §.§ Setup: Tools and Procedure §.§.§ Chatbots Tested * ChatGPT: ChatGPT <cit.> is an LLM-based chatbot created by OpenAI that was trained on large amount of text data from the internet, including books and articles. ChatGPT is capable of answering questions, generating text and converse with users in a natural way. It can also learn from users and adapt to new information. * Bard: Bard <cit.> is an LLM-based chatbot created by Google that was trained on large amount of text data and is capable of generating human-like text in response to user prompts and queries. Like ChatGPT, it is also capable of conversing with users about wide variety of topics in a natural way and adapt to new information. §.§.§ Product Interaction Categories Product interaction refers to interaction between different products like Credit Card (CC), Certificate of Deposit (CD) and Account Balance (AB). Each product has different quantitative properties. For example, credit card due, limit line and billing cycle are some of the properties that would provide credit card information (not private information) of the user. Different properties pertaining to these products are: * Purchase Amount (PA): It is the amount spent by the user on purchase of a product. * Billing Cycle (BC): It is the billing cycle of user's credit card. * Due Amount (DA): The amount that is due on the user's credit card for the specified billing cycle. * Credit Line (CL): The maximum amount that user could spend using their credit card. If the amount spent exceeds this value, the credit card company could charge additional interest. * Cashback Percentage (CP): The % of amount which will be returned to the user in the form of cashback on buying furniture using their credit card. * Account Balance (AB): The amount of cash present in user's personal bank account. * Annual Percentage Rate (APR): The APR is charged if there is due on the credit card after the due date. Some financial institutions choose to charge a late fee if the minimum due (MD) is not paid. It is calculated by the formula, Daily Period Rate (DPR) x Billing Cycle (in days) x Average Daily Balance (ADB). * Certificate of Deposit Percentage (CDP): The % of interest accumulated on the cash deposited by the user in the form of CD. Based on different combinations of these products, we classified the queries into 4 categories. These four categories along with the queries posed under each category, the variables used in each query and the constraints the chatbot has to take into consideration to make a sound recommendation are shown in Table <ref>. In the CC category, we considered a different dialect of English called African American Vernacular English (AAVE) and Telugu, one of the well-known languages from India, to observe how the chatbots handle queries in a different language or dialect. §.§ Findings In this subsection, we present the findings from the interesting (and sometimes insightful) conversations we had with Bard and ChatGPT. 
§.§.§ Differences Between the Chatbots Table <ref> shows the differences that were identified between Bard and ChatGPT when queries listed out in Table <ref> were asked. We compare these models on various criteria related to their performance in answering queries. The criteria include accuracy, utilization of user information, personalized suggestions, use of visual aids, bias in recommendations, provision of multiple response drafts, learning from mistakes, and understanding of different dialects and languages. §.§.§ Error Categories We identified some limitations / errors in the responses generated by both the chatbots and classified them into the following categories: * Lack of Personalized Recommendations: When the agent makes a generalized recommendation without using all the information provided by the user, we consider this as lack of personalized recommendation. * Mathematical Errors: We consider errors like rounding errors, calculation errors, etc. as mathematical errors. * Perceptual Errors: When the agent misinterprets information given by the user or makes assumptions on unknown data, we consider these as perceptual errors. * Grammatical Errors: We consider typos, grammatical errors, etc. as grammatical errors (we encountered these errors only in Telugu text generated by ChatGPT). * Lack of Visual Aids: When the agent doesn't use visual aids like tables, graphs, etc. in its response, we consider these as lack of visual aids. Table <ref> shows the percentage of queries for which the chatbots exhibited each of these errors. We also list out the individual query identifiers. Qi denotes the query identifier as previously defined (and also shown in Table <ref>). ABi and ACi refer to the corresponding Bard and ChatGPT responses respectively. 'i' denotes the identifier (number). Figures <ref> and <ref> show the response generated by Bard and ChatGPT chatbots respectively. For this one query, Bard made use of a table (though it misinterpreted user information) and ChatGPT did not. § DISCUSSION AND CONCLUSION The application of language models in the finance industry has witnessed a surge in recent times due to their ability to process vast volumes of unstructured data and extract valuable insights. This paper delves into the performance of two prominent language models, Bard and ChatGPT, within the finance domain. We also find the following challenges in evaluating LLM-based systems for finance domains: * C1: Changing nature of answers for the same question. How does one create reference test cases since the answers change over time? * C2: Inability of the chatbots to do numeric reasoning * C3: Presenting results with easy to follow graphics. * C4: Support for languages used by customers from different population groups. We considered AAVE - (African American Vernacular English) and Telugu, an Indian language spoken by nearly 100m people world-wide. * C5: Evaluation the response of users from a diverse set of background. We only considered college students in this study. C1 can be mitigated by carefully cataloging questions and system answers by identifiers that account for changing behavior over time. For C2, integration with numeric solvers like Wolfram may help <cit.> although this makes the systems non-learnable over time. For C3, different data presentation strategies need to be tried. For C4, the LLM models or the chatbots need to be enhanced. For C5, more experiments are needed with inputs carefully modeling the characteristics of the different user groups. 
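The "Mathematical Errors" category above is the easiest one to check mechanically, since the quantities the queries revolve around can be recomputed exactly. A small hypothetical helper of this kind is sketched below; it is not part of the study, and the convention DPR = APR/365 is an assumption rather than something stated in the paper.

```python
# Hypothetical ground-truth calculator for checking chatbot arithmetic on the
# credit-card / CD queries. DPR = APR / 365 is an assumed convention.

def interest_charge(apr: float, billing_cycle_days: int, avg_daily_balance: float) -> float:
    """Finance charge = Daily Periodic Rate (DPR) x billing-cycle days x average daily balance."""
    dpr = apr / 365.0
    return dpr * billing_cycle_days * avg_daily_balance

def cashback(purchase_amount: float, cashback_pct: float) -> float:
    """Cashback earned on a credit-card purchase (CP applied to PA)."""
    return purchase_amount * cashback_pct / 100.0

def cd_interest(principal: float, cdp_pct: float, months: int) -> float:
    """Simple interest earned on a certificate of deposit held for `months`."""
    return principal * cdp_pct / 100.0 * months / 12.0

# Illustrative check: a 1000-dollar purchase on a 2%-cashback card is only
# advantageous if the statement is settled before interest accrues.
pa, cp = 1000.0, 2.0
print(f"cashback if the statement is paid in full: {cashback(pa, cp):.2f}")
print(f"interest if the balance is carried 30 days at 24% APR: {interest_charge(0.24, 30, pa):.2f}")
```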
These are just preliminary challenges and we expect them to grow as more researchers will try LLM-based systems in complex and diverse application scenarios. While our study only comprised thirteen queries, we meticulously selected them to cover various categories of credit card finance. However, there exists ample scope for more extensive testing of these chatbots by expanding the number of queries under each category or including additional categories like student loans and stock purchases. By doing so, we can gain a better understanding of the efficacy of language models in different financial domains and improve their functionality in real-world scenarios.
http://arxiv.org/abs/2307.03995v1
20230708153652
Linear approximation to the statistical significance autocovariance matrix in the asymptotic regime
[ "V. Ananiev", "A. L. Read" ]
physics.data-an
[ "physics.data-an", "stat.ME" ]
§ INTRODUCTION In high energy physics searches for new particles that appear in the data as resonances <cit.>, one usually scans a mass region and hopes to find a peak of high significance at some mass. The significance at each mass of the scan is generally found by applying Wilks' theorem <cit.> to the likelihood-ratio test statistic (LRT) <cit.> for each point, and results in a field of significances measured across the search region. While the resonance may appear anywhere in the search region, the analysis usually targets the highest (local) significance, which leads to the recurring challenge of estimating the global significance of this observation. The necessity of calculating the probability for a background fluctuation to give such a peak of significance anywhere in the search region, and not simply where the significance is maximal, is commonly referred to as the look-elsewhere effect (LEE). There have been a number of studies investigating the LEE, and in our work we pay particular attention to those describing the significance field with a Gaussian process. While some studies <cit.> set the upper bound on the trials factor, which converts a local p-value into a global one, and only use a Gaussian process implicitly to link the low and high significance regions, other studies <cit.> require explicit values for the Gaussian process parameters. In this paper we establish a chain of lightweight steps from a non-linear parametric statistical model to the trials factor by estimating the covariance matrix of the significance field. To construct the estimate involving only one background only fit to the data, we apply linear expansion to the non-linear background shape. The way to calculate the covariance matrix starting from a linear model was briefly discussed by Demortier <cit.>. As part of our work, we give a strict mathematical formulation of the method and demonstrate a practical application of it to non-linear background shapes, with the estimated covariance matrix serving as a proxy for the straightforward trials factor estimate. A common input for the methods that quantify the LEE is a set of maximum likelihood fits to some number of Monte Carlo generated data realizations. They may be used to estimate the trials factor in the lower significance region, or the covariance matrix of the Gaussian process itself (the significance autocovariance). The challenge, then, is to fit enough datasets to estimate the trials factor with a satisfactory precision, while keeping the number of fits as small as possible. In high-energy physics searches for a new particle or a resonance, typically, the likelihood-ratio test statistic is used to construct the p-value for each point on a search grid. In the asymptotic regime, the test statistic follows a χ^2 distribution. For analyses that use a Gaussian process to model the significance, the number of degrees of freedom of the test statistic distribution is, typically, 1. For this case, in Chapter <ref>, we suggest a method to estimate the significance covariance matrix that makes use of a single background-only fit to the data. We replace the set of fits that were required in our previous work, with derivatives of the best-fit-to-the-data background model. Fortunately, the derivatives can often be extracted from the fit software. 
Core assumptions. In section <ref> we show that three quite generic requirements: * the background model should be well approximated by its linear expansion around the best-fit parameters, * the fluctuations in different bins of the data set are independent, * the fluctuations in each bin follow a Gaussian distribution, together, are consistent with the assumptions made in the empirical study by Ananiev & Read <cit.>, which relied on the additivity (superposition) principle for the fluctuations to empirically estimate the covariance matrix of the significances. We argue, therefore, that this work serves as a theoretical basis for the method of the Asimov set of background samples introduced in that study, and at the same time may rely on its validations. §.§ Statistical model The basic structure of a statistical model commonly used in high-energy physics experiments that search for a new particle or a resonance was described in detail in the empirical study <cit.>. For the present study, we chose the H→γγ inspired model as a benchmark, because it satisfies without approximation the second and third requirements above. The search is conducted with the likelihood-ratio test statistic evaluated for each point M of the search grid ℳ. In this binned model, the expected background b_i(θ⃗), used as the null hypothesis H_0, together with the expected signal μ s_i(θ⃗) form the alternative hypothesis H_1, the expected signal + background estimate: n_i(μ, θ⃗, M) = b_i(θ⃗) + μ s_i(θ⃗, M), where i enumerates bins, θ⃗ denotes the vector of nuisance parameters and μ is the signal strength nuisance parameter. In the asymptotic regime (i.e. large sample), and neglecting constant terms, the log-likelihoods for H_0 and H_1 may be approximated as follows: -2lnℒ_0(μ=0, θ⃗) = ∑_i ( (d_i - b_i(θ⃗))/σ_i)^2, -2lnℒ_1(μ, θ⃗, M) = ∑_i ( (d_i - b_i(θ⃗) - μ s_i(M, θ⃗))/σ_i)^2, where i enumerates bins, M ∈ℳ denotes the point in the search region ℳ of parameters which are not present under the background-only hypothesis, θ⃗ are the nuisance parameters, and d_i corresponds to the binned data with errors σ_i. We have assumed that the errors σ_i are independent of the nuisance parameters θ⃗. With a linear correction to σ_i it would still be possible to obtain a closed-form expression for the test statistic and significance. The calculation of the covariance would then require sampling toys to average out the fluctuations. No additional fits would be required, however, so this may be a potential option for more sophisticated analyses. Our goal is to estimate the covariance matrix Σ_MN of the statistical significances Z_M and Z_N evaluated at two different points of the search region ℳ: Σ_MN = ⟨ Z_M Z_N ⟩_d, M, N ∈ℳ, Z_M = sign(μ̂) √(t_μ(M))∼𝒩[0, 1], t_μ(M) = -2 lnℒ_0(μ=0, θ⃗_0)/ℒ_1(μ̂, θ⃗_0 + θ⃗_1, M)∼χ^2_d.o.f=1, where t_μ(M) is the likelihood-ratio test statistic (LRT), Z_M is the so-called signed-root LRT, θ⃗_0 are the nuisance parameters that maximize the background-only likelihood ℒ_0, and θ⃗_0 + θ⃗_1 together with the signal strength μ̂ maximize the signal+background likelihood ℒ_1. We would like to remark that for the signal+background model we fit θ⃗ as a deviation from θ⃗_0. This is essential for the proper separation of variables in the subsequent calculations. We assume that the best fit of the background model b_i to the data d_i is available for the study as b_i(θ⃗̂⃗) = b̂_i.
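As an illustration of the significance field defined above, the sketch below evaluates the signed-root LRT Z_M on a toy binned spectrum by explicitly minimizing the background-only and signal-plus-background chi-squares. The exponential background, Gaussian signal shape, and all numerical values are hypothetical placeholders, not the benchmark model of this paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy binned spectrum: exponential background plus Gaussian per-bin fluctuations
# (all shapes and numbers here are illustrative only).
x = np.linspace(100.0, 160.0, 120)                  # bin centres, e.g. in GeV
def background(theta):                              # theta = (norm, slope)
    return theta[0] * np.exp(-x / theta[1])
def signal(mass, width=2.0):                        # unit-amplitude Gaussian peak
    return np.exp(-0.5 * ((x - mass) / width) ** 2)

sigma = np.sqrt(background([500.0, 50.0]))          # per-bin Gaussian errors
data = background([500.0, 50.0]) + sigma * rng.normal(size=x.size)

def chi2_bkg(theta):
    return np.sum(((data - background(theta)) / sigma) ** 2)

def chi2_sb(params, mass):
    mu, *theta = params
    return np.sum(((data - background(theta) - mu * signal(mass)) / sigma) ** 2)

theta0 = minimize(chi2_bkg, x0=[500.0, 50.0]).x     # background-only fit

def signed_root_lrt(mass):
    fit = minimize(chi2_sb, x0=[0.0, *theta0], args=(mass,))
    t_mu = chi2_bkg(theta0) - fit.fun               # likelihood-ratio test statistic
    return np.sign(fit.x[0]) * np.sqrt(max(t_mu, 0.0))

masses = np.linspace(110.0, 150.0, 41)              # search grid M
z_field = np.array([signed_root_lrt(m) for m in masses])
print("max local significance:", z_field.max())
```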
In order to simplify the notation, we make use of the freedom to choose the reference point for the model parameters θ⃗ and define the best fit parameters to be θ⃗̂⃗ = 0⃗. § METHOD To simplify the notation, we redefine d_i, s_i and b_i to include σ_i: d_i/σ_i↦ d_i, s_i/σ_i↦ s_i, b_i/σ_i↦ b_i. The log-likelihoods then become: -2lnℒ_0 = ∑_i ( d_i - b_i(θ⃗) )^2, -2lnℒ_1 = ∑_i ( d_i - b_i(θ⃗) - μ s_i(θ⃗) )^2. For every realization of the data (e.g. an LHC run), we expect the deviations of the fit parameters μ and θ⃗ from 0 to be small (in the absence of a signal), and therefore the first-order expansion of b_i(θ⃗) and s_i(θ⃗) around 0⃗ to be accurate enough. The log-likelihoods then are: -2lnℒ_0 = ∑_i ( d_i - b̂_i - Δ_i βθ^β)^2, -2lnℒ_1 = ∑_i ( d_i - b̂_i - Δ_i βθ^β - μ s_i(0⃗) )^2, where Δ_i α = ∂ b_i(θ⃗)/∂θ^α|_θ⃗ = 0⃗ is the Jacobian of the best-fit background model and the Einstein summation rule applies to the indices β. Since the signal model s_i contributes to the log-likelihoods eq. (<ref>) only at lowest order, thus is constant, we simplify s_i(0⃗) to s_i from now on. The equations that define optimal values of θ⃗_0, θ⃗_1, and μ then are: ∂ℒ_0/∂θ_α|_θ⃗_0∝ ∑_i (d_i - b̂_i - Δ_i βθ_0^β)·Δ_iα = 0, ∂ℒ_1/∂θ_α|_θ⃗_1, μ̂∝ ∑_i (d_i - b̂_i - Δ_i β (θ_0^β + θ_1^β) - μ̂ s_i)·Δ_iα = 0, ∂ℒ_1/∂μ|_θ⃗_1, μ̂∝ ∑_i (d_i - b̂_i - Δ_i β (θ_0^β + θ_1^β) - μ̂ s_i)· s_i = 0. To reduce the number of indices, we rewrite the expressions above with bra-ket notation: ⟨d -b̂|Δ = ⟨θ_0|Δ^⊺Δ, 0⃗ = ⟨θ_1|Δ^⊺Δ + μ̂⟨s|Δ, ⟨d - b̂|s⟩ = ⟨θ_0 + θ_1|Δ^⊺|s⟩ + μ̂⟨s|s⟩, where in eq. (<ref>) we used eq. (<ref>) to cancel the θ⃗_0 contribution. We can solve eq. (<ref>) and eq. (<ref>) for θ⃗_0 and θ⃗_1 correspondingly: ⟨θ_0| = ⟨d-b̂|Δ(Δ^⊺Δ)^-1, ⟨θ_1| = - μ̂⟨s|Δ(Δ^⊺Δ)^-1. It is important to mention that, although Δ itself is generally singular, the product Δ^⊺Δ appears to be a Hessian of -2lnℒ_1 with respect to θ⃗_1. For the background model best-fit point θ⃗ = 0⃗ to be a minimum, it is required that the Hessian be positive definite, thus Δ^⊺Δ is invertible. We substitute eq. (<ref>) and eq. (<ref>) into eq. (<ref>) and solve for μ̂: μ̂(M) = ⟨d-b̂| P |s_M⟩/⟨s_M| P |s_M⟩, P = 1 - Δ(Δ^⊺Δ)^-1Δ^⊺. An interesting and important fact is that P is a projector and it is symmetric: P^2 = P, P = P^⊺. A projector is always positive semi-definite, which means that the product below is non-negative for any non-zero s⃗: ⟨s| P |s⟩ = ⟨s| P^2 |s⟩ = ( P |s⟩)^2 ≥ 0, ∀s⃗≠0⃗ . Let us estimate the test statistic t_M: t_M = (-2 lnℒ_0) - (-2 lnℒ_1) = 2 ⟨d - b̂ - Δθ⃗_0|Δθ⃗_1 + μ̂ s⟩ + ⟨Δθ⃗_1 + μ̂ s|Δθ⃗_1 + μ̂ s⟩. We again use eq. (<ref>) to cancel the θ⃗_0 contribution and eq. (<ref>) to substitute the solution for θ⃗_1: t_M = μ̂⟨d-b̂| P |s_M⟩ = μ̂^2 ⟨s_M| P |s_M⟩. The significance Z_M, as defined in eq. (<ref>), is: Z_M = μ̂√(⟨s_M| P |s_M⟩) = ⟨d-b̂| P |s_M⟩/√(⟨s_M| P |s_M⟩). The square root in eq. (<ref>) is always defined, as the product under the square root is always positive (eq. (<ref>)). For the covariance matrix estimation, we would need to average over data. We are looking for a solution with uncorrelated fluctuations in each bin (sec. <ref>), and we recall that we normalized the errors to 1 in eq. (<ref>), therefore, the following is true: E_d{|d-b̂⟩⟨d-b̂|} = 1. 
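The closed-form expressions above translate directly into a few lines of linear algebra. The sketch below evaluates the projector P, the best-fit signal strength, and the significance for a given signal template; the Jacobian `Delta`, the residuals, and the signal template are assumed to be available as NumPy arrays already normalised by the per-bin errors, as in eq. (<ref>), and are otherwise hypothetical.

```python
import numpy as np

def projector(Delta: np.ndarray) -> np.ndarray:
    """P = 1 - Delta (Delta^T Delta)^-1 Delta^T for an (n_bins, n_params) Jacobian."""
    gram_inv = np.linalg.inv(Delta.T @ Delta)
    return np.eye(Delta.shape[0]) - Delta @ gram_inv @ Delta.T

def mu_hat_and_significance(resid, s, P):
    """Best-fit signal strength, test statistic, and signed-root-LRT significance.

    resid = (d - b_hat) / sigma and s = signal template / sigma, both 1D arrays.
    """
    sPs = s @ P @ s                     # always >= 0 because P is a projector
    mu_hat = (resid @ P @ s) / sPs
    z = (resid @ P @ s) / np.sqrt(sPs)
    t = z ** 2                          # t_M = mu_hat^2 <s|P|s>
    return mu_hat, t, z

# Hypothetical usage with random placeholders:
rng = np.random.default_rng(1)
Delta = rng.normal(size=(120, 3))       # d(b_i)/d(theta_alpha) at the best fit
resid = rng.normal(size=120)            # error-normalised data minus best-fit background
s = rng.normal(size=120) ** 2           # error-normalised signal template
P = projector(Delta)
print(mu_hat_and_significance(resid, s, P))
```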
The covariance matrix, then, is: Σ_MN = E_d{ Z_M Z_N } = E_d{⟨s_M| P |d-b̂⟩/√(⟨s_M| P |s_M⟩)⟨d-b̂| P |s_N⟩/√(⟨s_N| P |s_N⟩)} = ⟨s_M| P /√(⟨s_M| P |s_M⟩) E_d{|d-b̂⟩⟨d-b̂|} P |s_N⟩/√(⟨s_N| P |s_N⟩) = ⟨s_M|/√(⟨s_M| P |s_M⟩) P |s_N⟩/√(⟨s_N| P |s_N⟩), where we used the symmetry and projector properties of P. To see the parallel with Demortier <cit.>, one needs to think of the background model as a linear combination of vectors in Δ. Then eq. (<ref>) defines a vector |v_M⟩ = P|s_M⟩/√(⟨s_M|P|s_M⟩), which was introduced by Demortier and is orthogonal to each of the vectors constituting the background shape. The test statistic, then, can be rewritten as t_M = (⟨d - b̂|v_M⟩)^2, and the covariance can be expressed as Σ_MN = ⟨v_M|v_N⟩. It should be noted that, since the data fluctuations d⃗ - b⃗̂⃗ contribute to the covariance matrix only through the form Fluct. ∝ E_d{|d - b̂⟩⟨d - b̂|}, the superposition principle relied on in ref. <cit.> can be derived: Σ_MN = ∑_f Σ^f_MN, where f enumerates independent fluctuations in different bins. In summary, we can estimate the autocovariance matrix of the significance field from the signal model and derivatives of the background model: Σ_MN = ⟨s_M|/√(⟨s_M| P |s_M⟩) P |s_N⟩/√(⟨s_N| P |s_N⟩), M, N ∈ℳ, P = 1 - Δ(Δ^⊺Δ)^-1Δ^⊺, Δ_i α = ∂ b_i(θ⃗)/∂θ^α|_θ⃗ = 0⃗. § JUSTIFICATION OF THE SET OF ASIMOV BACKGROUND SAMPLES In this section we compare the derived expression eq. (<ref>) for the linear approximation of the significance covariance matrix to the empirical study <cit.> and the H→γγ inspired model introduced there. To carry out the calculations we used the SigCorr package that we developed specifically for trials factor studies, and which now includes functionality for the linear approximation <cit.>. We estimate the linear approximation using eq. (<ref>) with the true parameters of the model, which were predefined in the paper. The resulting matrix, shown in figure <ref>, clearly resembles the one presented in the empirical study. We also show, in figure <ref>, the difference between the linear approximation computed on the model's true parameters (figure <ref>) and the empirical estimate. We confirm that the empirical covariance matrix is compatible with the linear approximation suggested in this paper within the accuracy of the empirical estimate. On the one hand, the compatibility of the linear approximation and the empirical study allows us to refer to the validations conducted in the empirical study, including those regarding trials factor estimation, and to re-apply them to the method suggested in this paper. The direct calculation of the up-crossings from the covariance matrix, described in <cit.>, becomes particularly appealing now, since it requires only a single fit of the statistical model to the data. The linear approximation, on the other hand, serves as the theoretical basis for the empirical set of Asimov background samples used to estimate the covariance matrix in the aforementioned work. § CONCLUSION In this work we proposed a novel method for estimating the covariance matrix of the statistical significance in new-particle searches, using a linear expansion of the statistical model around its background-only best fit to the data. In addition to the closed-form expression for the linear approximation of the significance covariance matrix, we also presented compact expressions for the best-fit signal strength and the statistical significance in this approximation.
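In code, the summary expression above amounts to computing normalised projected signal templates and taking their inner products. The sketch below assembles Σ_MN for a grid of signal templates and, as an optional cross-check in the spirit of the empirical study, compares it to a covariance estimated from Gaussian toy fluctuations. It uses plain NumPy rather than the SigCorr package, and all inputs are hypothetical placeholders.

```python
import numpy as np

def significance_covariance(Delta, signal_templates):
    """Sigma_MN = <s_M|P|s_N> / sqrt(<s_M|P|s_M> <s_N|P|s_N>).

    Delta            : (n_bins, n_params) Jacobian of the background model at the best fit
    signal_templates : (n_points, n_bins) error-normalised templates, one per grid point M
    """
    n_bins = Delta.shape[0]
    P = np.eye(n_bins) - Delta @ np.linalg.inv(Delta.T @ Delta) @ Delta.T
    V = signal_templates @ P                        # rows: <s_M| P
    gram = V @ signal_templates.T                   # <s_M| P |s_N>
    norm = np.sqrt(np.diag(gram))
    return gram / np.outer(norm, norm)

def empirical_covariance(Delta, signal_templates, n_toys=2000, seed=0):
    """Toy-based estimate within the same linear approximation, for comparison."""
    rng = np.random.default_rng(seed)
    n_bins = Delta.shape[0]
    P = np.eye(n_bins) - Delta @ np.linalg.inv(Delta.T @ Delta) @ Delta.T
    norm = np.sqrt(np.einsum("mi,ij,mj->m", signal_templates, P, signal_templates))
    resid = rng.normal(size=(n_toys, n_bins))       # d - b_hat with unit errors
    Z = (resid @ P @ signal_templates.T) / norm     # (n_toys, n_points)
    return (Z.T @ Z) / n_toys
```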
We proved that the suggested covariance matrix satisfies the superposition principle with regard to the fluctuations of the data, which makes it a good proxy for the covariance matrix constructed with the set of Asimov background samples <cit.>. Finally, we compared these two approaches on the example of an H→γγ-inspired model and showed that the deviations are compatible with the error of the set of Asimov background samples. We therefore claim that all the validations conducted in the empirical study, including those regarding trials factor estimation, hold for the linear approximation suggested in this paper, and that the linear approximation serves as a theoretical basis for the construction of the empirical set of Asimov background samples. We would like to thank Elliot Reynolds for the encouraging discussion at the HDBS Workshop at Uppsala. This research was supported by the European Union Framework Programme for Research and Innovation Horizon 2020 (2014–2021) under the Marie Sklodowska-Curie Grant Agreement No. 765710.
http://arxiv.org/abs/2307.04931v1
20230710224039
Modelling the effect of 3D temperature and chemistry on the cross-correlation signal of transiting ultra-hot Jupiters: A study of 5 chemical species on WASP-76b
[ "Joost P. Wardenier", "Vivien Parmentier", "Michael R. Line", "Elspeth K. H. Lee" ]
astro-ph.EP
[ "astro-ph.EP" ]
Ultra-hot Jupiters are perfect targets for transmission spectroscopy. However, their atmospheres feature strong spatial variations in temperature, chemistry, dynamics, cloud coverage, and scale height. This makes transit observations at high spectral resolution challenging to interpret. In this work, we model the cross-correlation signal of five chemical species – Fe, CO, H_2O, OH, and TiO – on WASP-76b, a benchmark ultra-hot Jupiter. We compute phase-dependent high-resolution transmission spectra of 3D SPARC/MITgcm models. The spectra are obtained with gCMCRT, a 3D Monte-Carlo radiative-transfer code. We find that, on top of atmospheric dynamics, the phase-dependent Doppler shift of the absorption lines in the planetary rest frame is shaped by the combined effect of planetary rotation and the unique 3D spatial distribution of each chemical species. For species probing the dayside (e.g., refractories or molecules like CO and OH), the two effects act in tandem, leading to increasing blueshifts with orbital phase. For species that are depleted on the dayside (e.g., H_2O and TiO), the two effects act in an opposite manner, and could lead to increasing redshifts during the transit. This behaviour yields species-dependent offsets from a planet’s expected K_p value that can be much larger than planetary wind speeds. The offsets are usually negative for refractory species. We provide an analytical formula to estimate the size of a planet’s K_p offsets, which can serve as a prior for atmospheric retrievals. We conclude that observing the phase-resolved absorption signal of multiple species is key to constraining the 3D thermochemical structure and dynamics of ultra-hot Jupiters. radiative transfer – methods: numerical – planets and satellites: atmospheres – planets and satellites: gaseous planets § INTRODUCTION Ultra-hot Jupiters are an extreme class of exoplanet with equilibrium temperatures greater than ∼2000 K (). They offer a unique opportunity to study atmospheric physics and chemistry under conditions that do not prevail on any of the planets in our own Solar System. To date, the formation history of ultra-hot Jupiters is largely unknown, but constraining the elemental abundance ratios of their atmospheres can shed light on their origins, accretion mechanisms, and their migration through the protoplanetary disk (). Ultra-hot Jupiters are ideal targets for atmospheric characterisation in transmission, thanks to their extended atmospheres, short orbital periods (1-2 days), and simple chemical inventory.
However, one aspect that complicates the interpretation of their spectra is their inherent “3D-ness” (). Ultra-hot Jupiters are tidally locked, which means that they have a permanent dayside and a permanent nightside with very different temperature structures and chemical compositions. On the hot, puffy dayside refractories and alkalis such as Fe, Mg, Ca, Ba, K, and Na exist in their atomic or ionised form, while molecules such as H_2, H_2O, and TiO get thermally dissociated. On the nightside, the temperature is much lower, allowing for cloud formation to occur (). The large day-night contrast results in steep thermochemical gradients and scale-height variations across the terminator region of the atmosphere, which is probed by transmission spectroscopy. Additionally, the day-night contrast drives fast winds in the order of (). The wind profile of ultra-hot Jupiters can be decomposed into two contributions: a day-to-night flow that carries material from the dayside to the nightside of the planet, and (depending on the drag conditions in the atmosphere) a superrotating jet around the equator (). Arguably the best technique for studying ultra-hot Jupiters is ground-based high-resolution spectroscopy (HRS – ). Thanks to its ability to resolve individual spectral lines and perform local measurements, HRS can shed light on atmospheric physics that is not accessible to low-resolution (i.e., HST and JWST) observations. As a planet orbits its star, its radial velocity changes and its spectral lines are periodically Doppler-shifted. This allows for the planet signal to be isolated from stellar and telluric contributions. Over the past few years, planets such as WASP-33b (e.g., ), WASP-76b (e.g., ), WASP-121b (e.g., ), KELT-9b (e.g., ), and KELT-20b (e.g., ) have been targeted by a large number of HRS observations, both in the optical and the infrared. These have enabled the detection of a plethora of chemical species[See Table 1 in <cit.> for a relatively recent overview of detected species in the atmospheres of gas giants.], as well as wind-speed measurements (e.g., ). Additionally, for various ultra-hot Jupiters, HRS observations revealed evidence for hot-spot shifts and thermal inversions on the dayside (e.g., ), cloud formation on the nightside, and asymmetries between the morning and evening limbs of the planet (). At high resolution, the “3D-ness” of ultra-hot Jupiters causes the absorption lines in their transmission spectrum to be shifted, broadened, and distorted (e.g., ). This is because stellar light rays encounter different pressures, temperatures, abundances, and line-of-sight velocities as they pass through the atmosphere. A few lines are strong enough to be seen directly, but the vast majority of the planet spectrum lies buried in stellar photon noise. One way to detect the planet signal is to cross-correlate the spectrum with a template model and combine the strengths of all the absorption lines (typically associated with a single chemical species). This results in a cross-correlation function (CCF – ), which is a measure for the similarity between the planet spectrum and the template as a function of radial velocity (i.e., Doppler shift). The total Doppler shift of the planet spectrum is induced by the systemic velocity V_sys of the star, the orbital velocity K_p of the planet, its rotation, and its atmospheric dynamics. 
However, since the (K_p, V_sys) values of a planet are known, it is possible to transform the CCF to a planetary rest frame, in which the only Doppler contributions are from rotation and dynamics. These “anomalous” Doppler shifts contain information about the 3D nature of the planet. Because ultra-hot Jupiters are tidally locked, they rotate by degrees during their transit (), assuming an edge-on orbit. This means that the transmission spectrum may probe different parts of the atmosphere at different orbital phases. At the start of the transit, the leading limb (or morning limb) is largely comprised of dayside atmosphere, while the trailing limb (or evening limb) mainly covers the nightside. Then, as the transit progresses, the dayside rotates into view on the trailing limb, and the nightside rotates into view on the leading limb (). Because the terminator regions of ultra-hot Jupiters are characterised by extreme spatial variations in temperature, chemistry, dynamics, and scale height, the rest-frame CCF can be expected to undergo substantial changes over the course of the transit. From an observational standpoint, however, “phase-resolving” the absorption signal of a species in a transiting exoplanet atmosphere is a challenge. To our knowledge, this has only been attempted for (), (), and (). For WASP-76b and WASP-121b, the CCFs of neutral iron (Fe or Fe i) show an increasing blueshift during the transit, with the peak position moving from about 0 km/s at ingress to about -10 km/s at egress (). In the case of WASP-76b, the absorption trail features a “kink” around mid-transit in the CCF map (e.g., Fig. 1 in ). Multiple mechanisms were suggested for this behaviour, including iron condensation on the leading limb of the planet (), a scale-height (temperature) difference between both limbs (), the presence of optically-thick clouds on the leading limb (), or a combination of these effects. More recently, <cit.> proposed that the planet's (spatially varying) magnetic field can also play a role. Using the VLT/ESPRESSO dataset from <cit.>, <cit.> went on to study the phase-dependent behaviour of a large number of other species in WASP-76b besides iron, namely H, Li, Na, Mg, K, Ca ii, V, Cr, Mn, Co, Ni, and Sr ii. They found that the CCFs of all species except atomic hydrogen and lithium were more blueshifted in the final quarter of the transit compared to the first quarter. Recent GEMINI-N/MAROOX-X observations of WASP-76b by <cit.> confirmed these trends. Moreover, <cit.> reported that the vast majority refractories and alkalis – species expected to be abundant on the dayside – give rise to absorption trails with the same “kink” feature as iron[Ionised calcium (Ca ii) is an exception, as its absorption originates from higher regions in the atmosphere which are likely subject to atmospheric escape.]. Based on these observations, the authors suggested that the iron signal of WASP-76b is shaped by a global mechanism that also affects other species in the optical, rather than condensation alone. In the infrared, <cit.> used CARMENES data to measure the H_2O and the HCN signals of WASP-76b. They also found substantial differences between the first and second half of the transit, both in terms of Doppler shift and CCF strength. Transit observations resolved with orbital phase are a powerful means to perform local measurements in an exoplanet atmosphere and thus obtain information about its “3D-ness”. 
For example, by dividing the VLT/ESPRESSO dataset from <cit.> into two halves, <cit.> were able to retrieve the temperature profile and iron abundance of WASP-76b at four different longitudes. Furthermore, they separately constrained the wind speeds on the trailing and leading limb of the planet. Performing similar retrieval studies in the infrared would be valuable for two reasons. Firstly, they allow to get a better handle on the planet's “3D-ness”, as different species probe different atmospheric regions. Secondly, measuring the abundances of molecules such as CO, H_2O, and OH allows to compute refractory-to-volatile ratios (e.g., Fe/O), which are important in the context of planet formation (). However, the fact that the planet is 3D will make it more difficult to make these inferences, as abundances vary spatially. Therefore, we need 3D forward models to understand how 3D effects manifest in high-resolution spectra and how to best parameterise these effects in 1D or pseudo-2D models used in retrievals. Also, we require 3D forward models to understand what we can really learn from multi-species observations. The aim of this work is to further explore the connection between the “3D-ness” ultra-hot Jupiters and their CCF signals in transmission. To this end, we build on earlier modelling work described in <cit.>. We use a 3D Monte-Carlo radiative transfer framework to simulate phase-dependent transmission spectra for different atmospheric scenarios of WASP-76b, based on outputs of a global circulation model (GCM). We then compute the CCF signals and K_p–V_sys maps for five different chemical species: Fe and TiO in the optical, and CO, H_2O, and OH in the infrared. The motivation for considering these species is that they all have distinct 3D spatial distributions across the planet. Therefore, the absorption lines associated with these species will probe different regions of the atmosphere, each with their own properties. Furthermore, the behaviour we identify for a certain species will be representative of other atoms and molecules with the same spatial distribution. For example, the signals we simulate for iron will be a good proxy for the signals of other refractories too. The structure of this manuscript is as follows. In Section <ref> we describe our WASP-76b models, our radiative-transfer framework, and methods for computing CCF signals, K_p–V_sys maps and absorption regions. In Section <ref>, we present, discuss and interpret our results. Finally, Section <ref> provides a conclusion. § METHODS §.§ Model atmospheres §.§.§ General overview In this work, we consider four different 3D models of the atmosphere of WASP-76b, based on outputs of the SPARC/MITgcm global circulation model (). For the setup of the GCM simulations, we refer the reader to <cit.> and <cit.>. All models assume solar values for metallicity and C/O ratio. We compute the abundances through chemical equilibrium, such that the number fraction of a species in a given atmospheric cell only depends on the local pressure and temperature. In addition, the GCM accounts for condensation through “rainout”, whereby a certain fraction of a species (e.g., Fe or Mg) is removed from a cell when the local temperature lies below the (pressure-dependent) condensation temperature of a condensate containing that species (e.g., ). The process is called rainout, because it assumes that condensates instantly settle to a deeper layer where they do not impact the radiation balance of the atmosphere. 
The four models are summarised in Table <ref>. Our nominal model is the same as the weak-drag model from <cit.>. It has a drag timescale τ_drag = 10^5 s, which has been found to provide a better match to the WASP-76b observations than a drag-free atmosphere (). The drag timescale represents the typical time it takes for an air parcel to lose a significant fraction of its kinetic energy. It encapsulates a number of different processes, such as turbulent mixing (), Lorentz-force braking of winds of charged particles due to the planet's magnetic field (), and Ohmic dissipation (). As a result of drag forces, the equatorial jet of the planet is suppressed, such that the atmospheric dynamics are dominated by the day-to-night flow. The second model we consider is the cold-morning-limb model from <cit.> (see “modification 2” in their Fig. 12), in which we artificially reduce the temperature of the leading limb. Consequently, the atmosphere features a strong thermal asymmetry between the (hotter) trailing limb and (cooler) leading limb. In <cit.>, we demonstrated that this model is able to reproduce the shape of the iron signal of WASP-76b (), as opposed to an atmosphere without an east-west asymmetry. In the cold-morning-limb model, the absorption lines undergo an increasing blueshift during the first half of the transit, but they retain a constant Doppler shift of about during the second half. Our third model is an optically-thick-clouds model à la <cit.>, who reported that the iron signal of can also result from an atmosphere with optically thick clouds. In one of their best-fitting models, they assume the presence of an optically thick cloud deck extending at most 10 scale heights (∼4.3 dex in pressure) above the intersection between the local temperature profile and the Al_2O_3 condensation curve. The vertical extent of the cloud is less than 10 scale heights in case the temperature profile and the condensation curve intersect again at some lower pressure. We add an optically thick cloud deck to our GCM output in exactly the same way (clouds are added post-hoc, so the temperature structure in the GCM is calculated without clouds). The rationale for selecting Al_2O_3 is that this is the cloud species with the highest condensation temperature. Hence, it will have the most drastic impact on the planet's transmission spectrum, as it can exist in hotter regions compared to other cloud species. Because cloud physics is complicated (e.g., ), this modelling approach is very much a simplification of reality. However, the model forms a good limiting case – it allows to assess the strongest impact that clouds can possibly have on the CCF signal. Our final model is the atmosphere without TiO and VO from <cit.>. It represents a scenario in which TiO and VO are cold-trapped due to condensation (). To emulate the effects of cold-trapping, the opacities of TiO and VO are set to zero during the GCM calculations. Because these molecules are important short-wave absorbers, their absence will change both the dynamics and the temperature structure of the atmosphere. As shown in <cit.>, the no-TiO/VO model naturally has a large temperature asymmetry between its trailing and leading limb, owing to a strong hotspot shift on the dayside that extends to relatively low pressures. Furthermore, our no-TiO/VO model is drag-free (τ_drag→∞), so it features an equatorial jet. The model provides a good test to assess which observational features are robust against a variety of different modelling assumptions. 
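To illustrate the post-hoc cloud prescription described above, the sketch below finds, for a single atmospheric column, the deepest intersection of the temperature profile with a condensation curve and places an optically thick cloud top at most 10 scale heights (about 4.3 dex in pressure) above it, stopping earlier if the profile re-crosses the condensation curve. The pressure grid, temperature profile, and condensation curve are hypothetical stand-ins, not the actual GCM output or Al_2O_3 data used in the paper.

```python
import numpy as np

def cloud_top_pressure(p, T, T_cond, max_scale_heights=10.0):
    """Return the cloud-top pressure for one column, or None if no cloud forms.

    p      : pressure grid [bar], ordered from high to low pressure
    T      : temperature profile on that grid [K]
    T_cond : condensation temperature of the cloud species on the same grid [K]
    """
    condensing = T < T_cond                      # levels where the condensate can exist
    if not condensing.any():
        return None
    p_base = p[condensing].max()                 # deepest level below the condensation curve
    # 10 scale heights correspond to a factor exp(10), i.e. ~4.3 dex in pressure.
    p_top = p_base / np.exp(max_scale_heights)
    # If the profile re-crosses the condensation curve higher up, stop the cloud there.
    above = p < p_base
    if (~condensing & above).any():
        p_top = max(p_top, p[~condensing & above].max())
    return p_top

# Hypothetical usage: a toy column with a thermal inversion and a toy condensation curve.
p = np.logspace(2, -5.7, 80)                     # 100 bar down to ~2 microbar
T = 1400.0 - 150.0 * np.log10(p / p.max())       # toy temperature profile [K]
T_cond = 1900.0 + 40.0 * np.log10(p)             # toy condensation curve [K]
print(cloud_top_pressure(p, T, T_cond))
```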
§.§.§ Mapping pressures onto altitudes As described in <cit.>, the GCM uses pressure as a vertical coordinate. However, to compute the transmission spectrum of the planet, the atmosphere must be defined on an altitude grid. Thus, before we can feed the models into the radiative-transfer framework, we need to perform the mapping P → z in every atmospheric column, with P the pressure and z the altitude coordinate. To this end, we follow the approach from <cit.> (see their Section 3.2), whereby we assume that the atmosphere is an ideal gas in hydrostatic equilibrium. For every atmospheric cell i, we compute the scale height as follows: H_i = k_B T_i/μ_i g_i, with k_B the Boltzmann constant, T_i the cell's temperature, μ_i its mean-molecular weight, and g_i its gravity. One important improvement we make compared to <cit.> is that we also account for mean-molecular weight variations across the atmosphere when computing H_i, in addition to temperature and gravity variations. On most of the dayside, the mean-molecular weight is significantly lower than on the nightside due to hydrogen (H_2) dissociation – lowering its value from μ ≈ 2.33 m_h to μ ≈ 1.27 m_h (with m_h the mass of a hydrogen atom). As a result, the scale-height difference between the dayside and the nightside of the models is even larger than suggested in <cit.>. We verified, however, that (not) accounting for thermal dissociation in the P → z mapping does not drastically alter the shape of the final CCF signals (see Appendix <ref>), so the results from <cit.> remain valid. Fig. <ref> shows a to-scale plot of the nominal model mapped onto its altitude grid. The bottom of the atmosphere, with and , is situated at a radius R = 1.85 R_jup. At the substellar point (on the dayside), the 10-μbar isobar lies at R = 2.44 R_jup. At the antistellar point (on the nightside), it lies at R = 2.06 R_jup. To prevent absorption lines from being “truncated” by the model boundaries in the radiative transfer, we extrapolate the entire atmosphere to a radius R = 2.64 R_jup (black dashes in Fig. <ref>), assuming that temperatures, abundances, and wind speeds remain constant above the upper GCM boundary of 2 μbar. Because the nightside has a much smaller scale height, it is extrapolated to trivially low pressures where the absorption is zero. §.§ Radiative transfer §.§.§ Monte-Carlo radiative transfer with gCMCRT To compute transmission spectra associated with the 3D model atmospheres, we use gCMCRT[gCMCRT is publicly available from https://github.com/ELeeAstro/gCMCRT ] (). gCMCRT is an updated, GPU-compatible version of Monte-Carlo radiative transfer code from <cit.>. In <cit.>, we adapted the framework for high-resolution purposes. The main advantage of gCMCRT is that it fully exploits the architecture of a GPU, which comprises hundreds to thousands of individual cores (processing units). Hence, a large number of photon packets can be simulated in parallel, making gCMCRT a lot faster than its predecessor. In <cit.>, we had to restrict our simulations to ∼10,000 wavelength points for computational reasons, but with gCMCRT we can efficiently model high-resolution spectra across the full bandwidth of instruments like VLT/ESPRESSO and Gemini-S/IGRINS. For each of the four WASP-76b models, we simulate the orbit over an angle of 31.3 degrees, covering the transit as well as ingress and egress. We compute 25 transmission spectra, equidistant in orbital phase. 
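A minimal sketch of the pressure-to-altitude mapping for a single column is given below: it integrates hydrostatic equilibrium level by level using the local scale height from eq. (<ref>), re-evaluating gravity at each altitude and applying a crude mean-molecular-weight switch to mimic H_2 dissociation. The grid values, planet mass, and dissociation criterion are illustrative placeholders rather than the actual GCM inputs; only the bottom-of-atmosphere radius follows the value quoted above.

```python
import numpy as np

K_B = 1.380649e-23        # J / K
M_H = 1.6735575e-27       # kg, mass of a hydrogen atom
G = 6.674e-11             # m^3 kg^-1 s^-2

def pressure_to_altitude(p, T, M_planet, R_bottom):
    """Map a pressure grid (ordered from the bottom of the atmosphere upwards)
    onto radii [m], assuming an ideal gas in hydrostatic equilibrium."""
    r = np.empty_like(p)
    r[0] = R_bottom
    for i in range(len(p) - 1):
        # Crude placeholder for H2 dissociation: mu drops from 2.33 m_H to 1.27 m_H
        # in the hottest, low-pressure regions.
        mu = 1.27 * M_H if (T[i] > 2500.0 and p[i] < 1e3) else 2.33 * M_H
        g = G * M_planet / r[i] ** 2
        H = K_B * T[i] / (mu * g)                      # local scale height
        r[i + 1] = r[i] + H * np.log(p[i] / p[i + 1])  # dz = H d(ln p)
    return r

# Hypothetical dayside-like column:
p = np.logspace(7, -1, 60)            # 100 bar .. 1 microbar, in Pa
T = np.linspace(2200.0, 3400.0, 60)   # toy temperature profile [K]
R_JUP, M_JUP = 7.1492e7, 1.898e27
radii = pressure_to_altitude(p, T, M_planet=0.92 * M_JUP, R_bottom=1.85 * R_JUP)
print(radii[-1] / R_JUP)              # top-of-model radius in Jupiter radii
```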
Furthermore, we assume an edge-on orbit, a semi-major axis of 0.033 AU, a stellar radius of 1.73 R_sun, and an orbital period of 1.81 days – commensurate with the parameters of the WASP-76 system (). We ignore effects of limb darkening as these were reported have a negligible impact on the Doppler shifts obtained from cross-correlation (). As discussed in <cit.>, Monte-Carlo radiative transfer is a stochastic technique. To compute a transmission spectrum, we initialise n photon packets with a random impact parameter and impact angle at each wavelength. During ingress and egress we only illuminate the part of the limb that is blocking the star. The spectrum converges to the true solution in the limit n →∞ (we use n = 10^5 in this work). For each photon packet, we compute the optical depth τ along the line of sight, whereby we Doppler-shift the opacities in each atmospheric cell according to the local line-of-sight velocity v_los that results from winds and planetary rotation (see Fig. <ref>). We refer to Section 3.3 in <cit.> for the relevant equations. Because we account for scattering through absorption cross-sections (a treatment justified in transmission as scattering causes photons to depart from the line of sight and not contribute to the flux), we effectively use gCMCRT as a randomised-transit-chord algorithm. The propagation direction of the photon packets does not change after their initialisation. Once the optical depth associated with the photon packets has been computed, the “transit area” A_p(λ) of the planet can be found from () A_p(λ) = A_0 + A_annu⟨ 1 - e^-τ⟩|_λ, with A_0 the projected area of the planetary interior and A_annu the area of the atmospheric annulus (extending from the bottom to the top of the model atmosphere). The angle brackets imply an average over all photon packets with wavelength λ. During ingress and egress, we scale down the value of A_p with the fractional overlap (< 1) between the stellar and the planetary disk to obtain the correct transit depth. As in <cit.>, we also compute spectra associated with individual sectors on the limb (see their Fig. 3): the trailing equator, the trailing pole(s), the leading pole(s), and the leading equator. The trailing (leading) equator is the limb region between -45^∘ and +45^∘ latitude that is last (first) to appear in front of the star during ingress. The trailing (leading) poles are the regions between -90^∘ and -45^∘, and +45^∘ and +90^∘ that are last (first) to appear in front of the star during ingress. All sectors span a quarter of the limb, but as shown in Fig. 3 in <cit.>, the poles are disjoint. For a tidally locked planet, the trailing regions rotate towards the observer, while the leading regions rotate towards the star (away from the observer). To compute spectra for each sector, we also use equation <ref>, but we only perform the average over the photon packets impinging on that sector. §.§.§ Modelling spectra in the optical (Fe and TiO signals) In the optical, we model the transit of the four WASP-76b models across the full ESPRESSO wavelength range (0.38–0.79 μm) at a spectral resolution R = 300,000 (>2× the ESPRESSO resolution). This results in a total of ∼220,000 wavelength points[With 10^5 photon packets per wavelength, this means that the total number of photon packets simulated across the spectrum is of the order 10^10.]. For memory-related reasons, we split the computation in two batches of ∼110,000 wavelength points and we stitch the spectra together at the end. 
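The essence of the randomised-transit-chord estimate in eq. (<ref>) can be written compactly: draw random impact parameters and angles over the limb annulus, compute the line-of-sight optical depth of each chord (with the opacities Doppler-shifted by the local line-of-sight velocity inside that routine), and average 1 - e^(-tau). The sketch below assumes a user-supplied `chord_optical_depth` function and strongly simplifies the geometry; it is meant to convey the Monte-Carlo averaging, not to reproduce gCMCRT.

```python
import numpy as np

def transit_area(chord_optical_depth, wavelength, R_interior, R_top,
                 n_packets=10**5, seed=0):
    """Monte-Carlo estimate of A_p(lambda) = A_0 + A_annu * <1 - exp(-tau)>.

    chord_optical_depth(b, phi, wavelength) must return the line-of-sight optical
    depth of a chord with impact parameter b [m] and impact angle phi [rad],
    including any Doppler shift of the opacities from the atmosphere model.
    """
    rng = np.random.default_rng(seed)
    # Sample impact parameters uniformly in projected area over the annulus.
    b = np.sqrt(rng.uniform(R_interior**2, R_top**2, n_packets))
    phi = rng.uniform(0.0, 2.0 * np.pi, n_packets)
    tau = np.array([chord_optical_depth(bi, pi_, wavelength) for bi, pi_ in zip(b, phi)])
    A_0 = np.pi * R_interior**2
    A_annu = np.pi * (R_top**2 - R_interior**2)
    return A_0 + A_annu * np.mean(1.0 - np.exp(-tau))

# Purely illustrative toy opacity profile (isothermal, exponential in altitude):
def toy_tau(b, phi, wl, R0=1.85 * 7.1492e7, H=1.2e6, tau0=50.0):
    return tau0 * np.exp(-(b - R0) / H)

print(transit_area(toy_tau, wavelength=0.5e-6,
                   R_interior=1.85 * 7.1492e7, R_top=2.64 * 7.1492e7))
```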
Since we read all opacity data at once at the start of the simulation, the GPU memory needs to hold the full 3D opacity structure of the atmosphere at each wavelength. In the radiative transfer, we include (continuum) opacities associated with H_2, He, and H scattering, collision-induced absorption (CIA) by H_2-H_2 and H_2-He, and bound-free and free-free transitions of H^-. References to these opacities can be found in Table 2 in <cit.>. Also, we consider the following line species: Fe, Fe ii, K, Na, Ti, Mn, Mg, Cr, Ca ii, TiO, VO, H_2O, and OH. Atomic opacities are taken from the <cit.> database and we apply pressure broadening using a code based on <cit.>. In <cit.>, the atomic opacities were generated with () and no pressure broadening was applied. Furthermore, imposed a line-wing cut-off, as opposed to our current treatment. The opacities of TiO and VO are from the EXOPLINES database (), and were generated by <cit.> using the TOTO () and the VOMYT () line lists. For H_2O, we use the POKAZATEL line list (). Finally, the OH opacities are taken from (). Compared to <cit.>, we thus make a total of four changes to the radiative transfer. Firstly, we use iron line lists with pressure broadening and no line-wing cut-off, and we use opacities for a larger number of species. Secondly, we account for variations in mean-molecular weight when evaluating the scale height (see Section <ref>). Thirdly, we reduce the spectral resolution from 500,000 to 300,000. Finally, we consider the full ESPRESSO wavelength range instead of a small set of ∼10,000 wavelength points. Fig. <ref> in Appendix <ref> depicts the effect that each of these changes has on the iron signal of the cold-morning-limb model originally presented in <cit.>. The figure shows that the “new” iron opacities and the new resolution do not significantly impact the CCF map. As expected, the new scale heights and the new wavelength range lead to the biggest changes, but the overall trends in the CCF map remain the same. §.§.§ Modelling spectra in the infrared (CO, H_2O, and OH signals) In the infrared, we model the transit of the four WASP-76b models across the full IGRINS wavelength range (1.43–2.42 μm) at (∼3× the IGRINS resolution). This leads to ∼71,000 wavelength points. We can afford a lower resolution here as the absorption features of the relevant molecules tend to be intrinsically broader than in the optical, so they can still be resolved at a lower resolution. We performed a comparison similar to Fig. <ref> to verify that the Doppler shifts obtained from cross-correlation remain the same (within 0.5 km/s) at higher spectral resolutions. To compute the infrared spectra, we consider the same continuum opacities as in the optical. Additionally, we use the line species CO (), H_2O (), OH (), CH_4 (), CO_2 (), HCN (), and NH_3 (). §.§ Computing observables §.§.§ CCF maps For each transit, we cross-correlate all 25 spectra with a template – see Section 3.6 in <cit.> for relevant equations. This gives rise to a CCF map with Doppler shift (radial velocity, or RV) as a horizontal coordinate and orbital phase as a vertical coordinate. We compute CCF maps for Fe and TiO (based on the optical spectra), as well as for CO, H_2O, and OH (based on the infrared spectra). To generate the template for a species X, we compute the mid-transit spectrum of the nominal model without Doppler shifts, whereby we only include the opacities of the continuum and X. 
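For concreteness, a minimal version of the cross-correlation step is sketched below: the template is Doppler-shifted over a grid of radial velocities and correlated with each transit spectrum (continuum-subtracted, as described next), building up the phase-resolved CCF map. The interpolation-based Doppler shift and the plain dot-product definition of the CCF are common simplifications standing in for the exact equations referenced above; the array names are placeholders.

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def doppler_shift(wavelength, template, rv_kms):
    """Shift a template by a radial velocity (positive = redshift) via interpolation."""
    return np.interp(wavelength, wavelength * (1.0 + rv_kms / C_KMS), template)

def ccf_map(wavelength, spectra, template, rv_grid):
    """Cross-correlate each phase's spectrum with a Doppler-shifted template.

    spectra : (n_phases, n_wavelengths) continuum-subtracted transit spectra
    returns : (n_phases, n_rv) CCF map
    """
    shifted = np.stack([doppler_shift(wavelength, template, rv) for rv in rv_grid])
    return spectra @ shifted.T

# Hypothetical usage (arrays to be supplied by the spectrum pipeline):
# rv_grid = np.arange(-150.0, 150.1, 1.0)   # km/s
# ccf = ccf_map(wl, spectra, template, rv_grid)
```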
Before we perform the cross-correlation, we subtract the continuum from both the templates and the spectra. We do this by splitting a spectrum into bins of 1000 wavelength points and fitting a low-order polynomial to the minima of these bins. We then subtract the polynomial from the spectrum to obtain a “flat” baseline. This procedure mimics the steps taken in the analysis of real high-resolution data. We also compute CCF maps associated with the four limb sectors. As demonstrated in <cit.>, the CCF map of the full limb can be interpreted as the sum of the CCF maps of the individual sectors, thanks to the linearity of the cross-correlation. The benefit of this approach is that it allows to link certain features of the CCF map to specific atmospheric regions. §.§.§ K_p–V_sys maps In most high-resolution datasets, the CCF values associated with individual integrations must be “stacked” across the whole transit to get a strong enough planet detection. A common way to do this is by constructing a map (e.g., ). The signal emerging in the map can be seen as a time average, because it is a sum over all orbital phases. Once the CCF map of a certain species is computed, we obtain the corresponding K_p–V_sys map by integrating the CCF values along a curve of the form v(ϕ) = V_sys + K_psin(ϕ), with v(ϕ) the radial velocity at phase angle ϕ∈ [-15.7^∘, +15.7^∘], V_sys the systemic velocity, and K_p the orbital velocity. In other words: SNR(K_p, V_sys) = 1/ξ∑_i^N_ϕCCF(ϕ_i, v(ϕ_i) ). In this equation, SNR is the value of the K_p–V_sys map at (K_p,V_sys), ξ is a scaling factor, and N_ϕ the number of simulated transit spectra. For each orbital phase, we obtain the CCF value at v(ϕ_i) by linearly interpolating between the two values at the nearest radial velocities in the CCF map. §.§ Computing absorption regions Following the approach from <cit.>, we also compute absorption regions for each of the atmospheric models (see their Section 3.2.2). The information needed to infer these regions is a byproduct of the radiative transfer. The idea is that the spectrum does not contain any information about parts of the atmosphere where all the light is absorbed (e^-τ∼0) or where all the light is transmitted (e^-τ∼1). Instead, the observation probes the region where the transition from optically thick to optically thin occurs. Hence, given a wavelength λ, we define the absorption region as being spanned by all transit chords that satisfy β < e^-τ < 1 - β. In <cit.>, we opted for β = 0.1 and β = 0.01, and we named the corresponding regions the 10–90% and the 1–99% absorption regions, respectively. The condition β < e^-τ < 1 - β only constrains the extent of the absorption regions in the altitude direction. However, to obtain a region that is finite along the line of sight as well, we only select the central part of the transit chords where the total optical depth increases from βτ to (1-β)τ[As motivated in <cit.>, this definition ensures that an absorption region is symmetric about the limb plane in the limit of a uniform 1D atmosphere.]. These two conditions allow us to infer the approximate regions that are probed by the transmission spectrum at a certain wavelength. Because we “truncate” the transit chords along the line of sight, the extent of the absorption regions is also independent of the (arbitrary) upper model boundary. § RESULTS & DISCUSSION §.§ 3D temperatures, abundances, and line-of-sight velocities Fig. 
<ref> shows the temperature structure of the four WASP-76b models from Table <ref> in the equatorial plane. As discussed in Section <ref>, the daysides are more “puffy” than the nightsides, owing to their higher temperature and lower mean molecular weight. The daysides also feature a strong thermal inversion. For example, at the substellar point in the nominal model, the temperature increases from at to ∼3500 K at 1 mbar. The nightside does not feature a thermal inversion and this is the reason why the cloud deck mostly spans 10 scale heights in the optically-thick-clouds scenario. At the antistellar point, the temperature drops from ∼1700 K at 1 bar to ∼1000 K at 10 μbar. Fig. <ref> shows the abundances of Fe, CO, H_2O, OH, and TiO across the equatorial plane. All species have a unique 3D spatial distribution. Iron is abundant on the dayside, but absent on the nightside due to condensation. Water, on the other hand, is abundant on the nightside, but subject to thermal dissociation on the dayside (). These “mirrored” chemical distributions imply that iron lines mainly probe the dayside of the planet, while the water lines mainly probe the nightside. The CO abundance is nearly constant across the atmosphere – its value does not vary by more than ∼0.3 dex. Because CO has a strong triple bond between its constituent atoms, it is neither affected by condensation, nor by thermal dissociation. In fact, the only ultra-hot Jupiter hot enough to dissociate CO is KELT-9b (). Consequently, the absorption lines of CO are the most reliable gauge of the 3D temperature structure and wind profile of the planet. They only probe spatial variations in temperature and dynamics, and not so much in chemistry. See <cit.> for further discussion. The distribution of OH is a bit more complicated. On the dayside, the molecule forms when water is dissociated into OH and atomic hydrogen. However, higher up in the atmosphere, OH itself also falls prey to thermal dissociation, producing atomic oxygen and atomic hydrogen. As a result, the OH abundance first increases with altitude and then decreases. On the nightside, OH is absent, because hydrogen and oxygen are contained in water at lower temperatures. Finally, TiO is subject to both dissociation on the dayside and condensation on the nightside. Therefore, the only observable TiO is present in a narrow region around the limb where the temperature is lower than the dissociation temperature, but higher than the condensation temperature of TiO. Fig. <ref> shows the line-of-sight velocities v_los due to winds (and planetary rotation) in the equatorial plane and the terminator plane of the nominal and the no-TiO/VO model, at mid-transit. These are the velocities by which the opacities in different cells are Doppler-shifted during the radiative transfer. v_los<0 implies that absorbers are moving towards the observer, causing a blueshift to the transmission spectrum, while v_los>0 means that absorbers are moving away, inducing a redshift. As illustrated in Fig. <ref>, the nominal model only features day-to-night winds (both planes are completely blueshifted in the top row), such that the only redshift contributions come from rotation (see bottom row). The no-TiO/VO model has an equatorial jet and this is why half of the equatorial plane is blueshifted, while the other half is redshifted. 
Note, though, that the jet only occupies a small region in the terminator plane, spanning an angle of ∼25 degrees at pressures ≲ 1 bar on both limbs (the latitudinal extent of the jet is of the order of the equatorial Rossby deformation radius – see ). However, despite the smaller “effective area” occupied by superrotating winds compared to the day-to-night flow, it may still possible to make inferences about the equatorial jet based on the absorption signal of the full limb (e.g., ). §.§ Prelude: the nominal vs. the cold-morning-limb model Fig. <ref> shows the CCF signals of the nominal model for each of the five chemical species. Remarkably, they all have very similar absorption signatures, except for TiO. However, in the nominal model, the iron signal does not feature the “kink” that has been observed in the real data of WASP-76b ( – see also ). The cold-morning-limb model, on the other hand, does give rise to a kink in the iron signal (blue curve in the right panel of Fig. <ref>), whereby the blueshift (RV < 0) increases during the first half of the transit and remains constant during the second half. For a full discussion of this behaviour, we refer to <cit.>. As opposed to the nominal model, the cold-morning-limb model shows a range of different CCF signals for the five species. In the following sections, our aim is to understand how these CCF signals come about and what physics causes the differences between the models. To build some basic intuition, we start by discussing the CCF maps of the nominal model. Subsequently, we focus on the behaviour of the other three models: the cold-morning limb model, the optically-thick clouds model, and the no-TiO/VO model. §.§ CCF maps for the nominal model Fig. <ref> depicts the CCF maps of the four limb sectors and the full limb of the nominal model, for all species. The CCF maps of the individual sectors can be seen as the “building blocks” of the more complicated absorption signal that emerges from the entire atmosphere. This is because the the CCF map of the full limb is the sum of the maps of the limb sectors. Figs. <ref> and <ref> show the absorption regions that are probed on the trailing part of the equatorial plane by (randomly chosen) line cores of Fe, CO, H_2O, OH, and TiO, respectively. §.§.§ Recap: two important effects for iron <cit.> (see their Fig. 9) showed that there are two important effects that drive the Doppler shift of iron in the nominal model: (i) the variation in the signal strengths of the limb sectors during the transit, and (ii) atmospheric dynamics. In tandem, these effects cause the absorption signal to become increasingly blueshifted during the transit, even though there is no significant thermal or chemical asymmetry between the planet's trailing and leading limbs. On top of this “baseline behaviour”, limb asymmetries (e.g., ) can further enhance changes in the Doppler shift with orbital phase (see Section <ref>). Effect (i) is due to the day-night temperature contrast of the planet, in combination with tidally-locked rotation. Ignoring the contribution from winds, the absorption signal of the redshifted leading limb becomes weaker during the transit, as the dayside rotates out of view. On the other hand, the signal of the blueshifted trailing limb becomes stronger, as the dayside rotates into view. 
Therefore, in a scenario without winds, the absorption signal of the full limb transitions from being mainly redshifted in the first half of the transit to being mainly blueshifted in the second half[See the third row of Fig. 9 in <cit.> (nominal model w/o winds).]. Effect (ii), atmospheric dynamics, impacts the absorption signal of the planet in multiple ways. Firstly, winds shift the whole CCF to negative radial velocities, as the day-to-night flow causes the whole terminator plane to be blueshifted (see Fig. <ref>). Secondly, they “smoothen” the CCF map, resulting in a more gradual change of the net Doppler shift as a function of orbital phase. Finally, the angle between the polar wind vectors and the line-of-sight vector becomes smaller during the transit (see Fig. 10 in ). This projection effect causes the absorption signals associated with the polar sectors to become more blueshifted over time. The signal of the leading equator shows the same behaviour. §.§.§ Fe signals The first row of Fig. <ref> shows the CCF maps for iron. As mentioned in the previous paragraph, the iron signals of the nominal model were already presented in <cit.>, but not for the entire ESPRESSO wavelength range. Although the atmosphere does not feature strong limb asymmetries, the iron lines become progressively more blueshifted during the transit, owing to the effects discussed in the previous section. <cit.> (see their Section 5.1) suggested that the change in signal strength of the limb sectors was because the observation first probes the nightside on the trailing limb, and later the dayside – with the exact opposite occurring on the leading limb. However, Fig. <ref> demonstrates that something else is going on. Inside an iron line core, the absorption region lies on the dayside at every orbital phase. The reason why the signal of the trailing limb becomes stronger, though, is the fact that the projected separation (onto the limb plane) between the absorption region of the line core (in blue) and the absorption region of the continuum (in red) becomes larger during the transit. As shown in the top-left panel of Fig. <ref>, this causes an iron line to become stronger relative to its continuum, which is exactly what the magnitude of the CCF encodes. Therefore, in the case of iron, changes to a sector's signal strength are a consequence of geometry, rather than absorption regions shifting between the dayside and the nightside of the planet. However, the day-night contrast is still crucial for the projection effect to occur. Furthermore, Fig. <ref> illustrates that if iron was uniformly distributed across the atmosphere, its absorption lines would still only probe the dayside (see also the next paragraph about CO). Hence, it is not the 3D chemical map of iron that confines its absorption regions to the dayside. Instead, the scale-height difference between the dayside and the nightside causes a “shielding effect” – because the dayside is more puffy, the absorption lines probe altitudes at which the opacity of the nightside is negligible. §.§.§ CO signals The second row of Fig. <ref> shows the CCF maps of CO. Both the CCF maps and the absorption regions of CO (see Fig. <ref>) display nearly identical behaviour compared to iron. Although the abundance of CO is uniform across the atmosphere, its absorption lines virtually only probe the dayside. That is, the 10-90% absorption regions of the CO line core are situated on the dayside at all orbital phases. 
The reason for this is the “shielding effect” discussed in the previous paragraph. Stellar light rays first encounter the dayside, and the τ∼1 region lies at altitudes where the nightside does not contribute to the total optical depth. For CO, the signal strength of the trailing equator also increases during the transit, again due to a projection effect. Around the CO line plotted in Fig. <ref>, the “continuum” is caused by water absorption. As a result, the absorption region of the continuum behaves exactly like that of water in Fig. <ref> (see also the next paragraph). Interestingly, at ϕ = -11.7^∘ the CO line core and its adjacent continuum probe different sides of the planet. §.§.§ H_2O signals The third row of Fig. <ref> shows the CCF maps of water. Remarkably, the phase-dependence of the signal strengths displays the opposite behaviour compared to iron and CO. For iron and CO, the trailing-equator signal becomes stronger during the transit, but for water it becomes weaker. The absorption regions in Fig. <ref> demonstrate why this is the case. At the start of the transit, a water line core probes the nightside. Then, as the planet rotates, its absorption region shifts towards the dayside. However, because of the lack of water at low pressures (due to thermal dissociation), the absorption region is “pushed” down to higher pressures on the dayside. At these higher pressures, the absorption regions of the line core and the continuum lie closer together, which explains why the signal strength of the trailing limb decreases over the transit. Naturally, the opposite occurs on the leading limb, as shown by the CCF maps of the limb sectors in Fig. <ref>. Based on the behaviour of the individual limb sectors, one would expect the blueshift of the full limb to decrease during the transit. After all, the signal strength of the (least blueshifted) leading sectors becomes stronger over time. However, the reason why the net Doppler shift of the planet does not decrease is that the signals of the leading limb are stronger than those of the trailing limb over almost the entire transit. Therefore, the water signal of the full limb is dominated by the leading sectors, which are subject to an increasing blueshift over time. To build more physical understanding, Fig. <ref> shows the CCF maps of CO and water for two additional realisations of the nominal model: (i) a scenario with rotation only, in which the winds are zero, and (ii) a scenario with a shorter drag timescale , in which the winds are weaker (as they are subject to stronger drag) compared to the original model with . In the rotation-only case, the Doppler shifts of the individual limb sectors are constant during the transit. This means that the phase-dependence of the absorption trail of the full limb is purely governed by the varying signal strengths of the limb sectors. Therefore, in the rotation-only case, we do see that CO and water display the exact opposite behaviour: the CO signal goes from redshifted to blueshifted , while the water signal goes from blueshifted to redshifted. In the scenario with strong drag (second and fourth row in Fig. <ref>), planet rotation still dominates the shape of the absorption signals of the full limb. However, the signals are now more blueshifted because of the prevalence of day-to-night winds. As shown in Fig. <ref>, increasing the drag timescale ( ) causes the “step” feature in the planet's water signal to disappear. 
As previously mentioned, this is because the leading-limb signal becomes stronger than the trailing-limb signal over the entire transit. Hence, it is the planet's 3D wind-profile that is inducing an asymmetry in the nominal model: on the trailing limb, the variance in probed wind speeds is larger, causing the line contrast to become smaller and the CCF of the trailing sectors to become broader. Ultimately, this causes the water signal to look relatively similar to that of iron and CO, even though the water lines probe completely different regions of the atmosphere. Our result is in qualitative agreement with <cit.> and <cit.>, who also found an increasing blueshift for water with their “baseline” 3D models of WASP-76b. Furthermore, <cit.> also reported a step feature in the water signal of one of their magnetic-drag models, hinting at weaker winds and a more visible signature of planet rotation. On a final note, the absorption regions of water become very narrow towards the end of the transit (ϕ = 11.7^∘ in Fig. <ref>). This is because the distance between isobars is smaller at higher pressures. Also, the absorption regions coincide with a steep vertical gradient in the water abundance. Therefore, shifting the transit chord to a lower pressure will result in a sharp decrease in integrated abundance (and thus optical depth τ), while moving it to higher pressures will result in τ≫ 1. §.§.§ OH signals The fourth row of Fig. <ref> shows the CCF maps of OH. In the nominal model, the OH signal resembles that of iron and CO. However, the strength of the OH signal drops by a factor ∼2 during the transit. Also, Fig. <ref> demonstrates that the change in the signal strength of the trailing equator (as well as the other limb sectors) is more extreme than for the other species – at the start of the transit it is almost zero. Fig. <ref> illustrates the cause of this significant variation. Because the nightside is depleted of OH and because the higher-altitude regions on the dayside have a low OH abundance, the absorption regions of the line core and the continuum overlap at the start of the transit. This produces a negligible CCF signal. However, as the planet rotates, the more OH-abundant parts of the dayside rotate into view, and the line contrast increases. §.§.§ TiO signals The bottom row of Fig. <ref> shows the CCF maps of TiO. Whereas the other species clearly show an increasing blueshift during the transit, the blueshift of TiO is decreasing in the nominal model (albeit marginally). The reason for this is that the signal strengths of the limb sectors behave like those of water (see Fig. <ref>). Yet, in contrast to water, the signal of the trailing sectors is strong enough at the beginning to contribute to the Doppler shift of the full limb. At the end of the transit, the CCF map is dominated by the signal from the leading sectors. As shown in the figures, the signal of the (most blueshifted) trailing limb becomes weaker during the transit, as the absorption region of the TiO line core shifts from the nightside to the dayside. We note, though, that the absorption region very much hinges on the TiO abundances in the first column on the nightside (see Fig. <ref>), where the temperature profile allows for TiO to exist at all pressures. Without this column, the absorption regions would have been situated at lower altitudes, resulting in much weaker absorption lines (the column has to exist, though, as the atmosphere transitions from dayside to nightside). 
On the other hand, if the temperature gradient at the terminator was smoother, for example due to H_2 dissociation/recombination (), TiO would have existed across a wider range of longitudes. Therefore, in this model, it is the steepness of the temperature gradient at the terminator that determines whether or not TiO may be observable. §.§ CCF maps for other models Fig. <ref> shows the CCF maps computed for all models and all species (only the CCF maps of the full limb are shown). The colourmaps were normalised per row to allow for inter-model comparisons. Because TiO is cold-trapped in the no-TiO/VO model, it is not observable. Hence, there are 19 maps in total, rather than 20. Fig. <ref> depicts the absorption trails of all species in the same panel, for the nominal model and the cold-morning-limb model. §.§.§ Fe signals As expected, the cold-morning-limb model in Fig. <ref> shows a strong increase in blueshift over the first half of the transit. The reason for this is that the signals of the leading sectors are much weaker compared to the nominal model (see Fig. 13 in ). Hence, the signals of the trailing sectors already start to dominate the sum before mid-transit. After mid-transit, the blueshift remains constant, because the only contributions to the signal come from the trailing sectors. For further discussion, we refer to <cit.>. The CCF map of the optically-thick-clouds model is very similar to that of the nominal model, suggesting that adding a cloud deck has minimal impact on the iron signal. The reason for this is that the absorption regions of iron on the dayside lie at much higher altitudes compared to the optically thick clouds on the nightside (see Figs. <ref> and <ref>). Yet, the signal strength of the full limb does decrease slightly as a result of clouds. This is because the cloud deck lies at higher altitudes than the absorption regions of the continuum in the cloud-free atmosphere. Since the cloud deck is nearly symmetric, the signals of the trailing and leading limbs are affected equally – the line contrast and the magnitude of the CCF become marginally smaller, but the shape does not change. Thus, in this particular model, we find that nightside clouds are unable to mute the iron absorption signal, as it originates from too high altitudes. To make a cloud mute the absorption features of iron, as in <cit.>, it should be located at a significantly higher altitude than the continuum in the cloud-free case. Hence, based on our modelling efforts, we still find that a temperature (or scale-height) asymmetry between the trailing and leading limb is the most likely explanation for the strongly blueshifted iron signals of WASP-76b () and WASP-121b (). The iron signal of the no-TiO/VO model also exhibits stronger blueshifts compared to the nominal model. This is also related to a temperature asymmetry (see Fig. <ref>). Due to a large hotspot shift on the dayside, the 3D temperature structure of the model is lopsided, with the trailing limb being hotter and more extended than the leading limb. Hence, the signal of the blueshifted trailing limb contributes more strongly to the CCF map of the full planet. §.§.§ CO signals The behaviour of CO across the different models is very similar to that of iron (e.g., compare the first and the second rows in Figs. <ref> and <ref>). 
In the cold-morning-limb model, for example, the signal also undergoes a strong increase in blueshift during the first half of the transit, owing to the temperature asymmetry between the trailing and the leading sectors. The CCF map of CO is somewhat affected by the presence of optically thick clouds. This indicates that there are weaker CO lines that probe the atmosphere at lower altitudes, and which are thus muted by the cloud deck. For these weaker lines, the absorption regions are likely to lie partly on the dayside and partly on the nightside, as CO is equally abundant on both hemispheres. With this in mind, the (stronger) CO line considered in Fig. <ref> may not be fully representative. However, note that the blue absorption regions plotted in Fig. <ref> only pertain to the line core – the line wings, which also contribute to the CCF, must probe lower altitudes. §.§.§ H_2O signals The signals of the nominal model, the cold-morning-limb model, and the no-TiO/VO model show the same behaviour, with the no-TiO/VO signal being slightly more blueshifted due to stronger day-to-night winds. The reason for this is that the 3D spatial distribution of water in each of the models is very similar. Water is present on the nightside, as well as at higher pressures on the dayside where the scale heights on the trailing limb and the leading limb are still the same. Consequently, the temperature asymmetries in the cold-morning-limb model and the no-TiO/VO model do not manifest in the CCF maps. In contrast to iron, the presence of optically thick clouds strongly suppresses the water signal. This is because the cloud deck is situated at roughly the same altitude as the water absorption regions. Hence, the line contrast is small and the vast majority of water lines are muted. §.§.§ OH signals The OH signal of the cold-morning-limb model also features an increasing blueshift during the first half of the transit. However, the signal from the leading sectors is very weak. The reason for this is that the colder leading limb is depleted of OH, while the high-altitude part of the dayside that is in view does not have sufficient OH abundance to cause significant absorption. In the second half of the transit, the trailing sectors completely dominate the absorption signal. The same idea holds for the no-TiO/VO model, which does not have any detectable OH on the leading limb. Just like water, OH probes relatively low altitudes. Therefore, the introduction of optically thick clouds heavily mutes the OH absorption lines. At mid-transit, the absorption signal is almost zero. The strongest contributions come from the leading equator around ingress and from the trailing equator around egress. Again, however, it is questionable whether optically thick clouds allow for OH to be detected at all, given that the CCF signal only emerges from a narrow range of orbital phase angles. §.§.§ TiO signals In the cold-morning-limb model, the contribution from the leading equator is zero (not shown in a plot). This is why the TiO signal is more blueshifted than in the nominal model. Additionally, the TiO signal appears to have a more bimodal structure. As a result, the signal is “smeared” over a range of K_p and V_sys values (see Fig. <ref>), which could make TiO harder to detect in this scenario. When optically thick clouds are introduced, TiO absorption is muted in all limb sectors except the leading pole (not shown). This is why the blueshift of the full-limb signal in Fig. <ref> closely resembles that of the leading pole in Fig.
<ref>. §.§ K_p–V_sys maps §.§.§ Systematic peak offsets Fig. <ref> shows the K_p–V_sys maps associated with the CCF maps from Fig. <ref>. Because the absorption signals all exhibit Doppler shifts in the planetary rest frame, the SNR peaks in the K_p–V_sys maps are offset from (0, 0) km/s. All SNR peaks, except for the TiO signal of the nominal model, are consistently located at lower V_sys and lower K_p values than would be expected based on the orbital motion of the planet and the radial velocity of the star. The red dashed curves in Fig. <ref> illustrate why this is the case. These are the curves that give rise to the highest integrated SNR value (equation <ref>). Because all absorption signals are blueshifted on average, the best-fitting curve has a negative horizontal offset, yielding Δ V_sys < 0. Also, because the signals become more blueshifted over time (with the exception of TiO in the nominal model), the slope of the curve is negative, corresponding to Δ K_p < 0. Along the V_sys axis, the offset of the SNR peak is typically a few km/s. To zeroth order, Δ V_sys can be interpreted as the average wind speed across the terminator. The maximum shift we encounter is Δ V_sys = -7 km/s for the iron signal of the no-TiO/VO model. Intuitively, it makes sense that the shifts are the largest for this model, as it has no drag and thus the highest wind speeds. Along the K_p axis, the peak offset can be more significant. Typically, the K_p shift is much larger than the Doppler shift measured from the CCF at any orbital phase. This is because the value of Δ K_p does not encode information about the absolute value of the line-of-sight velocities. Rather, it reflects the rate of change of the planet's Doppler shift during the transit (and thus how steep the slope of the best fitting absorption trail needs to be). For this reason, signals with a strong phase-dependence show the most extreme values of Δ K_p. For example, both the cold-morning-limb model and the no-TiO/VO model yield Δ K_p≈ -20 km/s for iron and CO. §.§.§ The signature of planet rotation The K_p–V_sys maps in Fig. <ref> (nearly) all show negative K_p and V_sys offsets. However, there are theoretical scenarios in which Δ K_p and/or Δ V_sys can be positive. To explore these scenarios, we revisit the two alternative versions of the nominal model presented in Fig. <ref>. Their K_p–V_sys maps are depicted in Fig. <ref>. The left panels show the SNR peaks of CO and water for the model with rotation only. In this scenario, the SNR peaks acquire a “boomerang” shape. That is, there is a family of (K_p, V_sys) values that “fit” the absorption signal of the planet equally well. Fig. <ref> demonstrates why this is the case. For a model with rotation only, the absorption signal of the planet clearly features two components: a blueshifted component associated with the trailing limb and a redshifted component associated with the leading limb. Such a signal can be described by different trails that give rise to roughly the same integrated SNR: a trail fitting the trailing limb (Δ K_p≈ 0; Δ V_sys<0), a trail fitting the leading limb (Δ K_p≈ 0; Δ V_sys>0) and a trail that fits both components (large K_p offset; V_sys≈0). The latter has a negative slope for CO (Δ K_p < 0), but a positive slope for water (Δ K_p > 0), owing to the 3D distribution of these species across the atmosphere. The right panels in Fig. <ref> show how the K_p–V_sys maps of the planet change when weak winds are added to the model (the “strong drag” scenario).
Because of the presence of day-to-night winds, the planet signal becomes more blueshifted and the SNR peaks shift to negative Δ V_sys. Also, the “boomerang” shape partly disappears, as winds tend to make the absorption trail of the planet smoother. However, especially for water, there are still a wide range of combinations that fit the absorption signal of the planet well. §.§.§ Estimating the K_p shift of a planet due to rotation In the scenario where planet rotation is dominating the Doppler shift of the planet, we can estimate the K_p shift imposed on the planet signal. To this end, we assume that the absorption signal is dominated by the leading limb at the start of the transit and by the trailing limb at the end of the transit. During transit, it holds that cos(ϕ) ≈ 1, such that the change in radial velocity ΔRV between two phases due to orbital motion is ΔRV = 2 π K_p( Δϕ/360^∘), with Δϕ the phase difference (in degrees). Therefore, in the planetary rest frame, the K_p shift resulting from planet rotation can be computed from Δ K_p = ΔRV/2π( 360^∘/Δϕ), with ΔRV the radial-velocity difference between the trailing and the leading limb, and Δϕ the phase difference between ingress and egress. The most extreme RV value that can be acquired by both limbs is ± v_eq, the rotational velocity of the planet at the equator. Hence, a rough approximation[In reality, the average Doppler shift across the limb will be smaller than v_eq, as regions away from the equator lie closer to the rotation axis. Nonetheless, Fig. <ref> demonstrates that the assumption ΔRV = 2v_eq is not too unrealistic, as the peaks of the CCFs of the full limb lie relatively close to v_eq (± 5.3 km/s). This is because the signal from the equatorial sectors is stronger than that of the polar sectors.] is ΔRV≈±2v_eq, resulting in Δ K_p≈±v_eq/π( 360^∘/Δϕ). Invoking v_eq = 2π R_p / P, Δϕ = 2 arcsin(R_*/a) ≈ 2 R_*/a, 360^∘ = 2π rad, and Kepler's third law, we can also write equation <ref> as Δ K_p≈±R_p/R_*( 2π GM_* /P)^1/3, with R_p the planet radius, P the orbital period, a the semi-major axis of the orbit, R_* the stellar radius, M_* the stellar mass, and G the gravitational constant, respectively. For signals dominated by the leading limb in the first half of the transit and by the trailing limb in the second half (e.g., Fe and CO), Δ K_p will be negative. For signals dominated by the trailing limb in the first half of the transit and by the leading limb in the second half (e.g., H_2O), Δ K_p will be positive. Hence, the sign of Δ K_p depends on the 3D distribution of a species across the atmosphere. Evaluating equation <ref> for the parameters of WASP-76b[See e.g., ], we find Δ K_p≈ ±21 km/s, which is in rough agreement with the K_p shifts reported for CO and water in Fig. <ref>. For WASP-121b, another well-studied ultra-hot Jupiter, we find Δ K_p≈ ±28 km/s. This demonstrates that the K_p offsets observed for a planet can be much larger than the actual line-of-sight velocities in its atmosphere.
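To make the estimate above concrete, the short Python snippet below evaluates equation <ref> numerically. It is an illustrative sketch and not code from this work; the planet and stellar parameters are approximate literature values, inserted here purely for demonstration.

import numpy as np

G = 6.674e-11                                        # gravitational constant [m^3 kg^-1 s^-2]
R_sun, M_sun, R_jup = 6.957e8, 1.989e30, 7.149e7     # solar radius [m], solar mass [kg], Jupiter radius [m]

def delta_kp(r_p, r_star, m_star, period_days):
    """|Delta K_p| = (R_p / R_*) * (2 pi G M_* / P)^(1/3), returned in km/s."""
    period = period_days * 86400.0
    return r_p / r_star * (2.0 * np.pi * G * m_star / period) ** (1.0 / 3.0) / 1e3

# approximate system parameters (assumed values, for illustration only)
print(f"WASP-76b : +/- {delta_kp(1.83 * R_jup, 1.77 * R_sun, 1.46 * M_sun, 1.81):.0f} km/s")
print(f"WASP-121b: +/- {delta_kp(1.81 * R_jup, 1.46 * R_sun, 1.36 * M_sun, 1.27):.0f} km/s")

With these inputs the snippet returns roughly ±21 km/s and ±28 km/s, consistent with the values quoted above.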
§.§.§ Comparison to transit observations A considerable number of HRS observations of ultra-hot Jupiters have revealed peak offsets in K_p–V_sys maps that hint at atmospheric dynamics and/or 3D spatial variations in temperature and chemistry. <cit.> and <cit.> presented K_p–V_sys maps for a plethora of species in the atmosphere of WASP-76b (H, Li, Na, Mg, K, Ca ii, V, Cr, Mn, Co, Ni, Sr ii, VO, Ca, Ba ii, O, Fe, and Fe ii). For the vast majority of these species, they reported negative K_p and V_sys offsets, which is in good agreement with this work (see Fig. <ref>). Note that many of the species observed in the optical are refractories and alkalis, which are abundant on the dayside of the planet. Therefore, their absorption signals should behave in the same way as those of iron, CO, and OH modelled in this work. For species such as H, O, and Ca ii, <cit.> and/or <cit.> found positive K_p offsets. This is because the absorption lines of these species probe higher regions of the atmosphere that are likely prone to atmospheric escape (e.g., ). Such physics is not included in our model. In the infrared, CARMENES observations of WASP-76b revealed positive K_p offsets for H_2O, HCN (+50 km/s, ), and OH (+35 km/s, ), suggesting a decreasing blueshift of the absorption lines over the course of the transit. For water, our models are able to produce a positive Δ K_p when the line-of-sight velocities are dominated by planet rotation (see Fig. <ref>). However, in this scenario the expected offset would only be of the order of the rotationally induced ∼20 km/s derived above. A +50 km/s shift in K_p is hard to explain with our current framework. As for OH, our models predict Δ K_p to be negative, rather than positive. Further observations[The studies by <cit.> and <cit.> were based on the same archival CARMENES data, so further observations would be needed to rule out the presence of any systematics in the dataset.] and/or modelling studies will be required to elucidate the differences between our models and the findings by <cit.> and <cit.>. Optical transmission observations of WASP-121b have shown negative K_p and V_sys offsets for Fe (), Cr, V, Fe ii (), Ca, K, Co, Cu, V ii, Ti ii, Mg, and Sc ii (). Many of the more “exotic” species reported by <cit.> only showed weak or tentative detections, so their (K_p, V_sys) values should be treated with caution. However, the observations demonstrate that the majority of refractories and alkalis undergo increasing blueshifts during the transit, just like on WASP-76b. <cit.> also reported a few species with Δ K_p≈ 0 km/s and Δ V_sys<0 km/s (Mn, Co ii, Ni), which could imply that these species are only observable on the trailing limb of the planet. More recently, <cit.> also recovered negative K_p and V_sys offsets when cross-correlating ESPRESSO data of WASP-121b with a template containing Fe, Mg, Cr, Ti, V, Na, and Ca lines. For Ca ii, <cit.> found a positive Δ K_p, again indicating that its absorption lines probe higher regions of the atmosphere with different dynamics. For KELT-20b/MASCARA-2b, <cit.> and <cit.> reported “double-peak” features in the K_p–V_sys maps of neutral iron. These could hint at the fact that planet rotation is the dominant contributor to the line-of-sight velocities, such that the absorption signal is made up of separate components associated with the trailing and leading limb, respectively (as in Figs. <ref> and <ref>). What is puzzling, however, is that the SNR peaks lie 70–80 km/s apart along the K_p axis, while equation <ref> only predicts Δ K_p≈ 20 km/s for KELT-20b. Furthermore, <cit.> observed five transits of KELT-20b, and only in two transits was the double-peak feature recovered. Other transiting ultra-hot Jupiters for which peak offsets in K_p–V_sys maps were found are WASP-189b (), HAT-P-70b (), and KELT-9b (). For 18 species present in the atmosphere of KELT-9b, <cit.> extracted K_p values spanning a range of 60 km/s (see their Fig. 6).
Interpreting their observations with our current set of models is hard, as the equilibrium temperature of KELT-9b is roughly two times that of WASP-76b. §.§.§ A note on high-resolution retrievals of ultra-hot Jupiters In this work, we showed that different species are subject to different Doppler shifts and K_p–V_sys offsets in transmission. At the moment, retrieval frameworks typically include one Δ K_p and one Δ V_sys parameter to describe the “bulk” Doppler shift of the entire spectrum as a function of phase (). In the optical, such an approach is justified for the vast majority of species (i.e., most alkalis and refractories) as these are expected to have similar distributions across the atmosphere, resulting in similar K_p–V_sys offsets (e.g., Figs. 1 and 9 in ). However, as noted by <cit.> and <cit.>, care should be taken with species that probe the planet's exosphere (e.g., H, O, Fe ii, Mg ii, and Ca ii). Both studies excluded these species from their retrievals, as the exosphere is non-hydrostatic (impacting line strengths) and features strong outflows (impacting line shapes and positions). A 1D retrieval model with one set of parameters and a single scale factor (a parameter controlling the line strengths of the model) cannot account for the behaviour of all species at the same time. Following <cit.>, a good practice for high-resolution retrievals would be to plot the K_p–V_sys maps of all species to be included in the forward model, and examine their peak offsets. If the peak offsets of two (groups of) species are substantially different, they may require their own set of (Δ K_p, Δ V_sys) parameters. Another option is to run a separate retrieval for each (group of) species. The latter does not increase the complexity of the forward model, but doubles the computing time. In the infrared, things are more intricate than in the optical as water and CO – the two most prominent species – probe completely different parts of the atmosphere (see Fig. <ref>), each with their own temperature, dynamics, and scale height. Therefore, fitting the same Δ K_p, Δ V_sys, and temperature profile to the absorption lines of both species may be problematic. The most straightforward solution would be to run two separate retrievals for water and CO. Contrary to CO, which probes the dayside during the entire transit, the absorption regions of water shift across the terminator as a function of orbital phase. On the trailing limb, the absorption regions shift from the nightside to the dayside, while they shift from the dayside to the nightside on the leading limb. Therefore, water would be the ideal molecule to study with a 2D retrieval model (), which is able to assign separate abundances to the trailing and leading limb of the planet, respectively. § CONCLUSION Developing a deeper understanding of the “3D-ness” of exoplanet atmospheres is crucial to fully leverage the information content of both their high-resolution and low-resolution spectra. With JWST delivering its first data (e.g., ) and a new generation of ground-based telescopes (E-ELT, GMT, TMT) on the horizon, modelling studies that bridge the gap between theory and observation play an essential role in the interpretation of current and future observations. In this work, we simulated the cross-correlation signals of Fe, CO, H_2O, OH, and TiO for four different 3D models of a benchmark ultra-hot Jupiter (WASP-76b) in transmission.
Because ultra-hot Jupiters show extreme spatial variations in temperature and chemistry across their terminators, their transmission spectra contain a wealth of information about the 3D structure of the atmosphere. VLT/ESPRESSO and GEMINI-N/MAROON-X are able to phase-resolve the absorption signals of ultra-hot Jupiters in the optical (). With novel spectrographs such as GEMINI-S/IGRINS () and VLT/CRIRES+ (), this will now also be possible in the infrared. Moreover, once the E-ELT is on sky, phase-resolving the CCF will become standard practice for any high-resolution observation of a hot gas giant, as the signal-to-noise will be high enough to detect the planet in only a fraction of a transit. Also, the E-ELT will offer the opportunity to take ingress and egress spectra, whereby only a part of the planet disk is blocking the star. We summarise our most important findings below: ∙ For species that probe the dayside of an ultra-hot Jupiter (refractories like Fe, or stable molecules like CO and OH), the net blueshift should increase during the transit, resulting in a negative K_p offset. This holds even in the absence of an east-west asymmetry (e.g., due to a hotspot offset). The increasing blueshift is due to the combined effect of the 3D spatial distribution of the species and planet rotation. Our findings are in good agreement with optical high-resolution observations of WASP-76b and WASP-121b (e.g., ). Conversely, for species that probe the nightside (such as H_2O and TiO), their 3D spatial distribution and planet rotation act in an opposite manner. Depending on the 3D wind profile of the planet, this can lead to weaker blueshifts with orbital phase, or even increasing redshifts. Such behaviour results in a positive K_p offset. ∙ The K_p offset of a species reflects the rate of change of its Doppler shift in the planetary rest frame. Therefore, as opposed to Δ V_sys (which is of the same order as the wind speeds), Δ K_p can be much larger than the line-of-sight velocities in the planet's atmosphere at any time. Δ K_p < 0 when the Doppler shift becomes more negative during the transit, while Δ K_p > 0 when the Doppler shift becomes more positive. In this work, we derived a formula to estimate the typical K_p offset of a planet. For WASP-76b and WASP-121b, Δ K_p can be as large as ±21 km/s and ±28 km/s, respectively. ∙ When performing atmospheric retrievals on transmission spectra of ultra-hot Jupiters, separate temperature profiles and (Δ K_p, Δ V_sys) values should be retrieved for species that probe the dayside and the nightside of the atmosphere, respectively (e.g., CO and H_2O in the infrared). Our analytical formula can provide a reasonable prior for the range of possible departures from a planet's orbital K_p value. ∙ For WASP-76b, our nominal GCM model does not predict strong differences between the cross-correlation signals of Fe, CO, H_2O and OH in transmission. However, our model with a colder morning limb, which produces the same “kink” feature as seen in the data (), predicts a more diverse set of absorption signals for the chemical species studied. We conclude that observing the phase-dependent absorption signal of multiple species that probe distinct parts of the atmosphere allows one to differentiate between two models that fit the signal of a single species equally well. ∙ Even though CO is uniformly distributed across the atmosphere of an ultra-hot Jupiter, it predominantly probes the dayside. This is because of a “shielding effect”.
Since the dayside is more extended than the nightside, CO absorption happens at high altitudes on the dayside where the nightside contribution to the optical depth is zero. ∙ H_2O absorption lines can be strongly muted by optically-thick clouds on the nightside of ultra-hot Jupiters. On the other hand, nightside clouds will not have a big impact on the absorption signals of Fe and CO, as these species probe higher altitudes on the dayside. § ACKNOWLEDGEMENTS We are grateful to Ray Pierrehumbert for sharing computing resources. We also thank David Ehrenreich and Ray Pierrehumbert for insightful discussions. JPW sincerely acknowledges support from the Wolfson Harrison UK Research Council Physics Scholarship and the Science and Technology Facilities Council (STFC). Finally, we thank the anonymous referee for thoughtful comments that helped improve the quality of the manuscript. § DATA AVAILABILITY The data and models underlying this article will be shared on reasonable request to the corresponding author. § IMPACT OF NEW MODELLING APPROACHES ON THE CCF MAP Fig. <ref> shows a comparison between the CCF map of iron obtained for the cold-morning-limb model in <cit.> (top left) and the cold-morning-limb model from this work (bottom right). The underlying atmosphere is the same, but a few changes were made to the radiative transfer: (i) accounting for scale-height differences due to hydrogen dissociation, (ii) including opacities for more species, and using iron line lists with pressure broadening and no line-wing cut-off, (iii) increasing the wavelength range, and (iv) decreasing the resolution (see Section <ref>). In summary, we find that changes (iii) and (iv) have the biggest impact on the CCF map. However, the overall behaviour of the iron signal proves robust – an increasing blueshift and signal strength over the course of the transit, with the blueshift remaining constant at about -8 km/s after mid-transit.
http://arxiv.org/abs/2307.04742v1
20230710175309
Parallel Tempered Metadynamics: Overcoming potential barriers without surfing or tunneling
[ "Timo Eichhorn", "Gianluca Fuwa", "Christian Hoelbling", "Lukas Varnhorst" ]
hep-lat
[ "hep-lat" ]
WUB/23-00 [email protected] [email protected] [email protected] [email protected] Department of Physics University of Wuppertal Gaußstraße 20, 42119 Wuppertal, Germany At fine lattice spacings, Markov chain Monte Carlo simulations of QCD and other gauge theories are plagued by slow (topological) modes that give rise to large autocorrelation times. These, in turn, lead to statistical and systematic errors that are difficult to estimate. Here, we demonstrate that, for a relevant set of parameters, Metadynamics can be used to reduce the autocorrelation times of topological quantities in 4-dimensional SU(3) gauge theory by at least two orders of magnitude compared to conventional update algorithms. However, compared to local update algorithms and the Hybrid Monte Carlo algorithm, the computational overhead is significant, and the required reweighting procedure may considerably reduce the effective sample size. To deal with the latter problem, we propose modifications to the Metadynamics bias potential and the combination of Metadynamics with parallel tempering. We test the new algorithm in 4-dimensional SU(3) gauge theory and find that it can achieve topological unfreezing without compromising the effective sample size. Preliminary scaling tests in 2-dimensional U(1) gauge theory show that these modifications lead to improvements of more than an order of magnitude compared to standard Metadynamics, and an improved scaling of autocorrelation times with the lattice spacing compared to standard update algorithms. Parallel Tempered Metadynamics: Overcoming potential barriers without surfing or tunneling Lukas Varnhorst August 12, 2023 ========================================================================================== § INTRODUCTION In recent years, physical predictions based on lattice simulations have reached sub-percent accuracies <cit.>. With ever-shrinking uncertainties, the need for precise extrapolations to the continuum grows, which in turn necessitates ever finer lattice spacings. Current state-of-the-art methods for simulations of lattice gauge theories either rely on a mixture of heat bath <cit.> and overrelaxation <cit.> algorithms for pure gauge theories, or on molecular-dynamics-based algorithms like the Hybrid Monte Carlo algorithm (HMC) <cit.> or variations thereof for simulations including dynamical fermions. For all these algorithms, the computational effort to carry out simulations dramatically increases at fine lattice spacings due to critical slowing down. While the exact behavior depends on a number of factors, such as the update algorithms, the exact discretization of the action, and the choice of boundary conditions, the scaling of the integrated autocorrelation times with the inverse lattice spacing can usually be described by a power law. In addition to the general diffusive slowing down, topologically non-trivial gauge theories may exhibit topological freezing <cit.>. This effect appears due to the inability of an algorithm to overcome the action barriers between topological sectors, which can lead to extremely long autocorrelation times of topological observables and thus an effective breakdown of ergodicity. Over the years, several strategies have been developed to deal with this situation. On the most basic level, it has become customary, in large scale simulations, to monitor the topological charge of the configurations in each ensemble, thus avoiding regions of parameter space that are affected by topological freezing <cit.>.
Another possibility to circumvent the problem consists in treating fixed topology as a finite volume effect and either correcting observables for it <cit.>, or increasing the physical volume sufficiently to derive the relevant observables from local fluctuations <cit.>. It is also possible to use open boundary conditions in one lattice direction <cit.>, which invalidates the concept of an integer topological charge at the price of introducing additional boundary artifacts and a loss of translational symmetry. Despite the success of these strategies in many relevant situations, the need for a genuine topology changing update algorithm is still great. This is evident from the large number and rather broad spectrum of approaches that are currently being investigated in this direction. Some of these approaches address critical slowing down in general, whereas others focus particularly on topological freezing. These approaches include parallel tempering <cit.>, modified boundary conditions <cit.> and combinations of both <cit.>; multiscale thermalization <cit.>, instanton(-like) updates <cit.>, Metadynamics <cit.>, Fourier acceleration <cit.> and trivializing maps <cit.>, also in combination with machine learning <cit.>. For a recent review, see e.g. <cit.>. Additionally, recent years have seen multitudinous efforts to use generative models to sample configurations <cit.>. In this work, we propose a new update algorithm, parallel tempered Metadynamics, or PT-MetaD for short, and demonstrate its efficiency in 4-dimensional SU(3) at parameter values where conventional update algorithms suffer from topological freezing. In its basic variant, which we present here, PT-MetaD consists of two update streams simulating the same physical system. One of the streams is an efficient, conventional algorithm, while the second one includes a bias potential that facilitates tunneling between topological sectors. At regular intervals, swaps between the two streams are suggested, so that the good topological sampling from the second stream carries over to the first one. The algorithm thus combines ideas from parallel tempering <cit.>, Metadynamics <cit.> and multicanonical simulations <cit.>, leading to an efficient sampling of topological sectors while avoiding the problem of small effective sample sizes, which is usually associated with reweighting techniques such as Metadynamics or multicanonical simulations. Additionally, the inclusion of fermions into PT-MetaD is conceptually straightforward. This paper is organized as follows. We start out by giving a general introduction to Metadynamics in <Ref>. Afterwards, <Ref> describes our simulation setup, including our choice of actions, observables, and update algorithms. Some details on the application of Metadynamics in the context of SU(3) gauge theory are also given. In <Ref>, we present baseline results obtained with conventional update algorithms, including a rough determination of gradient flow scales for the DBW2 action. In <Ref> we present results obtained with pure Metadynamics for 4-dimensional SU(3), and discuss several possible improvements. In <Ref> we introduce parallel tempered Metadynamics and show some scaling tests of the new algorithm in 2-dimensional U(1) gauge theory, as well as exploratory results in 4-dimensional SU(3). Finally, in <Ref>, we conclude with a summary and outlook on the application of the new algorithm to full QCD.
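To make the swap step described above more tangible, the following Python fragment sketches the accept/reject decision for a proposed exchange of configurations between the unbiased (measurement) stream and the biased (Metadynamics) stream. This is an illustrative sketch rather than the implementation used in this work; all names are placeholders, and the acceptance probability is simply the one that follows from detailed balance of the joint distribution, in which the gauge action cancels and only the bias-potential difference of the two collective-variable values remains.

import numpy as np

rng = np.random.default_rng(0)

def swap_accepted(v_bias, cv_unbiased_cfg, cv_biased_cfg):
    """Metropolis accept/reject for exchanging the configurations of the two streams.

    v_bias          : callable returning the bias potential V at a given CV value
    cv_unbiased_cfg : CV of the configuration currently held by the unbiased stream
    cv_biased_cfg   : CV of the configuration currently held by the biased stream
    """
    dv = v_bias(cv_biased_cfg) - v_bias(cv_unbiased_cfg)   # the gauge action drops out of the ratio
    return dv >= 0.0 or rng.random() < np.exp(dv)

# toy usage with an invented quadratic bias potential
toy_potential = lambda q: 0.5 * q**2
print(swap_accepted(toy_potential, cv_unbiased_cfg=0.1, cv_biased_cfg=1.3))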
§ METADYNAMICS Consider a system described by a set of degrees of freedom {U}, where the states are distributed according to the probability density p(U) = 1/Z e^-S(U), with the partition function Z defined as Z = ∫𝒟[U] e^-S(U). The expectation value of an observable O is defined as ⟨ O ⟩ = ∫𝒟[U] p(U) O(U). In the context of lattice gauge theories, the integration measure 𝒟[U] is usually the product of Haar measures for each link variable, but more generally 𝒟[U] may be understood as a measure on the configuration space of the system. Metadynamics <cit.> is an enhanced-sampling method, based on the introduction of a history-dependent bias potential V_t(s(U)). This potential is introduced by replacing the action S(U) with S^M_t(U) = S(U) + V_t(s(U)), where t is the current simulation time. This potential modifies the dynamics of the system and depends on a number of observables s_i(U), with i ∈{1, …, N }, that are referred to as collective variables (CVs). These CVs span a low-dimensional projection of the configuration space of the system, and may generally be arbitrary functions of the underlying degrees of freedom {U}. However, when used in combination with molecular-dynamics-based algorithms such as the Hybrid Monte Carlo algorithm, the CVs need to be differentiable functions of the underlying degrees of freedom. During the course of a simulation, the bias potential is modified in such a way as to drive the system away from regions of configuration space that have been explored previously, eventually converging towards an estimate of the negative free energy as a function of the CVs, up to a constant offset <cit.>. Usually, this is accomplished by constructing the potential from a sum of Gaussians g(s), so that at simulation time t, the potential is given by V_t(s) = ∑_t' ≤ t∏_i=1^N g(s_i - s_i(t')). The exact form of the Gaussians is determined by the parameters w and δ s_i: g(s_i) = w exp(-s_i^2/(2 δ s_i^2)). Both parameters affect the convergence behavior of the potential in a similar way: Increasing the height w or the widths δ s_i may accelerate the convergence of the potential during early stages of the simulation, but lead to larger fluctuations around the equilibrium during later stages. Furthermore, the widths δ s_i effectively introduce a smallest scale that can still be resolved in the space spanned by the CVs, which needs to be sufficiently small to capture the relevant details of the potential. If the bias potential has reached a stationary state, i.e., its time-dependence in the region of interest is just an overall additive constant, the modified probability density, which we shall also refer to as target density, is given by p'(U) = 1/Z' e^-S(U) - V(s(U)), with the modified partition function Z' = ∫𝒟[U] e^-S(U) - V(s(U)). Expectation values with respect to the modified distribution can then be defined in the usual way, i.e., via ⟨ O ⟩' = ∫𝒟[U] p'(U) O(U). On the other hand, expectation values with respect to the original, unmodified probability density can be written in terms of the new probability distribution with an additional weighting factor. For a dynamic potential, there are different reweighting schemes to achieve this goal <cit.>, but if the potential is static, the weighting factors are directly proportional to the exponential of the bias potential: ⟨ O ⟩ = ∫𝒟[U] p'(U) O(U) e^V(s(U))/∫𝒟[U] p'(U) e^V(s(U)). The case of a static potential is thus essentially the same as a multicanonical simulation <cit.>.
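The following self-contained Python toy model may help to make the construction explicit. It replaces the gauge field by a single real degree of freedom s with an invented double-well action, deposits a Gaussian at the current value of the CV after every update, and finally reweights an observable with the (then static) potential according to the expression above. All parameter values are chosen purely for demonstration.

import numpy as np

rng = np.random.default_rng(1)
S = lambda s: 5.0 * (s**2 - 1.0)**2 + s              # toy double-well "action" with a small tilt
w, ds = 0.1, 0.1                                     # Gaussian height and width

def metropolis(n_steps, s0, bias):
    """1D Metropolis chain for the modified action S(s) + bias(s)."""
    s, history = s0, []
    for _ in range(n_steps):
        s_new = s + rng.normal(0.0, 0.2)
        if np.log(rng.random()) < (S(s) + bias(s)) - (S(s_new) + bias(s_new)):
            s = s_new
        history.append(s)
    return np.array(history)

# build-up phase: deposit a Gaussian at the current CV value after every update
centers = []
V_t = lambda s: float(np.sum(w * np.exp(-(s - np.array(centers))**2 / (2 * ds**2)))) if centers else 0.0
s = -1.0
for _ in range(4000):
    s = metropolis(1, s, V_t)[-1]
    centers.append(s)

# measurement phase with the potential held static; reweight with exp(+V)
V = lambda s: float(np.sum(w * np.exp(-(s - np.array(centers))**2 / (2 * ds**2))))
chain = metropolis(4000, s, V)
weights = np.exp(np.array([V(x) for x in chain]))
print("biased     <s> =", chain.mean())                                       # near 0: both wells are sampled
print("reweighted <s> =", float(np.sum(weights * chain) / np.sum(weights)))   # recovers the physical asymmetry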
In situations where the evolution of the system is hindered by high action barriers separating relevant regions of configuration space, Metadynamics can be helpful in overcoming those barriers, since the introduction of a bias potential modifies the marginal distribution over the set of CVs. For conventional Metadynamics, the bias potential is constructed in such a way that the marginal modified distribution is constant: p'(s_i) = ∫𝒟[U] p'(U)δ(s_i-s_i(U))=const. Conversely, for a given original distribution p(s) and a desired target distribution p'(s), the required potential is given by: V(s) = log(p(s)/p'(s)). However, it should be noted that even if the bias potential completely flattens out the marginal distribution over the CVs, the simulation is still expected to suffer from other (diffusive) sources of critical slowing down as is common for Markov chain Monte Carlo simulations. § SIMULATION SETUP AND OBSERVABLES §.§ Choice of gauge actions For our simulations of SU(3) gauge theory, we work on a 4-dimensional lattice Λ with periodic boundary conditions. Configurations are generated using the Wilson <cit.> and the DBW2 <cit.> gauge action, both of which belong to a one-parameter family of gauge actions involving standard 1 × 1 plaquettes as well as 1 × 2 planar loops, which may be expressed as S_g = β/3∑_x ∈Λ( ∑_μ < ν c_0 ( 3 - [𝒲_μ, ν(x)] ) + ∑_μ≠ν c_1 ( 3 - [𝒲_μ, 2ν(x)] ) ). Here, 𝒲_k μ, l ν(x) refers to a Wilson loop of shape k × l in the μ-ν plane originating at the site x. The coefficients c_0 and c_1 are constrained by the normalization condition c_0 + 8 c_1 = 1 and the positivity condition c_0 > 0, where the latter condition is sufficient to guarantee that the set of configurations with minimal action consists of locally pure gauge configurations <cit.>. For the Wilson action (c_1 = 0), only plaquette terms contribute, whereas the DBW2 action (c_1 = -1.4088) also involves rectangular loops. It is well known that the critical slowing down of topological modes is more pronounced for improved gauge actions in comparison to the Wilson gauge action <cit.>: A larger negative coefficient c_1 suppresses small dislocations, which are expected to be the usual mechanism mediating transitions between topological sectors on the lattice. Among the most commonly used gauge actions, this effect is most severely felt by the DBW2 action. In previous works <cit.>, local update algorithms were found to be inadequate for exploring different topological sectors in a reasonable time frame. Instead, the authors had to generate thermalized configurations in different topological sectors using the Wilson gauge action, before using these configurations as starting points for simulations with the DBW2 action. Thus, this action allows us to explore parameters where severe critical slowing down is visible, while avoiding very fine lattice spacings and thereby limiting the required computational resources. §.§ Observables The observables we consider here are based on various definitions of the topological charge, and Wilson loops of different sizes at different smearing levels.
The unrenormalized topological charge is defined using the clover-based definition of the field-strength tensor: Q_c = 1/32 π^2∑_x ∈Λϵ_αβγδ[F^clov_αβ(x) F^clov_γδ(x)] This field-strength tensor is given by F^clov_αβ(x) = -i/8(C_μν(n) - C_νμ(n) ), where the clover term C_αβ(x) is defined as C_αβ(x) = P_α, β(x) + P_β, -α(x) + P_-α, -β(x) + P_-β, α(x), P_α, β(x) denotes the plaquette: P_α, β(x) = U_α(x) U_β(x + α̂) U_α^†(x + β̂) U_α^†(x) Alternatively, the topological charge may also be defined via the plaquette-based definition, here denoted by Q_p: Q_p = 1/32 π^2∑_x ∈Λϵ_αβγδ[F^plaq_α, β(x) F^plaq_γ, δ(x)] Similar to the clover-based field-strength tensor, F^plaq_α, β(x) is defined as: F^plaq_αβ(x) = -i/2(P_μ, ν(n) - P_ν, μ(n) ) Note that both Q_c and Q_p formally suffer from 𝒪(a^2) artifacts, although the coefficient is typically smaller for the clover-based definition Q_c. The topological charge is always measured after applying 𝒪(30) steps of stout smearing <cit.> with a smearing parameter ρ = 0.12. To estimate the autocorrelation times of the system, it is also useful to consider the squared topological charge <cit.>. Additionally, we also consider the Wilson gauge action and n × n Wilson loops for n ∈{2, 4, 8} at different smearing levels. We denote these by S_w and 𝒲_n respectively. §.§ Update algorithms Throughout this work, we employ a number of different update schemes: To illustrate critical slowing down of conventional update algorithms and to set a baseline for comparison with Metadynamics-based algorithms, we use standard Hybrid Monte Carlo updates with unit length trajectories (1HMC), a single heat bath sweep (1HB), five heat bath sweeps (5HB), and a single heat bath sweep followed by four overrelaxation sweeps (1HB+4OR). The local update algorithms are applied to three distinct SU(2) subgroups during each sweep <cit.>, and the HMC updates use an Omelyan-Mryglod-Folk fourth-order minimum norm integrator <cit.> with a step size of ϵ = 0.2, which leads to acceptance rates above 99% for the parameters used here. We compare these update schemes to Metadynamics HMC updates with unit length trajectories (MetaD-HMC), and a combination of parallel tempering with Metadynamics (PT-MetaD) which is discussed in more detail in <Ref>. An important requirement for the successful application of Metadynamics is the identification of appropriate CVs. In our case, the CV should obviously be related to the topological charge. However, it should not always be (close to) integer-valued, but rather reflect the geometry of configuration space with respect to the boundaries between topological sectors. On the other hand, the CV needs to track the topological charge closely enough for the algorithm to be able to resolve and overcome the action barriers between topological sectors. A straightforward approach is to apply only a moderate amount of some kind of smoothing procedure, such as cooling or smearing, to the gauge fields before measuring the topological charge. Since these smoothing procedures involve some kind of spatial averaging, the action will become less local, which complicates the use of local update algorithms. Therefore, we use the HMC algorithm to efficiently update the entire gauge field at the same time, which requires a differentiable smoothing procedure such as stout <cit.> or HEX smearing <cit.>. Due to its simpler implementation compared to HEX smearing, we choose stout smearing here. 
Previous experience <cit.> seems to indicate that four to five stout smearing steps with a smearing parameter ρ = 0.12 strike a reasonable balance between having a smooth CV and still representing the topological charge accurately. The force contributed by the topological bias potential may be written in terms of the chain rule: F_μ, meta(x) = - ∂ V_meta/∂ Q_meta∂ Q_meta/∂ U^(n)_μ_n(x_n) ×∂ U^(n)_μ_n(x_n)/∂ U^(n-1)_μ_n-1(x_n-1)…∂ U^(1)_μ_1(x_1)/∂ U_μ(x) Here we have introduced the notation V_meta for the bias potential and Q_meta for the CV to clearly distinguish it from other definitions of the topological charge. The first term in the equation, corresponding to the derivative of the bias potential with respect to Q_meta, is trivial, but the latter two terms are more complicated: The derivative of Q_meta with respect to the maximally smeared field U^(n) is given by a sum of staples with clover term insertions, and the final term corresponds to the stout force recursion <cit.> that also appears during the force calculation when using smeared fermions. Note that in machine learning terminology, this operation is essentially a backpropagation <cit.> and may be computed efficiently using reverse mode automatic differentiation. More details on the calculation of the force can be found in <Ref>. The bias potential is constructed from a sum of one-dimensional Gaussians, as described in <Ref>, and stored as a histogram. Due to the charge conjugation symmetry, we can update the potential symmetrically. Values at each point are reconstructed by linearly interpolating between the two nearest bins, and the derivative is approximated by their finite difference. To limit the evolution of a system to relevant regions of the phase space, it is useful to introduce an additional penalty term to the potential once the absolute value of Q_meta has crossed certain thresholds Q_min and Q_max. If the system has exceeded the threshold, the potential is given by the outermost value of the histogram, plus an additional term that scales quadratically with the distance to the outer limit of the histogram. Unless mentioned otherwise, we have used the following values as default parameters for the potential: Q_max/min = ±8, n_bins = 800, w = 0.05, while δ Q^2 has always been set equal to the bin width, i.e., (Q_max - Q_min) / n_bins. Note that it is often convenient to build up a bias potential in one or several runs, and then simulate and measure with a static potential generated in the previous runs. In some sense, this can be thought of as a combination of Metadynamics and multicanonical simulation. § RESULTS WITH CONVENTIONAL UPDATE ALGORITHMS To establish a baseline to compare our results to, we have investigated the performance of some conventional update algorithms using the Wilson and DBW2 gauge actions. Furthermore, we have made a rough determination of the gradient flow scales t_0 and w_0 for the DBW2 action. Some preliminary results for the Wilson action were already presented in <cit.>. §.§ Critical slowing down with Wilson and DBW2 gauge actions In order to study the scaling of autocorrelations for different update schemes, we have performed a series of simulations with the Wilson gauge action on a range of lattice spacings. The parameters were chosen in such a way as to keep the physical volume approximately constant at around (1.1)^4, using the scale given by the rational fit function in <cit.>, which was based on data from <cit.>. A summary of the simulation parameters can be found in <Ref>. 
Since autocorrelation times near second-order phase transitions are expected to be described by a power law, we use the following fit ansatz in an attempt to parameterize the scaling: τ_int = c (a/r_0)^-z All autocorrelation times and their uncertainties are estimated following the procedure described in <cit.>. <Ref> shows the scaling of the integrated autocorrelation times of 2 × 2 Wilson loops 𝒲_2 and the square Q_c^2 of the clover-based topological charge with the lattice spacing. Additionally, the figure also includes power law fits to the data and the resulting values for the dynamical critical exponents z(𝒲_2) and z(Q_c^2). Both observables were measured after 31 stout smearing steps with a smearing parameter ρ = 0.12. While the integrated autocorrelation times of both observables increase towards finer lattice spacings and are adequately described by a power law behavior, the increase is much steeper for the squared topological charge than for the smeared 2 × 2 Wilson loops. Below a crossover point at a ≈0.08, the autocorrelation times of the squared topological charge start to dominate. They can be described either by a dynamical critical exponent z ≈ 5 or, alternatively, by an exponential increase, which was first suggested in <cit.>. This behavior is compatible with the observations in <cit.>. In contrast, the autocorrelation time of Wilson loops is compatible with a much smaller exponent z ≈ 1.2. As can be seen in <Ref>, the critical exponent does not change significantly with the size of the Wilson loop after 31 stout smearing steps. Generally, the integrated autocorrelation times of smeared Wilson loops slightly increase both with the size of the loops and the number of smearing levels. The only exception to this behavior occurs for larger loops, where a few steps of smearing are required to obtain a clean signal and not measure the autocorrelation of the noise instead. Regarding the different update algorithms, the unit length HMC does show a somewhat better scaling behavior for all observables than the local update algorithms, but it is also about a factor 7 more computationally expensive per update step (see <Ref>).[Since we are ultimately interested in dynamical fermion simulations, we do not consider the more efficient, local HMC variant presented in <cit.>, as it is applicable to pure gauge theories only.] For all local update algorithms considered here, the critical exponents are very similar, but the combination of one heat bath and four overrelaxation steps has the smallest prefactor. It is interesting to note that this algorithm is also faster by more than a factor of 2 than the five-step heat bath update scheme, which does not profit from the inclusion of overrelaxation steps. The single step heat bath without overrelaxation, although numerically cheaper, does have the worst prefactor of the local update algorithms. Note that the reported numbers differ from those in <cit.> due to a different fit ansatz (in the proceedings, the fit ansatz included an additional constant term). For the DBW2 action, the problem is more severe. <Ref> shows the time series of the topological charge for two runs using the 1HB+4OR and the 1HMC update scheme. Both simulations were done on a 16^4 lattice at β = 1.25 using the DBW2 action. Evidently, both update schemes are unable to tunnel between different topological sectors in a reasonable time. Only a single configuration during the 1HB+4OR run and two (successive) configurations during the 1HMC run fulfill the condition Q_c > 0.5.
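The kind of analysis underlying these statements can be illustrated with a short Python sketch; it is a simplified stand-in for the full procedure referenced above (no automatic windowing and no error estimates), and both the toy time series and the data points used in the fit are invented for demonstration.

import numpy as np
from scipy.optimize import curve_fit

def tau_int(series, window):
    """Integrated autocorrelation time tau_int = 1/2 + sum_t rho(t), truncated at a fixed window."""
    x = np.asarray(series, dtype=float) - np.mean(series)
    n = x.size
    rho = [np.dot(x[:n - t], x[t:]) / np.dot(x, x) for t in range(1, window)]
    return 0.5 + np.sum(rho)

rng = np.random.default_rng(2)
x = np.zeros(50_000)
for i in range(1, x.size):                      # AR(1) toy series with tau_int = 1/2 + rho/(1 - rho) = 19.5
    x[i] = 0.95 * x[i - 1] + rng.normal()
print(f"tau_int of toy series: {tau_int(x, 400):.1f}")

# power-law ansatz tau_int = c (a/r_0)^(-z), fitted to invented data points
power_law = lambda a, c, z: c * a**(-z)
a_over_r0 = np.array([0.20, 0.16, 0.12, 0.10, 0.08, 0.06])
tau_meas = np.array([3.0, 5.2, 11.0, 19.0, 37.0, 110.0])
(c, z), _ = curve_fit(power_law, a_over_r0, tau_meas, p0=(1.0, 3.0))
print(f"fitted dynamical critical exponent z = {z:.2f}")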
§.§ Scale setting for the DBW2 action To the best of our knowledge, scales for the DBW2 action in pure gauge theory have only been computed based on simulations with β≤ 1.22 <cit.>, and interpolation formulas are only available based on data with β≤ 1.04 <cit.>. Since here we perform simulations at β = 1.25, we compute approximate values for t_0 <cit.> and w_0 <cit.>, which allows us to estimate our lattice spacings for comparison to the Wilson results. Both scales are based on the density E, which is defined as: E = 1/4V∑_x ∈Λ F_μν^a(x) F_μν^a(x) = -1/2V∑_x ∈Λ[F_μν(x) F_μν(x)] Similar to the topological charge definitions, we adopt a plaquette- and clover-based definition of the field strength tensor, with the only difference being that the components are also made traceless, and not just anti-hermitian. The gradient flow scales t_0 and w_0 are both defined implicitly: ℰ(t) = t^2 ⟨E ⟩ |_t = t_0 = 0.3 W(t) = t d/dt ℰ(t) |_t = w_0^2 = 0.3 The flow equation was integrated using the third-order commutator free Runge-Kutta scheme from <cit.> with a step size of ϵ = 0.025. Measurements of the clover-based energy density were performed every 10 integration steps, and t^2 ⟨ E(t) ⟩ was fitted with a cubic spline, which was evaluated with a step size of 0.001. For every value of β, two independent simulations with 100 measurements each were performed on 48 × 32^3 lattices. Every measurement was separated by 200 update sweeps with the previously described 1HB+4OR update scheme, and the initial 2000 updates were discarded as thermalization phase. Our results are displayed in <Ref>. Using the physical values from <cit.>, these results imply a physical volume of approximately (0.95)^4 and a temperature of around 207 for the 16^4 lattice from the previous section. In order to facilitate comparison with other results, we also provide an interpolation of our lattice spacing results. For this purpose, we use a rational fit ansatz with three fit parameters log(t_0 / a^2) = (8 π^2/33) β (1 + d_1/β + d_2/β^2)/(1 + d_3/β) that is asymptotically consistent with perturbation theory <cit.> and has a sufficient number of degrees of freedom to describe our data well. For our reference, clover-based t_0 scale setting, this results in a fit with χ^2 / d.o.f.≈ 1.31 and parameters d_1 ≈ 1.0351, d_2 ≈ -1.3763, d_3 ≈ 0.4058, which is displayed in <Ref>. We want to emphasize that these results are not meant to be an attempt at a precise scale determination, but rather only serve as an approximate estimate. Especially for the finer lattices, the proper sampling of the topological sectors cannot be guaranteed, and the comparatively small volumes may introduce non-negligible finite volume effects.
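For illustration, the extraction of t_0 and w_0 from the flow data can be written down compactly in Python; the flow-time grid and the smooth stand-in for t^2 ⟨ E(t) ⟩ below are invented, whereas in the actual analysis these numbers come from the integrated gradient flow.

import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import brentq

t = np.arange(0.25, 12.0, 0.25)                 # flow times in lattice units (illustrative grid)
E_t2 = 0.03 * t + 0.002 * t**2                  # invented stand-in for measured t^2 <E(t)>

calE = CubicSpline(t, E_t2)                     # cubic-spline interpolation of t^2 <E(t)>
W = lambda x: x * calE(x, 1)                    # W(t) = t d/dt [t^2 <E(t)>] via the spline derivative

t0 = brentq(lambda x: calE(x) - 0.3, t[0], t[-1])        # solve calE(t0) = 0.3
w0_sq = brentq(lambda x: W(x) - 0.3, t[0], t[-1])        # solve W(w0^2) = 0.3
print(f"t0/a^2 = {t0:.3f}, w0/a = {np.sqrt(w0_sq):.3f}")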
§ RESULTS WITH METADYNAMICS <Ref> shows the time series of the topological charge from simulations with the HMC and the MetaD-HMC with five and ten stout smearing steps on a 22^4 lattice at β = 6.4035, using the Wilson gauge action. Both MetaD-HMC runs tunnel multiple times between different topological sectors, whereas the conventional HMC essentially displays a single tunneling event between sectors Q = 0 and Q = 1. A noteworthy difference between the two MetaD-HMC runs is the increase of fluctuations with higher amounts of smearing. If too many smearing steps are used to define the CV, the resulting Q values will generically be closer to integer, so more simulation time is spent in the sector boundary regions. This will eventually drive the system to coarser regions of configuration space. Since these regions do not contribute significantly to expectation values in the path integral, it is desirable to minimize the time that the algorithm spends there. This is directly related to the issue of small effective sample sizes, which we will discuss in more detail in <Ref>. A similar comparison for the DBW2 action can be seen in <Ref>. Here, two MetaD-HMC runs with four and five stout smearing steps on a 16^4 lattice at β = 1.25 are compared to the 1HMC and 1HB+4OR runs, which were already shown in <Ref>. Both conventional update schemes are confined to the zero sector, whereas the two MetaD-HMC runs explore topological sectors up to Q = 6. More quantitatively, the integrated autocorrelation time of Q_c^2 on the DBW2 stream is estimated to be τ_int(Q_c^2) = 2188 ± 478 for the MetaD-HMC algorithm with 4 smearing steps, whereas lower bounds for the autocorrelation times for the 1HMC and 1HB+4OR update schemes are 4 × 10^5, which implies a difference of more than two orders of magnitude. To illustrate the role of the CV Q_meta, it may be helpful to compare the time series of Q_meta and Q_c, as shown in <Ref>. The two observables are clearly correlated, but Q_meta is distributed more evenly between integers. §.§ Computational overhead and multiple timescale integration A fair comparison of the different update schemes also needs to take the computational cost of the algorithms into account. <Ref> shows the relative timings for the different update schemes used here, measured for simulations carried out on 16^4 lattices. While the MetaD-HMC was not optimized for performance, it is still clear that the additional overhead introduced by the computation of the Metadynamics force contribution is significant for pure gauge theory. The relative overhead is especially large compared to local update algorithms, which are already more efficient than the regular HMC. Note, however, that due to its more non-local character, the relative loss in efficiency when switching to Metadynamics from either a local update algorithm or HMC is already noticeably smaller for the DBW2 gauge action. Since the majority of the computational overhead comes from the Metadynamics force contribution, and the involved scales are different from those relevant for the gauge force, it seems natural to split the integration into multiple timescales in a similar fashion to the Sexton-Weingarten scheme <cit.>: The force contributions from the bias potential are correlated to the topological charge, which is an IR observable, whereas the gauge force is usually dominated by short-range, UV fluctuations. Therefore, it is conceivable that integrating the Metadynamics force contribution on a coarser timescale than the gauge force could significantly decrease the required computational effort, while still being sufficiently accurate to lead to reasonable acceptance rates. We have attempted to use combinations of both the Leapfrog and the Omelyan-Mryglod-Folk second-order integrator with the Omelyan-Mryglod-Folk fourth-order minimum norm integrator. Unfortunately, we were unable to achieve a meaningful reduction of Metadynamics force evaluations without encountering integrator instabilities and deteriorating acceptance rates. However, this approach might still be helpful for simulations with dynamical fermions, where it is already common to split the forces into more than two levels.
Even if such a multiple timescale approach should prove to be unsuccessful in reducing the number of Metadynamics force evaluations, we expect the relative overhead of Metadynamics to be much smaller for simulations including dynamical fermions. In previous studies <cit.>, it was found that compared to conventional HMC simulations, simulations with Metadynamics and 20 steps of stout smearing were about three times slower in terms of real time. §.§ Scaling of the reweighting factor and improvements to the bias potential Due to the inclusion of the bias potential, expectation values with respect to the original, physical probability density are obtained by reweighting. As with any reweighting procedure, the overlap between the sampled distribution and the distribution of physical interest needs to be sufficiently large for the method to work properly. A common measure to quantify the efficiency of the reweighting procedure is the effective sample size (ESS), defined as ESS = (∑_i w_i )^2 /∑_i w_i^2, where w_i is the respective weight associated with each individual configuration. In the case of Metadynamics, this is simply e^V(Q_meta,i). We found the normalized ESS, i.e., the ESS divided by the total number of configurations, to generally be of order 𝒪(10^-2) or lower when simulating in regions of parameter space where conventional algorithms fail to explore topological sectors other than Q = 0. Although the low ESS ultimately results from the fact that the bias potential is constructed in such a way as to have a flat marginal distribution over the CV, we can nonetheless distinguish two parts of this effect. On the one hand, there is the inevitable flattening of the intersector barriers by the bias potential, which is necessary to facilitate tunneling between adjacent topological sectors. On the other hand, however, the different weights of the topological sectors are also cancelled by the bias potential. While it is necessary for a topology-changing update algorithm to reproduce the intersector barriers faithfully, the leveling of the weights of the different topological sectors is entirely unwanted. It enhances the time that the simulation spends at large values of Q, so that these sectors are overrepresented compared to their true statistical weight. It is therefore conceivable that, by retaining only the intersector-barrier part of the bias potential, the relative weights of the different topological sectors will be closer to their physical values, and the ESS will increase. In previous tests in 2-dimensional U(1) gauge theory, we found that the bias potentials could be described by a sum of a quadratic and multiple oscillating terms <cit.>: V(Q) = A Q^2 + ∑_i = 1^N B_i sin^2(π f_i Q) Here, we fit the bias potentials obtained from the 2-dimensional U(1) simulations to this form. We then obtain a modified bias potential by subtracting the resulting quadratic term from the data. This modification of the bias potential is effective in reducing the oversampling of topological sectors with large |Q|, as evidenced by the larger normalized ESS in <Ref>. The resulting marginal distribution over the topological charge is then no longer expected to be constant, but rather to resemble a parabola. Here and in <Ref> of this work, we perform scaling tests of the proposed improvements in 2-dimensional U(1) gauge theory, where high statistics can be generated more easily than in 4-dimensional SU(3) gauge theory.
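For concreteness, the ESS and the modified potential with its quadratic part removed can be obtained along the following lines. The sketch assumes a binned bias potential as input and uses two oscillating terms in the fit model; the number of terms N and the initial parameter guesses are illustrative placeholders that have to be adapted to the data.

```python
import numpy as np
from scipy.optimize import curve_fit

def effective_sample_size(V_at_Q_meta):
    """ESS = (sum_i w_i)^2 / sum_i w_i^2 with Metadynamics weights w_i = exp(V_i)."""
    w = np.exp(V_at_Q_meta - np.max(V_at_Q_meta))   # constant shift leaves the ESS invariant
    return w.sum() ** 2 / (w ** 2).sum()

def bias_model(Q, A, B1, f1, B2, f2):
    """V(Q) = A Q^2 + sum_i B_i sin^2(pi f_i Q), truncated here at N = 2 terms."""
    return A * Q**2 + B1 * np.sin(np.pi * f1 * Q) ** 2 + B2 * np.sin(np.pi * f2 * Q) ** 2

def remove_quadratic_part(Q_bins, V_bins, p0=(1.0, 1.0, 1.0, 0.5, 2.0)):
    """Fit the accumulated potential and subtract the fitted A Q^2 term,
    so that only the intersector barriers are retained."""
    popt, _ = curve_fit(bias_model, Q_bins, V_bins, p0=p0)
    return V_bins - popt[0] * Q_bins**2, popt
```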
The action is given by the standard Wilson plaquette action S_g = β∑_n ∈Λ(1 - Re[P_t, x(n)] ), and updates are performed with a single-hit Metropolis algorithm. The topological charge is defined using a geometric, integer-valued definition: Q = 1/(2π) Im[∑_n ∈Λ log P_t, x(n) ] For all Metadynamics updates, we use a field-theoretic definition of the topological charge that is generally not integer-valued: Q_meta = 1/(2π) Im[∑_n ∈Λ P_t, x(n) ] Since the charge distributions obtained from the two definitions already show reasonable agreement without any smearing for the parameters considered here, we can use local update algorithms and directly include the Metadynamics contribution in the staple. A similar idea that encourages tunneling in the Schwinger model by adding a small modification to the action was proposed in <cit.>. <Ref> contains the relative ESS and integrated autocorrelation times for different lattice spacings on the same line of constant physics in 2-dimensional U(1) theory. We compare Metadynamics runs using bias potentials obtained directly from previous simulations with Metadynamics runs using potentials that were modified to retain the relative weights of the topological sectors as described above. We see large improvements for both the ESS and τ_int in the modified case, even for the finest lattices considered. We expect that the quadratic term is mostly relevant for small volumes and high temperatures. With larger volumes and lower temperatures, the slope should decrease, and with it the importance of correctly capturing this term. On the other hand, the oscillating term is expected to grow more important with finer lattice spacings, as the barriers between the different sectors grow steeper. Thus, the oscillating term needs to be described more and more accurately towards the continuum. A standard technique to decrease, but not completely eliminate, action barriers is well-tempered Metadynamics <cit.>. In this approach, the height of the added Gaussians w decays with increasing potential. In our tests, we found that this method does increase the ESS, but at the cost of higher autocorrelation times, to the point where any gains from the ESS that would be visible in the uncertainties of observables are nullified. Although it might still have some use in accelerating the build-up process or as a possible intermediate stream for PT-MetaD (see <Ref>), we decided not to explore this option further at this point. §.§ Accelerating the equilibration/buildup of the bias potential Another avenue of improvement is accelerating the build-up of the bias potential, for which we again explore two possible ideas. This aspect becomes especially relevant when considering large-scale simulations, where runs are often limited to 𝒪(10^4) update sweeps, and a lengthy buildup phase of the bias potential would render the method infeasible. The first idea is to exploit the aforementioned well-tempered variant of Metadynamics by choosing a larger starting value of the Gaussian height w and letting it decay slowly, so as to minimize the change in the potential that arises from the decay. While this approach adds another fine-tunable parameter, namely the decay rate, we found that this did indeed significantly cut down on the number of update iterations required to thermalize the potential. A small caveat is that, in order to choose the optimal decay rate, one needs prior knowledge of the approximate height of the action barriers, which is not always available.
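A minimal sketch of how such a bias potential can be accumulated on a discretized CV axis, including an optional decay of the Gaussian height, is given below. The exponential damping factor exp(-V/dV_T) is the standard well-tempered prescription and merely stands in for whichever decay schedule is actually tuned; the bin range, bin number, and Gaussian parameters are illustrative placeholders, and details such as boundary handling and parallel filling by several walkers are omitted.

```python
import numpy as np

class BiasPotential:
    """Histogram-based Metadynamics bias potential V(Q_meta)."""

    def __init__(self, q_max=7.0, n_bins=1400, w=0.002, delta_q=0.1, dV_T=None):
        self.q = np.linspace(-q_max, q_max, n_bins)
        self.V = np.zeros(n_bins)
        self.w, self.delta_q, self.dV_T = w, delta_q, dV_T

    def value(self, q_meta):
        """Linear interpolation of the current potential at the CV value."""
        return np.interp(q_meta, self.q, self.V)

    def deposit(self, q_meta):
        """Add a Gaussian centered at the current CV value after an update."""
        height = self.w
        if self.dV_T is not None:                        # well-tempered damping
            height *= np.exp(-self.value(q_meta) / self.dV_T)
        self.V += height * np.exp(-0.5 * (self.q - q_meta) ** 2 / self.delta_q ** 2)
```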
A way of improving the build-up time without any prior knowledge of the bias potential is to use an enhancement of Metadynamics which is most commonly referred to as multiple walkers Metadynamics <cit.>, where the potential is simultaneously built up by several independent streams in a trivially parallelizable way. To add to this, we make each stream start in a distinct topological sector by the use of instanton configurations, which can easily be constructed in 2-dimensional U(1) gauge theory <cit.>. Namely, an instanton configuration with charge Q is given by U_t^I(Q; t, x) = exp( -2 π i x Q_j/N_x N_t), U_x^I(Q; t, x) = exp( 2 π i t Q_j/N_tδ_x, N_x). The parallel and serial build are compared in <Ref> where the potential parameters for each stream are given by: Q_max/min = ±7, n_bins = 1400 and w = 0.002. Since this method is an embarrassingly parallel task, we expect it to easily carry over to higher-dimensional, non-abelian theories with topological properties. In the case of 4-dimensional SU(3) the direct construction of instantons with higher charge is not quite as simple as in 2-dimensional U(1) gauge theory. The construction of lattice instantons with even charge is described in <cit.>, and lattice instantons with odd charge can be constructed by combining multiple instantons with charge Q = 1 <cit.>. Regardless, having exact instantons is not required, since we only need each stream to start in a sector, where it is then very likely to fall into the local minimum of the specified sector. Independent of the possible improvements mentioned here, a fine-tuning of the standard Metadynamics parameters could also prove to be worthwhile in regard to accelerating the buildup and improving the quality of the bias potential. § COMBINING METADYNAMICS WITH PARALLEL TEMPERING In order to eliminate the problem of small effective sample sizes observed in our Metadynamics simulations due to the required reweighting, we propose to combine Metadynamics with parallel tempering <cit.>. This is done in a spirit similar to the parallel tempering on a line defect proposed by Hasenbusch <cit.>. We introduce two simulation streams: One with a bias potential, and the other without it, while actions S(U) are the same for both streams. Note that since we are working in pure gauge theory, this means the second stream without bias potential can be updated with local update algorithms. After a fixed number of updates have been performed on the two streams, a swap of the two configurations is proposed and subject to a standard Metropolis accept-reject step, with the action difference given by Δ S^M_t = [S^M_t(U_1) + S(U_2)] - [S^M_t(U_2) + S(U_1)] = V_t(Q_meta,1) - V_t(Q_meta,2), where the indices of the quantities denote the number of the stream and V_t is the bias potential in the first stream. It is apparent and important to note that the action difference is simple to compute regardless of what the physical action looks like. Even in simulations where dynamical fermions are present, the contributions from the physical action are always cancelled out by virtue of the two streams having the same action parameters; only the contribution from the Metadynamics bias potential remains. Since the second stream samples configurations according to the (physical) target distribution, no reweighting is needed and thus the effective sample size is not reduced. 
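The swap step itself is straightforward to write down; the sketch below makes explicit that only the bias potential enters the accept-reject decision, exactly as in the action difference above. The callables Q_meta_of and bias_V stand for the CV measurement and the bias potential of the Metadynamics stream and are assumed to be provided elsewhere.

```python
import numpy as np

def propose_swap(U_bias, U_meas, Q_meta_of, bias_V, rng):
    """Metropolis swap between the Metadynamics stream (holding U_bias) and the
    measurement stream (holding U_meas). The gauge (and, eventually, fermion)
    action cancels, so only the bias potential contributes.
    """
    # action change of the proposed, swapped state relative to the current one
    dS = bias_V(Q_meta_of(U_meas)) - bias_V(Q_meta_of(U_bias))
    if rng.random() < np.exp(-dS):
        return U_meas, U_bias, True        # the two streams exchange configurations
    return U_bias, U_meas, False

# usage sketch:
# U_bias, U_meas, accepted = propose_swap(U_bias, U_meas, measure_cv, bias.value,
#                                         np.random.default_rng())
```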
Additionally, if the swaps are effective, this stream will inherit the topological sampling from the stream with bias potential and thus also sample topological sectors well. Effectively, the accept-reject step for swap proposals serves as a filter for configurations with vanishing statistical weight, thereby decreasing the statistical uncertainties on all observables weakly correlated to the topological charge. What remains to be seen is whether the efficiency of the sampling of the topological sectors carries over from the Metadynamics stream to the measurement stream. In this section, we address this question both via a scaling test in 2-dimensional U(1) and with exploratory runs in 4-dimensional SU(3) with the DBW2 gauge action in a region where conventional update algorithms are effectively frozen. §.§ Scaling tests in 2-dimensional U(1) We carried out a number of simulations in 2-dimensional U(1) gauge theory for several lattice sizes and couplings with the same parameters as used in the test described in <Ref>. We use the potentials already built for these Metadynamics runs as static bias potentials in a number of parallel tempered Metadynamics runs. For each set of parameters, we carry out one run with the respective unmodified potential and one run with a potential modified as described in <Ref>. In these runs, swaps between the two streams were proposed after each had completed a single update sweep over all lattice sites. The effective sample sizes as well as the resulting autocorrelation times of the topological charge Q can be found in <Ref>. To ensure that actual tunneling occurs, we also monitor the sum of the squared topological charges on both streams. This observable allows us to distinguish the fluctuations in Q originating from true tunneling events, mostly appearing in the stream with bias potential, from repeated swaps between the two streams without tunneling, which might also introduce a fluctuation of Q in the streams without actually overcoming any potential barriers. <Ref> shows the scaling of the total number of independent configurations, which is given by the quotient of the effective sample size <Ref> and the integrated autocorrelation time of the topological susceptibility. The performance of the standard Metropolis algorithm is compared to parallel tempered and standard Metadynamics, with both modified (see <Ref>) and non-modified bias potentials. Clearly, the parallel tempered Metadynamics update schemes perform best for small lattice spacings. Most importantly, the ratio of independent configurations in the sample seems to reach a plateau for finer lattice spacings, which is in stark contrast to conventional Metadynamics. It is also worth noting that the modified bias potential provides better results than the non-modified one. This is consistent with our expectation that large excursions in the topological charge, which produce irrelevant configurations, are curbed by the modified bias potential. For a more detailed look at the effectiveness of the new algorithm, <Ref> compares the results of parallel tempered Metadynamics with those of standard Metadynamics at our finest lattice, with and without modification of the bias potential, and with the exact solutions <cit.>. First, we note that there is no significant difference in the performance between standard and parallel tempered Metadynamics in the topology-related observables Q and Q^2, at least in the case of a modified bias potential.
This is a very encouraging result, since the topological sampling of parallel tempered Metadynamics can not possibly exceed that of standard Metadynamics, as ultimately it is inherited from there. On the other hand, the inclusion of the irrelevant higher sectors with the unmodified bias potential does increase the error bars and there is some indication, that not all of the topological sector sampling is carried over into the measurement run of parallel tempered Metadynamics. Looking at an observable which is not related to topology, such as the plaquette, reveals that parallel tempered Metadynamics is superior to pure Metadynamics. This is clearly the effect of the better effective sample size and the larger number of independent configurations. In summary, our scaling tests in 2-dimensional U(1) suggest, that parallel tempered Metadynamics with a modified bias potential has a much improved topological sampling, which seems to be almost equivalent to standard Metadynamics, while at the same time not suffering from a reduced effective sample size. There is some indication, that the ratio of independent to total configurations does reach a stable plateau in the continuum limit. These results encourage us to perform an exploratory study in pure SU(3) gauge theory in 4 dimensions. §.§ First results in 4-dimensional SU(3) For our exploratory study in 4-dimensional SU(3), we turn to the DBW2 gauge action at β=1.25 on a V=16^4 lattice, which we have already used in <Ref>. For our first run, which is depicted in the left panels of <Ref>, we have combined a local 1HB+4OR measurement run with a 4stout MetaD-HMC run that dynamically generates the bias potential. Between swap proposals, updates for the two streams are performed at a ratio of 10 (1HB+4OR) to 1 (MetaD-HMC), which roughly reflects the relative wall clock times between the algorithms. One can see that the measurement run starts exploring other topological sectors almost as soon as the parallel run with active bias potential has gained access to them. In the later stages of the run, when the bias potential is sufficiently built up to allow the Metadynamics run to enter higher topological sectors, one can see that the swap rate is lowered by the action difference between the topological sectors, leading to an overall swap rate of ∼ 0.063. This effect mirrors the reduction of the effective sample size in pure Metadynamics updates and may be ameliorated by removing the quadratic term in the bias potential, as discussed in <Ref>. In fact, the relevant point is that the action difference between the maxima of the bias potential for different topological sectors reflects the relative weight of these sectors in the path integral and should not be flattened out. Ideally, we want the bias potential to only reproduce the barriers between the sectors, not their relative weights. For a second exploratory parallel Metadynamics run, we therefore opted for a static bias potential of this sort. Lacking data that are precise enough to model the bias potential in detail, as we did in 2-dimensional U(1), we started from the bias potential of a previous Metadynamics run and extracted the high frequency (in the CV) part of the topological barriers, while eliminating the long range part corresponding to the relative weight of the topological sectors. For this purpose, we chose to perform a singular spectrum analysis (SSA) <cit.> and crosschecked the result with a simple, piece-wise subtraction of the Q^2 term between consecutive local maxima. 
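A basic variant of this extraction can be written in a few lines: the binned potential is embedded into a trajectory (Hankel) matrix, decomposed by SVD, and the leading components are taken as the smooth, long-range part encoding the relative sector weights, while the remainder retains the intersector barriers. The window length and the number of trend components below are illustrative choices and do not correspond to the settings used for the run shown in <Ref>.

```python
import numpy as np

def ssa_split(V, L=50, n_trend=2):
    """Split a binned bias potential V into a smooth trend and an oscillatory rest
    using basic singular spectrum analysis (embedding -> SVD -> diagonal averaging).
    """
    V = np.asarray(V, dtype=float)
    N, K = len(V), len(V) - L + 1
    X = np.column_stack([V[i:i + L] for i in range(K)])       # trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X_trend = (U[:, :n_trend] * s[:n_trend]) @ Vt[:n_trend]   # leading components
    trend, counts = np.zeros(N), np.zeros(N)
    for k in range(K):                                        # diagonal averaging
        trend[k:k + L] += X_trend[:, k]
        counts[k:k + L] += 1.0
    trend /= counts
    return trend, V - trend        # smooth part, barrier (high-frequency) part
```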
As displayed in <Ref>, both methods result in a similar modified bias potential that seems to reproduce the intersector barriers rather well. The right panels of <Ref> display the results of the corresponding parallel tempered Metadynamics run. As one can see, large topological charge excursions of the Metadynamics run are now curbed, and the swap acceptance rate has increased to ∼0.25. In addition, the acceptance rate is approximately constant over the entire run, as it should be expected for a static bias potential. We would like to emphasize, that the bias potential we extracted is a rather rough guess. With a larger amount of data, it might be possible to extract a better bias potential, possibly leading to even better acceptance rates. Considering the rather simple ultimate form of the bias potential used, it might also be possible to model it with sufficient accuracy for a good initial guess at other run parameters. We plan to address these points in a future publication. In any case, these first results clearly show that the parallel tempered Metadynamics algorithm is able to achieve enhanced topological sampling in 4-dimensional SU(3) without the reduction of the effective sample size that is typical for algorithms with a bias potential. § CONCLUSION AND OUTLOOK In this paper, we have demonstrated that Metadynamics can be used to significantly reduce the integrated autocorrelation times of topological quantities in lattice simulations. In simulations of 4-dimensional SU(3) gauge theory with the DBW2 action, we have observed reductions of the autocorrelation times of more than two orders of magnitude. However, the direct application of Metadynamics is not entirely unproblematic: Compared to local update algorithms, there is a large computational overhead due to the costly Metadynamics force evaluations, and the reweighting procedure required to obtain unbiased expectation values can significantly reduce the effective sample size. In order to circumvent this reduction, we have proposed two improvements: The first consists of modifying the bias potential, so that all topological sectors are represented with their correct weight; the second is adding a dedicated measurement stream parallel to the Metadynamics run, which uses a conventional update algorithm. Periodically, swaps between the two streams are suggested and subject to an accept-reject step. The accept-reject step during swap proposals then effectively serves as a filter for configurations with low statistical weight. This parallel tempered Metadynamics algorithm, including both improvements, has been successfully applied to 4-dimensional SU(3) gauge theory. Furthermore, scaling tests in 2-dimensional U(1) gauge theory indicate gains of more than an order of magnitude compared to standard Metadynamics, and an improved scaling of autocorrelation times with the lattice spacing compared to standard update algorithms. Additionally, we have demonstrated that the buildup of the Metadynamics bias potential may be accelerated by running multiple Metadynamics simulations in parallel. We believe these results are promising, and plan to study the scaling behavior of the methods tested here in more detail for 4-dimensional SU(3) gauge theory, and eventually in full QCD. Conceptually, there seem to be no obstacles for implementing parallel tempered Metadynamics in full QCD. We also plan to explore possible optimizations for parallel tempered Metadynamics. 
These include optimizing the bias potential via enhanced buildup and extraction and, possibly, describing it parametrically. Furthermore, it would be interesting to investigate whether adding intermediate runs to a parallel tempered Metadynamics stream could increase performance, despite the additional computational cost. We thank Philip Rouenhoff for collaboration in early stages of this work. We gratefully acknowledge helpful discussions with Szabolcs Borsanyi, Stephan Dürr, Fabian Frech, Jana Günther, Ruben Kara, Andrey Kotov, and Kalman Szabo. Calculations were performed on a local PC cluster at the University of Wuppertal. § METADYNAMICS FORCE In order to obtain an expression for <Ref>, the algebra-valued derivative of Q_meta with respect to the unsmeared links U_μ^(0) has to be calculated. Here, we will only focus on the derivative of the clover-based topological charge Q_c with respect to a fully smeared gauge configuration U. For details of the stout-force recursion, we refer to <cit.>. On the lattice, the following definition holds for a suitably defined lattice field strength tensor: Q_c = 1/32 π^2∑_n ∈Λ[ϵ_μνρσ F_μν(n) F_ρσ(n)] The lattice field strength tensor based on the clover term is defined as the sum of four plaquettes: F_μν(n) = -i/8a^2(C_μν(n) - C_νμ(n) ) where the clover term in turn is defined via: C_μν(n) = P_μ, ν(n) + P_ν, -μ(n) + P_-μ, -ν(n) + P_-ν, μ(n) For notational purposes, we define the auxiliary variables R_μν(n) = C_μν(n) - C_νμ(n) and drop the specification of the lattice site n unless pertinent to the formula. What we need for the force is the sum over all eight algebra directions: T^a ∑_νρσ 4∂_n, α^aϵ_ανρσ[R_αν R_ρσ] where the sum over a is implied. Using the field strength tensor's symmetry properties, the derivative can be written as a term of the following form: ∑_νρσ∂_n, α^aϵ_ανρσ[R_αν R_ρσ] = ∑_νρσϵ_ανρσ 2 [ T^a U_α(n) U_ν(n + α) U^†_α(n + ν) U^†_ν(n) R_ρσ(n) - T^a U_α(n) U^†_ν(n + α - ν) U^†_α(n - ν) R_ρσ(n - ν) U_ν(n - ν) - T^a U_α(n) U^†_ν(n + α - ν) R_ρσ(n + α - ν) U^†_α(n - ν) U_ν(n - ν) + T^a U_α(n) R_ρσ(n + α) U_ν(n + α) U^†_α(n + ν) U^†_ν(n) - T^a U_α(n) U^†_ν(n + α - ν) U^†_α(n - ν) U_ν(n - ν) R_ρσ(n) + T^a U_α(n) U_ν(n + α) U^†_α(n + ν) R_ρσ(n + ν) U^†_ν(n) - T^a U_α(n) R_ρσ(n + α) U^†_ν(n + α - ν) U^†_α(n - ν) U_ν(n - ν) + T^a U_α(n) U_ν(n + α) R_ρσ(n + α + ν) U^†_α(n + ν) U^†_ν(n) ] = ∑_νρσϵ_ανρσ 2 [ T^a A_ανρσ] = 2 [ T^a A_α] An expression of the above form can be rewritten using the projector induced by the scalar product of the algebra: T^a [T^a A_α] = -1/2A_α+ 1/6 [A_α] Which in our case translates to: T^a 2 [T^a A_α] = T^a [ T^a A_α+ (T^a A_α)^†] = T^a [ T^a A_α- T^a A_α^†] = -1/2(A_α- A_α^†) + 1/6[ A_α- A_α^†] Including the factor we lost after defining R_μν, we obtain the derivative of the trace in <Ref> ∑_μνρσT^a ∂_n, α^aϵ_μνρσ[F_μν F_ρσ] = ∑_μνρσ -1/64 T^a ∂_n, α^aϵ_μνρσ[R_μν R_ρσ] = 1/32( (A_α - A_α^†) - 1/3[ A_α - A_α^†] ) Summarized, the algebra-valued derivative of the clover-based topological charge with respect to the gauge link U_α(n) can be written as: T^a ∂_n, α^a Q_c = ∑_μνρσ1/32π^2 T^a ∂_n, α^aϵ_μνρσ[F_μν F_ρσ] = 1/1024π^2( (A_α - A_α^†) - 1/3[ A_α - A_α^†] )
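As a small illustration of the final expression, the projection can be carried out directly once the staple sum A_α has been assembled; in the bracketed term, the trace multiplies the identity matrix. The sketch below only performs this last step and assumes that A_α is provided as a 3×3 complex matrix by the surrounding force code.

```python
import numpy as np

def dQc_dlink(A_alpha):
    """Algebra-valued derivative of the clover charge with respect to one link,
       1/(1024 pi^2) * [ (A - A^dag) - 1/3 tr(A - A^dag) * identity ],
    given the assembled clover-staple sum A_alpha."""
    B = A_alpha - A_alpha.conj().T
    return (B - (np.trace(B) / 3.0) * np.eye(3)) / (1024.0 * np.pi ** 2)
```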
http://arxiv.org/abs/2307.05202v1
20230711121206
Accelerated structural evolution of galaxies in a starbursting cluster at z=2.51
[ "Can Xu", "Tao Wang", "Qiusheng Gu", "Anita Zanella", "Ke Xu", "Hanwen Sun", "Veronica Strazzullo", "Francesco Valentino", "Raphael Gobat", "Emanuele Daddi", "David Elbaz", "Mengyuan Xiao", "Shiying Lu", "Luwenjia Zhou" ]
astro-ph.GA
[ "astro-ph.GA" ]
School of Astronomy and Space Science, Nanjing University, Nanjing, Jiangsu 210093, China Key Laboratory of Modern Astronomy and Astrophysics, Nanjing University, Ministry of Education, Nanjing 210093, China 0000-0002-2504-2421]Tao Wang School of Astronomy and Space Science, Nanjing University, Nanjing, Jiangsu 210093, China Key Laboratory of Modern Astronomy and Astrophysics, Nanjing University, Ministry of Education, Nanjing 210093, China School of Astronomy and Space Science, Nanjing University, Nanjing, Jiangsu 210093, China Key Laboratory of Modern Astronomy and Astrophysics, Nanjing University, Ministry of Education, Nanjing 210093, China Istituto Nazionale di Astrofisica (INAF), Vicolo dell’Osservatorio 5, I-35122 Padova, Italy School of Astronomy and Space Science, Nanjing University, Nanjing, Jiangsu 210093, China Key Laboratory of Modern Astronomy and Astrophysics, Nanjing University, Ministry of Education, Nanjing 210093, China School of Astronomy and Space Science, Nanjing University, Nanjing, Jiangsu 210093, China Key Laboratory of Modern Astronomy and Astrophysics, Nanjing University, Ministry of Education, Nanjing 210093, China INAF – Osservatorio Astronomico di Trieste, Via Tiepolo 11, 34131 Trieste, Italy IFPU – Institute for Fundamental Physics of the Universe, Via Beirut 2, 34014 Trieste, Italy Cosmic Dawn Center (DAWN), Copenhagen, Denmark Niels Bohr Institute, University of Copenhagen, Jagtvej 128, 2200 Copenhagen N, Denmark Instituto de Física, Pontificia Universidad Católica de Valparaíso, Casilla 4059, Valparaíso, Chile Université Paris-Saclay, Université Paris Cité, CEA, CNRS, AIM, 91191, Gif-sur-Yvette, France Université Paris-Saclay, Université Paris Cité, CEA, CNRS, AIM, 91191, Gif-sur-Yvette, France Department of Astronomy, University of Geneva, Chemin Pegasi 51, 1290 Versoix, Switzerland School of Astronomy and Space Science, Nanjing University, Nanjing, Jiangsu 210093, China Key Laboratory of Modern Astronomy and Astrophysics, Nanjing University, Ministry of Education, Nanjing 210093, China School of Astronomy and Space Science, Nanjing University, Nanjing, Jiangsu 210093, China Key Laboratory of Modern Astronomy and Astrophysics, Nanjing University, Ministry of Education, Nanjing 210093, China Tao Wang [email protected] Structural properties of cluster galaxies during their peak formation epoch, z ∼ 2-4 provide key information on whether and how environment affects galaxy formation and evolution. Based on deep HST/WFC3 imaging towards the z=2.51 cluster, J1001, we explore environmental effects on the structure, color gradients, and stellar populations of a statistical sample of cluster SFGs. We find that the cluster SFGs are on average smaller than their field counterparts. This difference is most pronounced at the high-mass end (M_⋆ > 10^10.5 M_⊙) with nearly all of them lying below the mass-size relation of field galaxies. The high-mass cluster SFGs are also generally old with a steep negative color gradient, indicating an early formation time likely associated with strong dissipative collapse. For low-mass cluster SFGs, we unveil a population of compact galaxies with steep positive color gradients that are not seen in the field. This suggests that the low-mass compact cluster SFGs may have already experienced strong environmental effects, e.g., tidal/ram pressure stripping, in this young cluster. 
These results provide evidence on the environmental effects at work in the earliest formed clusters with different roles in the formation of low and high-mass galaxies. § INTRODUCTION The cosmic epoch of redshift z ∼ 2-4 marks an important phase of mass assembly and galaxy transformation for clusters of galaxies, at least for the most massive ones. This is first speculated from galaxy archaeology studies in the local Universe <cit.>, and has now been confirmed by the discovery of a significant population of starbursting (proto)clusters at these redshifts <cit.>. Unlike their local counterparts, these structures generally exhibit a much larger fraction of star-forming galaxies (SFGs) and starbursts <cit.>. Studying the physical properties of the member galaxies in these structures is essential to constrain the role of dense environment in the star formation and quenching of cluster galaxies. During the last decade, extensive efforts have been made in probing the physical properties of SFGs in high-z clusters, including star-forming main sequence, mass-metallicity relation, gas content, and star formation efficiency <cit.>. However, many of these studies yield different and sometimes controversial conclusions, which are likely driven by the biases in selecting member galaxies as well as the various types or evolution stages of (proto)clusters they probed. Therefore, in order to achieve a full understanding on the environmental effects on SFGs, a census of complete samples of SFGs in various types of (proto)clusters is essential, which is unfortunately quite difficult for most previous studies due to observational limitations. In addition to the aforementioned physical properties, the structure/morphologies of cluster galaxies provide another important avenue to study their formation process. In particular, the structure/morphologies of the SFGs in clusters carry key information on the involved environmental mechanisms. Most environmental mechanisms, e.g., tidal and ram pressure stripping, and galaxy interactions, all leave imprints in the structure/morphologies of SFGs. So far, most studies have primarily focused on the size evolution of quiescent galaxies in clusters at intermediate redshifts <cit.>, structural properties of representative samples of SFGs in high-z clusters or protoclusters is still poorly constrained. A few earlier studies show that cluster SFGs are generally smaller than their field counterparts at low redshift (z≲ 1,  ), a sign that environmental effects such as ram pressure and tidal stripping may be at work <cit.>. At z ≳ 1-2, however, the situation is less clear. A few recent works show that there is little environmental dependence on the sizes of SFGs at these redshifts  <cit.>, while some other works focusing on (proto)clusters at higher redshifts with more active star formation reveal that a significant fraction of massive cluster SFGs appear to be more compact than field galaxies <cit.>. This indicates that these starbursting (proto)clusters, in which their massive SFG members are going through a major phase of mass assembly, may represent the best laborotary to witness environmental effects at work for cluster SFGs. In this paper, we focus on structual properties of star-forming member galaxies in the z=2.51 cluster J1001 <cit.>, one of the most extreme cases of starbursting clusters or protoclusters. 
We extend our previous work on the same structure with newly obtained HST/WFC3 F125W and F160W (rest-frame optical) imaging towards a more complete sample of SFGs, while previous studies were based on shallow HST/WFC3 F110W imaging (rest-frame UV) on a biased sample (mainly CO-detected) of member galaxies. In addition to the structural properties, the multi-band HST/WFC3 imaging permits a census of the color profiles and stellar population properties of member galaxies, enabling probing the underlying physics on their structural/morphological differences from field galaxies. The paper is organized as follows. In Section <ref>, we give a brief description of the data and sample selection. We detail the method used for the estimation of galaxy size and other physical parameters in Section <ref>. In Section <ref>, we present the mass-size relation, color profiles and average spectral energy distributions (SEDs) of the cluster SFGs and their comparison to field galaxies. We then discuss the implications and physical origins of the observed differences between cluster and field galaxies, and summarize our main findings in Section <ref>. Throughout the paper we adopt a cosmology with Ω_ m= 0.3, Ω_Λ= 0.7 and H_0 = 70 km s^-1 Mpc^-1. Magnitudes are provided in the AB system  <cit.>. We use <cit.> stellar population synthesis models and a Chabrier initial mass function (IMF, ). § DATA: MEMBER GALAXIES SELECTION AND HST IMAGING Our primary selection of the star-forming (SF) members of J1001 is based on the deep narrow-band imaging with Subaru/MOIRCS <cit.>, aiming to identify Hα emitters at z=2.49-2.52 with the “CO” filter. We have detected 49 Hα emitters with a dust-free SFR limit of ∼ 5 M_⊙yr^-1 <cit.>, which corresponds to lower mass limit (1σ) of 10^9.2M_⊙ assuming the main sequence parametrization of <cit.>. While this ensures that we are able to detect most of the SF members above this mass limit, some of the most obscured ones may still be missed. We hence complemented this sample of Hα emitters with our previously confirmed cluster members based on CO(1-0) <cit.> and CO(3-2) <cit.> observations. The HST/WFC3 F125W and F160W imaging of J1001 is from Project 14750 (PI: T. Wang), which reaches 5σ detection limit of H_ F160W = 27.3 for point sources. We further require HST/WFC3 coverage with a minimum of H_ F160W= 24.5, roughly the same cut as that in  <cit.>. By restricting Hα emitters falling in the HST/WFC3 coverage, our final sample of SF members of J1001 includes 19 Hα emitters, 4 of which are covered by our previous KMOS observations and have Hα detections <cit.>. Some massive, and dusty obscured galaxies tend to have weak Hα emission, which will be missed from the sample of Hα emitters. By crossmatching this Hα-selected sample with the CO-detected members from previous work <cit.>, we find 4 additional members detected in CO(1-0) or CO(3-2) lines (Figure <ref>), yielding a total of 23 SF members. § METHOD §.§ Derivation of structure properties and color profiles We use GALFIT  <cit.> to measure galaxy structural parameters. Point-spread function (PSF) of each band were created using TinyTim  <cit.>. We adopted single Sérsic profiles <cit.> consistent with <cit.> for the fitting. For sources with nearby bright neighbors, we fit them simultaneously. Monte Carlo simulations are used to estimate the uncertainty of parameters reported by GALFIT. The effective radius (R_e) is defined as semi-major axis half-light radius from fitting F160W images. 
Considering the wavelength dependence of R_e, we applied the same correction as <cit.> to get final R_e estimates at rest-frame 5000Å. We compared the fitting results with using PSFs from stacking of bright stars in the field, yielding consistent results (with typical difference in R_ e less than 10%). We summarize our fitting results on the cluster galaxy samples in Appendix <ref>. We measure surface brightness profiles and radial color profiles by Photutils <cit.>[<https://photutils.readthedocs.io>] for stacking and individual galaxies. In the stacking procedure, the light-weighted center of each galaxy is shifted to the physical cut-out image center. We rescale the image onto a common grid and take the median pixel value as the flux of the stacked image at each position. Then we correct the different PSFs in the F125W and F160W bands by convolving with a Gaussian kernel to broaden the F125W-band images to match the angular resolution of the F160W-band images. §.§ Derivation of stellar masses We derive the stellar masses of our cluster galaxies by fitting their multiwavelength photometry from U-band to IRAC ch4 using the SED-fitting code BAGPIPES  <cit.> with a MultiNest sampling algorithm. We use the stellar population synthesis models from <cit.> with a delayed exponential declining star formation history and the <cit.> extinction law. The nebular emission is constructed following the methodology of <cit.>. We first run SED fitting with free stellar metallicity and get the initial stellar masses. Based on the mass-metallicity relation <cit.>, we estimated the prior range for metallicity in the new fitting. We repeated the SED fitting using the metallicity priors and obtained the final estimates for stellar masses, star formation rates, dust attenuation, and mass-weighted ages. We convert IMF from Kroupa <cit.> to Chabrier by dividing stellar masses by a factor of 1.06. § RESULTS §.§ Mass-Size Relation of cluster galaxies We show the mass-size relation for the cluster galaxies measured in the F160W band in Figure <ref>. The relation for field galaxies is from <cit.>. Cluster SFGs appear to be systematically smaller compared to their field counterparts. This is more pronounced at the high-mass end (M_∗ > 10^10.5 M_⊙), where half of the galaxies lie below the field relation considering the scatter. Galaxies at the low-mass end exhibits a large scatter, but on average, cluster SFGs are smaller ( 0.1dex) than their field counterparts, as also supported by the stacking results. For a detailed cluster-field comparison, we consider a mass- and redshift-matched field sample based on the CANDELS/GOODS-South data[ Data were obtained from the Mikulski Archive for Space Telescopes (MAST) at the Space Telescope Science Institute; see <cit.>. The specific observations analyzed can be accessed via [https://doi.org/10.17909/8gdf-dc47]https://doi.org/10.17909/8gdf-dc47.] <cit.>, which reach H_F160W∼ 27 (5σ) magnitude limits for point sources for imaging. Multiband photometry and stellar masses in GOODS-South are from <cit.> and <cit.>. SFGs at 2.25≤ z ≤ 2.75 are selected with the UVJ diagram <cit.>. We divide field/cluster galaxies into compact and extended subsamples depending on whether they are below or above the mass-size relation in the field/cluster, respectively. We then select three (eight) field counterparts for each cluster galaxy in the same subsample within the stellar mass range of ∼0.2 dex in high- (low-) mass bin. In total, 109 field SFGs are selected. 
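As a rough illustration of these two steps, the sketch below broadens the F125W image with a Gaussian kernel and then measures a J - H profile in elliptical apertures of increasing semi-major axis. It is a simplified stand-in for the actual Photutils-based procedure: the kernel width, aperture geometry, and photometric zeropoints must be supplied by the user, uncertainties are ignored, and the stacking itself is not shown.

```python
import numpy as np
from astropy.convolution import Gaussian2DKernel, convolve
from photutils.aperture import EllipticalAperture, aperture_photometry

def match_psf(img_f125w, sigma_pix):
    """Degrade the F125W image to (approximately) the F160W resolution."""
    return convolve(img_f125w, Gaussian2DKernel(sigma_pix))

def jh_color_profile(img_j, img_h, center, radii_pix, axis_ratio, theta, zp_j, zp_h):
    """J - H color in successive elliptical annuli (differences of cumulative
    aperture fluxes), out to the largest radius in radii_pix."""
    colors, prev_j, prev_h = [], 0.0, 0.0
    for a in radii_pix:
        aper = EllipticalAperture(center, a, a * axis_ratio, theta=theta)
        fj = aperture_photometry(img_j, aper)["aperture_sum"][0]
        fh = aperture_photometry(img_h, aper)["aperture_sum"][0]
        ann_j, ann_h = fj - prev_j, fh - prev_h
        colors.append((zp_j - 2.5 * np.log10(ann_j)) - (zp_h - 2.5 * np.log10(ann_h)))
        prev_j, prev_h = fj, fh
    return np.array(colors)
```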
As a sanity check, we stacked images of these field SFGs in two mass bins and derived their sizes with the same approach used for clusters, which are consistent with the field mass-size relation in <cit.>. As shown in the right panel of Figure <ref>, cluster SFGs exhibit larger Sérsic indexes than field galaxies. This is consistent with what we found in the stacked images, with best fitted Sérsic index of 3.3±0.3 (2.4±0.2) and 1.9±0.4 (1.3±0.2) for high-mass and low-mass cluster (field) SFGs. We perform T-tests to determine if there is a statistically significant difference between the average Sérsic index of cluster and field samples. The p-value is 0.044 (≤0.05) and 0.226 for the high- and low-mass galaxies. These results indicate more spheroidal morphologies for high-mass cluster SFGs, suggesting accelerated structural evolution in dense environments. §.§ The radial color profiles of cluster SFGs We show the radial F125W - F160W (J - H) color profiles for the cluster/field SFGs based on the stacked images in Figure <ref>. In addition to the color profiles for the full sample, we also divide galaxies at both mass bins into compact and extended subsamples based on whether they are below or above the average mass-size relations shown in Figure <ref>. We then derive the color profiles for each subsample (Figure <ref>), for both stacked images and images of individual galaxies. The maximum radial distance along the semimajor axis is set to 2R_e. At the redshift of the cluster, z = 2.51, the J - H color straddles the Balmer break, which is an excellent tracer of the average age of the stellar populations. Considering the effect of dust extinction, for example, assuming a Calzetti extinction curve  <cit.>, Δ A_V = 0.3 corresponds to only a change of < 0.1 dex in the J - H color gradient at z = 2.5. As shown in Figure <ref>, distinct color gradients are revealed between field and cluster galaxies in both mass bins. The most striking feature is the steep positive color gradient for low-mass cluster galaxies, compared to the rather flat profile for field galaxies. For massive galaxies, on the other hand, a steeper negative color gradient is observed for cluster SFGs. Dividing galaxies into compact and extended ones, we further show that compact low-mass galaxies exhibit steep positive color gradients. The extended low-mass cluster galaxies show similar color profiles as their field counterparts, but are on average bluer. For high-mass galaxies, the color profile is similar between compact and extended galaxies, both of which show a steeper negative gradient than their field counterparts. These results suggest that in addition to the structural properties, the cluster SFGs also behave differently in their color gradients, which may reflect their different star formation histories. §.§ The average stellar population properties of cluster SFGs In order to examine general properties of the stellar populations of cluster SFGs, we derive the average SEDs, respectively for the low and high-mass subsamples, by computing the median flux densities with the Hodges-Lehmann estimator across U-band to IRAC ch4. The stacked SEDs for field galaxies are also derived with the same approach for comparison. We fit these stacked SEDs with BAGPIPES  <cit.> using the same parameter setting as for SED fitting for individual galaxies introduced in <ref>. The stacked SEDs and our best-fitting results are shown in Figure <ref>. 
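For reference, the Hodges-Lehmann location estimate used for the stacked flux densities is simply the median of all pairwise (Walsh) averages, as sketched below; whether the i = j pairs are included is a convention, and the function is applied per band to the member fluxes.

```python
import numpy as np
from itertools import combinations_with_replacement

def hodges_lehmann(values):
    """One-sample Hodges-Lehmann estimator: median of (x_i + x_j)/2 for i <= j."""
    vals = np.asarray(values, dtype=float)
    walsh = [0.5 * (a + b) for a, b in combinations_with_replacement(vals, 2)]
    return np.median(walsh)

# e.g. stacked_flux_band = hodges_lehmann(member_fluxes_in_band)
```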
For both low and high-mass galaxies, we find that the differences between cluster and field in their SEDs are mainly reflected in the extended population, while the SEDs of compact galaxies are rather similar. At the massive end, cluster galaxies are all relatively old, while the extended field galaxies are much younger, with a mass-weighted age of 0.04 Gyr, compared to 0.78 Gyr for extended cluster galaxies. The young age for the extended field galaxies is also clearly reflected in their weak Balmer break. For the extended low-mass galaxies, on the contrary, cluster galaxies are younger, consistent with their overall blue J - H color profile (Figure <ref>). We argue that the lack of relatively young massive galaxies in the cluster indicates that massive cluster galaxies are generally formed earlier than their field counterparts. For low-mass cluster galaxies, on the other hand, the apparent age difference between compact and extended ones suggests that structural transformation is likely related to their accretion history onto the cluster. §.§ Relation between galaxy sizes of cluster SFGs and their clustercentric radius Motivated by previous findings on a clustercentric-radius dependence of gas content in J1001 <cit.>, here we also explore whether and how the galaxy sizes change as a function of clustercentric radius. We constructed line-of-sight velocity versus projected position phase-space diagrams (Figure <ref>) of the spectroscopically confirmed member galaxies with the same approach as in <cit.>. The phase-space diagram allows us to infer the accretion history of these galaxies, where the parameter k_b=(r/R_200c)× ( |Δ V | /σ_cluster) is roughly proportional to the time since infall. Galaxies with lower k_b are more closely bound to the cluster. Since the cluster members in the outskirts lack HST/WFC3 coverage, our analysis is limited to those close to the cluster center within the virial radius R_200c, most of which are massive galaxies with M_⋆ > 10^10.5 M_⊙. In addition to their sizes, we also show their gas content and SFRs (with the same data from <cit.>), enabling a comprehensive understanding of the clustercentric-radius dependence of the physical properties of cluster SFGs. Figure <ref> clearly shows that most of the compact cluster SFGs are preferentially located in the upper-left region that is expected to suffer from strong ram pressure stripping <cit.>. Moreover, those with the lowest gas content and SFR (relative to similarly massive main-sequence SFGs) also tend to populate this region, providing further evidence that ram pressure stripping may be significantly affecting the structure, gas content, and star formation properties in this distant cluster. However, not all compact galaxies in the “stripped” region exhibit low gas content and SFRs, which may reflect their different dynamical states and/or time delays between the changes in the stellar structure and ISM properties. We defer detailed discussions on this subject to future work. § DISCUSSION AND CONCLUSION In this paper, based on a statistical and highly complete sample of SFGs (M_⋆ > 10^9.2 M_⊙) in the z=2.51 cluster J1001, we have revealed significant differences in the structure, color profiles, and stellar population properties between cluster and field galaxies, suggesting that the structural transformation and star formation quenching in cluster SFGs are accelerated.
We summarize these findings as follows: * Cluster SFGs are on average smaller than their field counterparts (∼0.1 dex), a difference that is most pronounced at the high-mass end (M_⋆ > 10^10.5M_⊙) with most of the high-mass SFGs (86%) lying below the mass-size relation of field galaxies (53%, if considering the 1σ scatter). Cluster SFGs are also on average more spheroidal with higher Sérsic indexes at all masses. * High-mass cluster SFGs exhibit steep negative color gradient in J - H (rest-frame U - B), irrespective of their compactness. At low masses, they show an overall much bluer color than their field counterparts and the compact SFGs (lying below the mean mass-size relation of all cluster SFGs) exhibit steeper positive color gradients. * The stellar populations of high-mass cluster SFGs are relatively old irrespective of their compactness. For low-mass cluster SFGs, the compact galaxies are generally older than the extended ones. The presence of strong mass dependence in the explored properties indicates different roles of the dense environment in the evolution of high- and low-mass systems. We hence discuss the two populations separately. §.§ Early rapid formation of massive SFGs in starbursting clusters For the high-mass cluster SFGs, their overall smaller sizes (and higher Sérsic indexes) than field galaxies and old stellar populations indicate they are in a more advanced transition phase into quiescent galaxies. Two effects may drive this accelerated transformation of the high-mass cluster SFGs. Firstly, the massive cluster SFGs likely formed most of their stars starting at earlier times than their field counterparts. So the size difference would naturally arise if the cluster SFGs are formed earlier when the Universe is denser. The similarity in the stellar populations between cluster and field compact SFGs suggests that the field compact SFGs may have similar formation history. Secondly, the overall steep negative gradients for both compact and extended cluster galaxies and the strong clustercentric dependence indicate that some other mechanisms are at work. We argue that such a steep negative color gradient indicates an inside-out growth scenario that most of the stars were likely formed during strong dissipative collapses in deep potential wells of galaxy cores <cit.>, which could be caused by intensive cold gas accretion in massive dark matter halos, major gas-rich mergers, or ram pressure compression, all of which are facilitated in high-z (proto)clusters close to the deep cluster potential <cit.>. §.§ Formation of low-mass compact SFGs with shrinking star-forming disks through gas stripping The low-mass, compact cluster SFGs are characterized by small sizes and steep positive color gradients, consistent with an outside-in quenching scenario <cit.>. In comparison, the color gradients for the field SFGs are much less prominent. This suggests that the cluster SFGs have likely gone through ram pressure or tidal stripping events <cit.>, during which the gas in the outer disk was stripped off, and subsequently star formation in the outer disk was quenched. The major difference between tidal and ram pressure stripping is that tidal stripping can also strip the stellar component and form tidal tails. As shown in Figure <ref>, most of these compact SFGs are rather isolated and do not have prominent tidal tails, indicating that either they have experienced tidal stripping at a much earlier time or the dominant mechanism is ram pressure stripping. 
In addition, compared to the extended cluster SFGs, the compact ones are older and have smaller SFRs, which suggests that they may have been accreted on the cluster earlier than the extended ones. Most likely, they have already survived a round-trip around the cluster core <cit.>, where the ram pressure stripping effect is expected to be stronger. The extended ones, on the other hand, are likely accreted/formed very recently and have not been strongly influenced by the cluster environment.   §.§ Comparison with previous works on J1001 and other high-z (proto)clusters Our results on the generally smaller sizes of massive galaxies are in good agreement with previous work on J1001 in <cit.>, which studied mainly the structural properties of the high-mass galaxies in our sample based on HST/WFC3 F110W imaging (rest-frame UV). This study extends the size measurements to a larger sample and lower stellar masses based on much deeper F125W and F160W (rest-frame optical) imaging. More importantly, with both the F125W and F160W imaging, we could further explore their color profiles and build more accurate SEDs, allowing probing the underlying physics of their structural differences between field galaxies. To put the findings on J1001 in the general context of galaxy formation in high-z clusters, here we compare our results on J1001 to other (proto)clusters at z ≳ 2. Unfortunately, due to difficulties in both member galaxy identification and obtaining high-resolution rest-frame optical imaging, there are not many structures at z ≳ 2 with detailed studies on the structural properties of a representative sample of SFG members. Among the very few (proto)clusters at z ∼ 2 with detailed structural studies of a rather complete SFG sample, the spiderweb (proto)cluster has probably the best multiwavelength data and the least unbiased view on the mass-size relation on its member galaxies. As shown in  <cit.>, a population of massive compact SFGs do exist, which is similar to though less extreme than this study. On the other hand, such a population of massive compact SFGs appear to be absent in those more mature clusters at lower redshift <cit.>, which generally reveal similar mass-size relation as field galaxies. In addition to the different types/evolutionary stages of structures, another potential factor that may cause different results on the mass-size relation for SFGs in high-z (proto)clusters is the selection of SFG members. In particular, Figure <ref> shows that many of the compact SFG members tend to have lower SFR and/or low gas content. As a result, their identification depends strongly on the depth of the observations, including optical-to-NIR spectroscopy, narrow-band imaging and CO observations. In this sense, the SFG members of J1001 represent probably one of the least biased SFGs samples among z ∼ 2 (proto)clusters by combining deep narrow-band imaging, NIR spectroscopy and CO observations. A less complete SFG member sample, which misses those less active SFGs, would tend to drop those most compact ones, yielding a mass-size relation more closer to the field. Keeping the aforementioned potential biases in various studies in mind, we emphasize that J1001 and other similar starbursting (proto)clusters is in a rapid transition phase from protoclusters or young clusters to mature clusters, with the most defining feature of a large concentration of massive SFGs (and starbursts) in the center of the (proto)cluster. 
Their large stellar masses, short gas depletion time, and compact sizes all suggest that they will soon be quenched and transition to quiescent galaxies. These may explain why strong environmental effects have been found for general member galaxies in J1001, which may also be expected in other similar structures <cit.>. In many aspects (dark matter halo masses, galaxy densities, etc.), their properties are already quite similar to mature clusters, which may explain why strong environmental effects have been observed. Very likely, these structures will soon evolve into the first mature clusters formed in the Universe, with already a dominant quiescent galaxy population at z ∼ 2 <cit.>. In the future, more detailed studies of a larger number of structures similar to J1001 may finally tell us how these first clusters and their member galaxies are assembled. To summarize, based on a complete sample of SF members in the cluster J1001 at z = 2.51, we find systematic differences in sizes, Sérsic indexes, color gradients, and stellar population properties between cluster and field galaxies. Our results provide clear evidence of the environmental effects in these young clusters, which are likely at the end of their mass assembly but still show active star formation. Specifically, for high-mass cluster SFGs, their systematically smaller sizes, older stellar populations, and steep negative gradients suggest an early formation time likely associated with strong dissipative collapse. For low-mass galaxies, a population of compact SFGs with steep positive color gradients indicates the prevalence of tidal and/or ram pressure stripping events in high-z clusters. Future high-resolution, spatially resolved studies of the distribution of their stars and gas components are required to confirm the dominant physical mechanisms. This work is supported by the National Natural Science Foundation of China (Project No. 12173017, and Key Project No. 12141301), and the China Manned Space Project with No. CMS-CSST-2021-A07. § GALFIT FITTING RESULTS The GALFIT fitting results (Table <ref>) of the 23 star-forming galaxies in J1001 are shown in Figure <ref>, Figure <ref>, Figure <ref> and Figure <ref>.
http://arxiv.org/abs/2307.04780v2
20230710082045
Comparison of Point Cloud and Image-based Models for Calorimeter Fast Simulation
[ "Fernando Torales Acosta", "Vinicius Mikuni", "Benjamin Nachman", "Miguel Arratia", "Bishnu Karki", "Ryan Milton", "Piyush Karande", "Aaron Angerami" ]
cs.LG
[ "cs.LG", "hep-ex", "hep-ph", "nucl-ex", "physics.ins-det" ]
[email protected] Physics Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA National Energy Research Scientific Computing Center, Berkeley Lab, Berkeley, CA 94720, USA Physics Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA Berkeley Institute for Data Science, University of California, Berkeley, CA 94720, USA Department of Physics and Astronomy, University of California, Riverside, CA 92521, USA Thomas Jefferson National Accelerator Facility, Newport News, Virginia 23606, USA Department of Physics and Astronomy, University of California, Riverside, CA 92521, USA Department of Physics and Astronomy, University of California, Riverside, CA 92521, USA Computational Engineering Division, Lawrence Livermore National Laboratory, Livermore CA 94550 Nuclear and Chemical Science Division, Lawrence Livermore National Laboratory, Livermore, CA 94550 Score based generative models are a new class of generative models that have been shown to accurately generate high dimensional calorimeter datasets. Recent advances in generative models have used images with 3D voxels to represent and model complex calorimeter showers. Point clouds, however, are likely a more natural representation of calorimeter showers, particularly in calorimeters with high granularity. Point clouds preserve all of the information of the original simulation, more naturally deal with sparse datasets, and can be implemented with more compact models and data files. In this work, two state-of-the-art score based models are trained on the same set of calorimeter simulation and directly compared. Comparison of Point Cloud and Image-based Models for Calorimeter Fast Simulation Aaron Angerami Received / Accepted ================================================================================ § INTRODUCTION Detector simulations are essential tools for data analysis by connecting particle and nuclear physics predictions to measurable quantities. The most precise detector simulations are computationally expensive. This is especially true for calorimeters, which are designed to stop most particles and thus require modeling interactions from the highest accessible energies down to the lowest ones. Well-established experiments typically have bespoke fast simulations that capture the salient aspects of the precise simulations (usually based on Geant <cit.>) at a fraction of the computational cost. Traditionally, fast simulations are constructed to reproduce a series of low-dimensional observables. Furthermore, assembling an effective fast simulation is time intensive. If there was a way to build a fast simulation automatically and using the full detector dimensionality, then data analysis at existing and developing experiments could be greatly enhanced. Deep learning (DL) has been used to build automated and high-dimensional fast simulations (`surrogate models') for calorimeters. Starting from Generative Adversarial Networks (GANs) <cit.> <cit.> and now including Variational Autoencoders <cit.> <cit.>, Normalizing Flows <cit.> <cit.>, and Diffusion Models <cit.> <cit.>, deep learning based calorimeter simulations have rapidly improved over the last years. They are even starting to be used in actual experimental workflows, such as the ATLAS Collaboration fast simulation <cit.>. The recent CaloChallenge <cit.> community comparison showcased the state-of-the-art methods deployed to increasingly granular current and future detectors. 
As segmented detectors, calorimeters are naturally represented as (possibly irregular) images. Nearly all proposed methods for DL-based calorimeter simulations are based on an image format (fixed grid of pixels). However, these data are unlike natural images in a number of ways, most notably in their sparsity. As such, image-based approaches pioneered in industry may not be the most effective for particle interactions. Since most cells in a calorimeter image are empty, a more natural representation of these data may be a point cloud. Point clouds are a set of attributes assigned to locations in space. In the calorimeter case, the attribute is energy and the location is the cell coordinates. A calorimeter point cloud would require far fewer numbers to specify than an image representation, since only cells with non-zero energy would be recorded. The main challenge for point cloud models, in contrast to image-based approaches, is that they must cope with variable-length outputs that respect permutation invariance. With a lag compared to image-based approaches, point cloud generative models for particle/nuclear physics applications have seen rapid development in recent years <cit.>. However, until recently, these models had never been applied to calorimeter simulations. The first (and until now, only) publication describing point cloud generative models applied to calorimeters is Ref. <cit.>, which proposed generating Geant `hits' (deposits of energy) prior to their discretization into cells. This innovative idea enables the separation of material interactions from readout geometry. However, the number of hits vastly exceeds the number of non-zero cells, which makes this task difficult. In this paper, we explore point cloud generative models applied directly to cell-level information. In other words, we take calorimeter images and compare state-of-the-art generative models that represent the same inputs as either images or (zero-suppressed) point clouds. As a case study, the two representations are compared using simulations of a high-granularity hadronic calorimeter, similar to the design planned for the ePIC detector at the future Electron-Ion Collider <cit.>. This paper is organized as follows. Section <ref> describes the DL models used for the comparison. Both the image-based and point-cloud representations are generated with diffusion models in order to make the comparison as direct as possible. The simulation of the calorimeter dataset is found in Sec. <ref>. Discussion of the advantages and disadvantages of both representations, as well as numerical results, are presented in Sec. <ref>. The paper ends with conclusions and outlook in Sec. <ref>. § DEEP LEARNING MODELS Generative models for detector simulation aim to precisely emulate physics-based models, like those based on Geant, but using far less time than the full simulation. With 𝒪(100) detector components, neural network architectures solely based on fully connected layers can efficiently produce high fidelity samples, resulting in surrogate models thousands of times faster than the standard simulation routines <cit.>. For higher detector granularity (𝒪(1k) - 𝒪(10k)), the use of data symmetries becomes crucial to achieve precision. These can be directly included in the model design through dedicated neural network architectures or included in the data pre-processing <cit.>.
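To make the relationship between the two representations concrete, the following sketch converts a voxelized shower image into a zero-suppressed point cloud and back. It is an illustration only, not the code used in this work; the array shapes, threshold, and function names are assumptions.

```python
import numpy as np

def image_to_point_cloud(image, threshold=0.0):
    """Convert a 3D calorimeter image to a zero-suppressed point cloud.

    Each row of the returned array is (x, y, z, energy) for a non-empty cell.
    """
    x, y, z = np.nonzero(image > threshold)
    energies = image[x, y, z]
    return np.stack([x, y, z, energies], axis=1)

def point_cloud_to_image(points, shape):
    """Scatter (x, y, z, energy) hits back onto a fixed grid, summing energies."""
    image = np.zeros(shape)
    idx = points[:, :3].astype(int)
    np.add.at(image, (idx[:, 0], idx[:, 1], idx[:, 2]), points[:, 3])
    return image

# Example: a sparse shower occupies only a handful of cells, so the point
# cloud needs far fewer numbers than the full image.
rng = np.random.default_rng(0)
image = np.zeros((55, 55, 55))
hits = rng.integers(0, 55, size=(40, 3))
image[hits[:, 0], hits[:, 1], hits[:, 2]] = rng.uniform(0.001, 1.0, size=40)
cloud = image_to_point_cloud(image)
print(image.size, "image entries vs", cloud.size, "point-cloud entries")
```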
For generative models such as normalizing flows, introducing flexible network architectures is often not trivial, as the requirements of model invertibility and a tractable Jacobian of the transformation place strong restrictions on the model design. A second difficulty is to achieve a stable training routine of the surrogate model. At finer granularities, neural network models tend to become larger to accommodate the data complexity, often resulting in unstable training schedules. This issue becomes more prominent in generative models such as variational autoencoders, where the latent space can vary rapidly, leading to an unstable response of the decoder network, or GANs, where the adversarial training requires careful tuning of the model hyperparameters to achieve stable training. Diffusion models are a class of generative neural networks that allow for stable training paired with high flexibility in the model design. Data is slowly perturbed over time using a time parameter t ∈ℝ that determines the perturbation level. The task of the neural network is to approximate the gradients of the log probability of the data, or the score function ∇_xlog p(x) ∈ℝ^D, based on data observations x∈ℝ^D in the D-dimensional space. This can be approximated by a denoising score-matching strategy <cit.>. In the implementation used in this paper, data observations x∼ p_data(x) are perturbed using the kernel 𝐱_t∼ q(𝐱_t|𝐱)=𝒩(𝐱_t;α_t𝐱,σ_t^2𝐈), with time-dependent parameters α and σ determining the strength of the perturbation to be applied. In the variance-preserving setting of diffusion processes, σ_t^2 = 1 - α_t^2. For the time-dependence, a cosine schedule is used such that α_t = cos(0.5π t). The loss function to be minimized is implemented using a velocity parameterization: ℒ_θ = 𝔼_ϵ,t‖𝐯_t - 𝐯̂_t,θ‖^2, where the time-dependent network output with trainable parameters θ, 𝐯̂_t,θ, is compared with the velocity of the perturbed data at time t, 𝐯_t ≡α_tϵ-σ_t𝐱, with ϵ∼𝒩(0,𝐈). The score function is then identified as ∇_xlogp̂_θ(𝐱_t) = -𝐱_t - (α_t/σ_t)𝐯̂_t,θ(𝐱_t). The data generation from the trained diffusion models is implemented using the DDIM sampler proposed in Ref. <cit.> that can be interpreted as an integration rule <cit.> with update rule specified by: 𝐱_s = α_s𝐱̂_θ(𝐱_t) + σ_s(𝐱_t - α_t𝐱̂_θ(𝐱_t))/σ_t. For a fair comparison, all diffusion models are trained using the same score-matching strategy and a fixed number of 512 time steps during sampling. The fast point cloud diffusion model (FPCD) follows <cit.>, where a permutation equivariant estimation of the score function is obtained by the combination of a DeepSets <cit.> architecture with attention layers <cit.>. During the point cloud simulation, two models are also defined: one that learns the number of non-empty cells, conditioned on the initial energy of the incoming particle, and one model that learns the score function of the normalized point cloud, also conditioned on the energy of the particle to be simulated and the number of hits to be generated. This model is trained on Dataset 1, described in Sec. <ref>. The model trained on the image dataset (CaloScore) is adapted from <cit.> with a few modifications.
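As a rough, self-contained illustration of the training objective and sampler just described (a sketch only: the placeholder network `v_model`, the array shapes, and the plain-NumPy setting are assumptions, not the architectures used in this work):

```python
import numpy as np

def alpha_sigma(t):
    """Cosine schedule for the variance-preserving diffusion process."""
    alpha = np.cos(0.5 * np.pi * t)
    sigma = np.sqrt(1.0 - alpha ** 2)
    return alpha, sigma

def velocity_loss(v_model, x, rng):
    """Denoising score-matching loss in the velocity parameterization."""
    t = rng.uniform(0.0, 1.0)
    alpha, sigma = alpha_sigma(t)
    eps = rng.standard_normal(x.shape)
    x_t = alpha * x + sigma * eps        # perturbed data
    v_target = alpha * eps - sigma * x   # velocity of the perturbed data
    return np.mean((v_target - v_model(x_t, t)) ** 2)

def ddim_step(v_model, x_t, t, s):
    """One deterministic DDIM update from time t to an earlier time s < t."""
    alpha_t, sigma_t = alpha_sigma(t)
    alpha_s, sigma_s = alpha_sigma(s)
    v_pred = v_model(x_t, t)
    x_hat = alpha_t * x_t - sigma_t * v_pred   # implied clean-data estimate
    return alpha_s * x_hat + sigma_s * (x_t - alpha_t * x_hat) / sigma_t

# Dummy network standing in for a trained score model.
v_model = lambda x_t, t: np.zeros_like(x_t)
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
print(velocity_loss(v_model, x, rng), ddim_step(v_model, x, 0.9, 0.5).shape)
```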
Compared to the original implementation, the calorimeter simulation task is now broken down into two diffusion models: one that learns only the energy deposits in each layer of the calorimeter, conditioned on the initial energy of the particle to be simulated, and one model that learns to generate normalized voxels per layer, conditioned on the energy deposition in each layer and the initial energy of the particle to be simulated. Additionally, the original U-Net <cit.> model is combined with attention layers. These changes increase the model expressiveness and the generation fidelity. This model is trained on Dataset 2, described in Sec. <ref>. § DETECTOR AND DATA DESCRIPTIONS §.§ Calorimeter Simulation The DD4HEP framework <cit.> is used to run Geant simulations of a high-granularity iron-scintillator calorimeter (based on the CALICE-style design <cit.>), which has dimensions similar to those of the forward hadronic calorimeter in the future ePIC detector (LFHCAL <cit.>) at the EIC. Specifically, the sampling structure comprises 0.3 cm scintillator tiles sandwiched between 2.0 cm thick steel plates. It consists of a total of 55 layers. The transverse area of the scintillator is set to 10 cm×10 cm, somewhat larger than in Ref. <cit.>. It adopts a non-projective geometry with tower elements arranged in parallel to the z axis and has its front face at z=3.8 m. 1.7 million events of single π^+ particles incident on the center of the calorimeter are simulated. The incident momentum, P_Gen., was generated uniformly in log_10 space in the range 1.0 < P_Gen. < 125 GeV/c. In order to hit the center of the calorimeter, the pions were generated with a polar angle of θ_Gen. = 17^∘. Because the detector is symmetric about ϕ, the particles are generated in the range 0^∘ < ϕ_Gen. < 360^∘. An energy threshold corresponding to 0.3 MeV is used to select hits for further analysis. §.§ Datasets Dataset 1 is the point cloud representation of the Geant showers, while Dataset 2 represents the same showers using the image representation. Both Dataset 1 and Dataset 2 used in training share the same parent Geant simulation, such that the fast point cloud diffusion model and the image model are trained on different representations of the same set of calorimeter showers. Dataset 1 is created by taking the Geant simulation and converting it to a format based on JetNet data <cit.>, which stores information on jets and their constituents in a zero-suppressed point cloud representation. The Geant data is stored in files containing two datasets, clusters and cells. The cluster dataset contains the P_Gen. of the incident pion, as well as the number of hits in the calorimeter. The cell data comprises a constant number of 200 cells per event. Empty cells, or cells with deposited energy below the threshold, are masked, with all values set to 0.0, and ignored during training. The x, y, and z distributions of the Geant simulation are initially discrete, resulting from the digitization step of the simulation, with values equal to the centers of the cells in each dimension. The point cloud model struggles to learn extremely sharp features, as the score function is not well-defined for discrete inputs. To circumvent this, a uniform smearing within a cell-width is applied to the cells along each dimension to obtain continuous distributions for the final point cloud dataset.
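A minimal sketch of this smearing step, assuming the hit coordinates are stored as an (n_hits, 3) array of cell centers (array and function names are illustrative, not the code of this work):

```python
import numpy as np

def smear_hits(hits, cell_widths, rng):
    """Uniformly smear discrete cell-center coordinates within one cell width.

    `hits` holds the (x, y, z) cell centers of the non-empty cells and
    `cell_widths` the (dx, dy, dz) cell sizes. Re-binning the output with the
    cell width recovers the original discrete histograms, but the resulting
    x, y, and z distributions are smooth rather than a set of delta functions.
    """
    shift = rng.uniform(-0.5, 0.5, size=hits.shape) * np.asarray(cell_widths)
    return hits + shift
```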
This smearing maintains the same distributions at histogram-level when binning according to the cell-width, but yields a point cloud dataset with smooth x, y, and z distributions. Without this smearing, the distributions in x, y, and z resemble a series of delta functions that the point cloud model struggles to learn. The point cloud model is trained on this smeared point cloud representation of the Geant simulation. Dataset 2 is created by converting the point cloud dataset into an image format. Images at the original granularity would be too large for the generative model. The calorimeter cells were therefore clustered into groups of 5 along each axis of the detector to create voxels, where 5×5×5 cells = 1 voxel. The energy in each of the cells making up a voxel was summed and assigned to the final voxel's total energy. The final image format consists of 11×11×11 voxels. A hit in the voxelized dataset, as referenced in Section <ref>, is defined as any voxel with energy deposition above threshold. For the final comparison, generated samples from the point cloud model are voxelized using the same method as for Dataset 2. All comparisons are in this image format, at the same resolution of 11 × 11 × 11 voxels per image. Images representing the full resolution of the calorimeter with 55×55×55 voxels were not used, as this would result in unmanageably large datasets (see Table <ref>), and would represent the largest calorimeter image training ever done. The point cloud model was trained on the full resolution because point clouds naturally represent the calorimeter at full granularity. Training the point cloud model on this more natural representation is in line with the goal of this work to investigate the advantages/disadvantages of the two representations of the calorimeter data. It is also for this reason that the generated point cloud distributions are shown separately, while the direct comparisons between models are done in the image representation. Investigating possible advantages of a point-cloud model trained directly on the voxelized dataset is left to future work. § RESULTS All generated samples along with Geant are converted to the same image format at the same resolution of 11× 11× 11 voxels per event for fair comparison. A variety of distributions are used to evaluate the quality of the generated images. After comparing calorimeter images generated by both models, the point cloud representation of Geant is compared to the generated samples of the point-cloud model to provide additional insight into the previous image-based comparison. For all comparisons, the Earth mover's distance (EMD) <cit.>, also known as the 1-Wasserstein distance <cit.>, between generated distributions and Geant distributions is calculated. The EMD score is a distance-like measure of the dissimilarity between two distributions. It roughly represents the minimum amount of work needed to transform one distribution into another. While this is not the only possible metric, it is a standard and widely-used statistic that was also the main distance deployed in <cit.>, where an image based model was compared to a Wasserstein-GAN. All EMD scores in Figures <ref>, <ref> and <ref> are calculated on the final voxelized distributions. Figure <ref> shows a qualitative assessment of the generative models using the 2-dimensional distribution of the average energy deposition in three layers. All voxels with an expected energy deposition above 0 are populated in both the image and point cloud based models, with very few additional hits.
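Both the 5×5×5 voxelization and the EMD scores quoted in the following reduce to standard array and SciPy operations; the snippet below is a sketch under assumed array shapes and names, not the analysis code of this work:

```python
import numpy as np
from scipy.stats import wasserstein_distance

def voxelize(image, group=5):
    """Sum the energies of group x group x group cells into single voxels."""
    nx, ny, nz = image.shape
    grouped = image.reshape(nx // group, group, ny // group, group, nz // group, group)
    return grouped.sum(axis=(1, 3, 5))

def emd_score(generated, reference):
    """EMD (1-Wasserstein distance) between two 1D observables,
    e.g. the total deposited energy per shower."""
    return wasserstein_distance(generated, reference)

# Example: a full-granularity 55x55x55 image becomes an 11x11x11 image.
rng = np.random.default_rng(0)
full_image = rng.uniform(0.0, 1.0, size=(55, 55, 55))
print(voxelize(full_image).shape)
print(emd_score(rng.normal(10, 2, 1000), rng.normal(10.5, 2, 1000)))
```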
Returning to Fig. <ref>, the calorimeter showers have diverse shapes, as well as different overall distributions of voxels, due to the variation of ϕ_Gen.. The qualitative similarities in each image in Fig. <ref> indicate that the models reproduce the various showers from the training dataset well. Each image contains a ring due to θ_Gen. being fixed while varying ϕ_Gen.. Table <ref> shows the model size, size of each dataset, and time to generate 100k calorimeter showers. The disk size and sample time under the point cloud model are for showers in the point cloud representation. The AUC is obtained from a classifier trained to distinguish the samples of both models only in the voxelized image format. Both models have very good AUCs, reasonably close to 0.5, with the image model having the lower AUC. The point cloud model is smaller by a factor of 4 compared to the image based model, and samples events 3 times faster. Lastly, the point cloud dataset requires over 100 times less disk space than the image format at full granularity. Figure <ref> compares the total energy deposited in the calorimeter and the total number of calorimeter hits, where a hit is defined as any voxel with energy above threshold. The EMD is also calculated between Geant and the different generative models. Both the image-based diffusion model and the point-cloud based diffusion model are in good agreement with Geant at small deposited energies, deviating by no more than 10%. At the highest deposited energies, however, both diffusion models begin to fall away from Geant, with the point-cloud model generating less energy, and the image based model generating slightly more energy than Geant. These trends begin at about 10 GeV, with the point-cloud model deviating slightly earlier. The point-cloud model also shows a slightly higher EMD score than the image based model. Events in the region where the deviations are largest, past 20 GeV of deposited energy, are rare, and statistical fluctuations begin to dominate the Geant distributions. The number of hits shows a similar trend, though with larger deviations. At a small number of hits, both show good agreement with Geant, with deviations slightly above 10%. At 15 or more hits, both models begin to deviate well past 10%, with the point cloud model oversampling the number of hits, and the image based model generating fewer hits than Geant. Figures <ref> and <ref> show the average deposited energy as a function of the x, y, and z coordinates. Both models struggle in the first and last layers in the x and y coordinates, but show good agreement in the middle layers. While the image-based model shows larger deviations in the first and last layers of the calorimeter compared to the point-cloud model, it has an overall lower EMD in both distributions. The two-pronged feature of these distributions is a result of generating the pions at a fixed polar angle and varying ϕ. It should be noted that there are few to no hits in the first and last x and y layers of the calorimeter, so even a very small deviation from Geant will result in a large deviation percentage (bottom panels of Fig. <ref> and <ref>). Similarly, as there are fewer hits towards the back of the detector, deviations increase slightly for the very last layers. However, the z-distributions show both models in very good agreement with the original Geant predictions, a possible effect of the z-distribution of hits being less dependent on the generated θ and ϕ ranges. All three distributions show the point cloud samples are systematically lower than the original Geant distributions.
This indicates the point cloud model would benefit from learning the energy per layer directly, as is done in the image model described in Sec. <ref>. This difference likely explains why this small bias is observed in the point cloud model, but not in the image model, and is an avenue for improving the point cloud model. Following <cit.>, a classifier was trained to distinguish between generated showers and Geant showers. The classifier comprises two fully connected layers of size 256 using the ReLU activation function. The classifier is trained only on vectors of voxelized images of each dataset. The area under the receiver operating characteristic curve (AUC) for the image model was 0.673. The AUC for the point-cloud model was 0.726. Generally, being closer to 0.5, where the classifier is maximally confused, is the target. However, the AUCs obtained by both models are very promising, as having an AUC even slightly below 1.0 is non-trivial. A key advantage of the point cloud model is that the distributions at the sub-voxel level can be shown. The point cloud model already simulates the data at the original granularity of the calorimeter, and voxelization is only necessary for the image representation. The original output of the point cloud model is compared to the continuous (or smeared) Geant distributions. Figure <ref> shows the number of hits in the point cloud representation of the calorimeter showers. In the point-cloud representation, a hit is defined as any cell with deposited energy above threshold. The point-cloud model reproduces the total number of cell hits well, much better than the voxel hit distribution, shown in Fig. <ref>. This may indicate that while the point cloud model is overall similar to Geant in both representations, small deviations in point cloud distributions can be summed into larger deviations during the voxelization process, where 125 individual cells are combined into a single voxel. However, there is a large symmetry group under which mismodelings in the bigger space may not affect the modeling in the coarser space, so further investigation is needed. Nevertheless, the very good agreement with Geant in the number of cell hits and the degrading agreement in the number of voxel hits indicates that the first diffusion model of the point cloud model architecture is performing well, while the second model, responsible for sampling the cell distributions, would likely benefit from additional tuning. Similar conclusions can be derived from Fig. <ref>, which shows the generated point samples at the full detector granularity, in good agreement with Geant. Fig. <ref> shows the average x, y, and z coordinate distributions, as well as the cell log_10E distribution in the point representation. Again, there are larger relative deviations in the first and last layers in the x, y, and z coordinates, where there are very few hits, just as in the image representation. However, there is very good agreement with the Geant simulation in layers containing a reasonable number of hits. § CONCLUSION AND OUTLOOK In this paper, we make the first direct comparison between two score based generative models using either images or point clouds as representations of the same training data. We use Geant calorimeter simulations of a high-granularity hadronic calorimeter. Both models perform well for most distributions, with very similar AUCs, but the image-based diffusion model invariably has a lower EMD in each comparison to Geant.
Overall, the performance of the point-cloud diffusion model is very close to that of the image model. This is despite the point cloud model being disadvantaged in this work in a few important ways. First, the calorimeter showers from the FPCD model are closest to Geant in the point cloud representation at the full calorimeter granularity, as shown in Figs. <ref> and <ref>, but they are later voxelized for comparison. This may compound mismodeling during the voxelization; however, further investigation is needed. Second, the point cloud model is adapted from a model architecture initially designed for jet data from the JetNet datasets. While the high-level structure of the datasets is very similar, the data themselves are quite different. For example, the first diffusion model making up the point cloud model was initially much larger, as predicting the jet multiplicity is in general a more difficult problem than predicting the number of non-empty cells in a calorimeter shower. Reducing the size of the first diffusion model of the point cloud model architecture had no impact on performance while speeding up training. The second diffusion model making up the point cloud model architecture, which is responsible for sampling the cell x, y, z, and E, was directly adapted from <cit.>. Further tuning of the point cloud model, particularly the cell model, can likely close the small remaining gap in performance. The image model, in contrast, is based on CaloScore, which was tuned specifically for calorimeter showers. Lastly, the image-based model uses the energy deposition in each layer in addition to the generated particle momentum to condition the second diffusion model making up its architecture. The second diffusion model making up the point cloud model is solely conditioned on the generated particle momentum. This might explain why the point cloud model has systematically lower mean energy distributions (see Figs. <ref> and <ref>) compared to both Geant and the image based model. These potential sources of improvement in the point cloud model should not detract from its already very reasonable performance, deviating from Geant by more than 10% only in the sparsest of layers, where the image based model also struggles. At the same time, the point cloud model offers several advantages over the image model. First, the sheer size of the data: the point cloud data saved to HDF5 files is a factor of 100 smaller than the image based dataset at full granularity, using the same zlib compression and no voxelization. As calorimeters continue to increase in granularity, this difference will only increase. Second, information is lost during the voxelization process; cell hits with the same x, y, z coordinates but different energies are summed over in the image representation. This is true even if images are produced at the full granularity of the calorimeter, where hits within single cells are summed over. This means that voxelized datasets cannot naturally be reverted back to a point cloud representation. Additionally, as was shown in this work, the generated point clouds can be voxelized afterwards, or converted into other representations that better fit specific use cases. This work establishes a benchmark for future research on generative models, offering valuable insights into the challenges of modeling hadronic showers in highly granular calorimeters using image-based techniques, while also exploring the potential of point-cloud methods.
The current advantages of point clouds, in combination with improvements to close the remaining performance gap described earlier, will likely make point cloud based models a clear choice for highly granular calorimeters. This work should serve as a reference for studies utilizing future calorimeters based on the CALICE design, including those intended for use in CMS at the LHC and ePIC at the EIC. § CODE AVAILABILITY The code used to produce the point cloud results shown in this document is available at <https://github.com/ftoralesacosta/GSGM_for_EIC_Calo>. The code for the image based model and comparisons of images is available at <https://github.com/ViniciusMikuni/Calo4EIC>. Example Geant4 datasets and generated samples are available at <https://zenodo.org/record/8128598>. § ACKNOWLEDGMENTS We acknowledge support from DOE grant award number DE-SC0022355. This research used resources from the LLNL institutional Computing Grand Challenge program and the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231 using NERSC award HEP-ERCAP0021099. M.A. acknowledges support through DOE Contract No. DE-AC05-06OR23177 under which Jefferson Science Associates, LLC operates the Thomas Jefferson National Accelerator Facility. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344.
http://arxiv.org/abs/2307.06056v1
20230712101858
How Many Papers Should You Review? A Research Synthesis of Systematic Literature Reviews in Software Engineering
[ "Xiaofeng Wang", "Henry Edison", "Dron Khanna", "Usman Rafiq" ]
cs.SE
[ "cs.SE" ]
978-1-6654-5223-6/23/$31.00 2023 IEEE How Many Papers Should You Review? A Research Synthesis of Systematic Literature Reviews in Software Engineering Xiaofeng Wang Free University of Bozen-Bolzano Bolzano, Italy [email protected] Henry Edison Blekinge Institute of Technology Karlskrona, Sweden [email protected] Dron Khanna Free University of Bozen-Bolzano Bolzano, Italy [email protected] Usman Rafiq Free University of Bozen-Bolzano Bolzano, Italy [email protected] ======================================================================================================================================================================================================================================================================================================================================================== [Context] Systematic Literature Review (SLR) has been a major type of study published in Software Engineering (SE) venues for about two decades. However, there is a lack of understanding of whether an SLR is really needed in comparison to a more conventional literature review. Very often, SE researchers embark on an SLR with such doubts. We aspire to provide more understanding of when an SLR in SE should be conducted. [Objective] The first step of our investigation was focused on the dataset, i.e., the reviewed papers, in an SLR, which indicates the development of a research topic or area. The objective of this step is to provide a better understanding of the characteristics of the datasets of SLRs in SE. [Method] A research synthesis was conducted on a sample of 170 SLRs published in top-tier SE journals. We extracted and analysed the quantitative attributes of the datasets of these SLRs. [Results] The findings show that the median size of the datasets in our sample is 57 reviewed papers, and the median review period covered is 14 years. The number of reviewed papers and review period have a very weak and non-significant positive correlation. [Conclusions] The results of our study can be used by SE researchers as an indicator or benchmark to understand whether an SLR is conducted at a good time. SLR, Systematic Literature Review, Methodological Study, Research Synthesis, Software Engineering § INTRODUCTION Systematic literature reviews (SLRs) have a strong presence in Software Engineering (SE) literature, and the number of SLR studies has grown steadily in the last two decades <cit.>. SLRs, like any research, should be performed carefully, following rigorous processes, and results should be reported and interpreted appropriately. They require considerably more effort than traditional literature reviews <cit.>. Therefore, SE researchers should not commit to conducting an SLR without understanding whether it is worth doing. The worthiness can be understood from different perspectives, among which an important one is timing, i.e., when is the appropriate time to conduct an SLR? Or is there an appropriate time at all? Despite several guidelines and tertiary studies on SLRs in SE <cit.>, no clear indications are provided on the right time to conduct an SLR on a research question, area, or phenomenon in the SE research field. We aspire to fill this observed knowledge gap. As the first step of our research, we investigated the datasets, i.e., the reviewed papers, in SLRs in SE. We assumed that analysing the dataset of an SLR can reveal the development status of a research topic or area when an SLR was conducted. 
Therefore, we asked the following research question: What are the characteristics of the datasets of SLRs in SE? To answer the research question, we conducted a research synthesis on a sample of SLRs published in top-tier SE journals. For each of the SLR studies in the sample, we extracted relevant data on the reviewed papers, including the number of reviewed papers and the period covered by these reviewed papers. The collected data was analysed through multiple angles to reach the answer to the posed research question. The findings reported in the paper provide insights into the characteristics of the datasets used by SLRs in SE. SE researchers can take our findings as an indicator or benchmark to understand whether an SLR is conducted at a good time. The rest of the paper is organised as follows. Section <ref> provides a review of the guidelines and tertiary studies that are relevant to our study. The data collection process we followed to build our study sample is described in Section <ref>. The following section, Section <ref>, reports the findings, which are discussed in Section <ref>. Lastly, in Section <ref>, we outline the next steps of our research on understanding the temporal aspects of SLRs in SE. § RELATED WORK The widely used guidelines of SLRs in SE are provided in <cit.>, in which the reasons for performing SLRs and their importance are argued. Later on, guidelines for the search strategy to update SLRs in SE are provided in <cit.>. Recently, Kitchenham et al. <cit.> presented an integrated set of guidelines to address reporting problems in secondary SE studies. Apart from these guidelines, several tertiary studies in SE exist in the literature. These studies assess the impact of SLRs and provide an annotated catalogue of SLRs (e.g., <cit.>), record the reported experiences of conducting SLRs for the benefit of new researchers <cit.>, or review SLRs in a specific SE area (e.g., Software Engineering Education <cit.>). Few existing guidelines or tertiary studies in SE suggest the appropriate time to conduct an SLR on a research question or topic. The study of Mendes et al. <cit.> is the only one that we are aware of investigating the timing aspect of SLRs in SE. Their goal is to understand when is the appropriate time to update SLRs in SE. Using a decision framework employed in other fields, they analysed 20 SLRs which are updates of previously conducted SLRs. The study finds that 14 of the 20 updated SLRs need not be conducted. The work of Mendes et al. <cit.> provides a good motivation to examine the necessity of conducting first-time SLRs in SE, which is not investigated by these authors or in any existing SE literature as far as we are aware of. More specifically to the focus of this paper, no suggestion is provided on how many papers should be reviewed in an SLR in SE. Understandably, suggestions like this are difficult to offer since each research topic or area has a different development pace, has a different number of researchers working on it, and therefore accumulates evidence and knowledge at a different speed. Nevertheless, it would be useful to have an overall understanding of the datasets used by SLRs in SE, since a dataset, i.e., reviewed papers, in an SLR represents the knowledge accumulated on the research topic under the investigation. § RESEARCH APPROACH To answer the research question, we employed research synthesis. 
Research synthesis is an umbrella term referring to methods used to summarise, integrate, combine, and compare the findings of different studies on a particular topic or research question <cit.>. Research synthesis aims at analysing and evaluating multiple studies to integrate and provide new interpretative explanations about them <cit.>. We conducted a research synthesis of a sample of SLRs in SE, focusing on the datasets used in these SLRs to investigate how many papers should be reviewed in an SLR. §.§ Data collection §.§.§ Search strategy Even though we were not conducting an SLR study, we followed the search strategy defined in <cit.> to build our sample. We did not attempt to search for all relevant SLRs in SE exhaustively but rather to sample enough studies for analysis. Therefore, we focused on SLRs published in top-tier SE journals as identified by Wong et al. <cit.>. This is a trade-off between considering as much literature as possible and at the same time accumulating and extracting reliable information. As reported in <cit.>, more than 600 SLRs were published between 2004 and 2016, and there is a trend that the number has been growing since. Therefore, the number of SLRs published in journals can already provide enough data for the first step of our study. To build our search string, we combined the journals' titles with the synonyms of “systematic literature reviews” <cit.>. Our generic search string is: (“systematic review" OR “research review" OR “research synthesis" OR “research integration" OR “systematic overview" OR “systematic research synthesis" OR “integrative research review" OR “integrative review" OR “systematic literature review" OR “literature review") AND (“Information and Software Technology" OR “Journal of Systems and Software" OR “IEEE Software" OR “IEEE Transactions on Software Engineering" OR “Software: Practice and Experience" OR “Software Testing, Verification and Reliability" OR “Transactions on Programming Languages and Systems" OR “Transactions on Software Engineering and Methodology" OR “Journal of Software: Evolution and Process" OR “International Journal on Software Tools for Technology Transfer" OR “Empirical Software Engineering") We ran the search string in Scopus on Feb 24, 2023, and retrieved 412 published papers. Each paper was inspected by two authors to decide whether it is an SLR study or follows SLR guidelines. In the cases where the two authors did not agree on the decision, a third author's vote was required. We excluded the studies that do not follow SLR guidelines (e.g., conventional/ad-hoc reviews). We also excluded mapping studies, grey literature reviews, multi-vocal literature reviews, tertiary studies or SLR updates. Some studies published in IEEE Software are typically summaries of existing SLRs that have already been published in other venues. We checked the venues where the original SLRs were published. We included the original SLRs in our sample if the venues are among the journals we used to search for SLRs. §.§.§ Data extraction A key element of an SLR is dataset, i.e., the papers reviewed in an SLR. There are various facets of a dataset that could be relevant to our study. In this first step, we focused on the following three facets: * The number of reviewed papers in the SLR; * The earliest publication year of the reviewed papers; and * The latest publication year of the reviewed papers. If an SLR does not report the information above or provide detailed information on how to get them, we excluded it from our sample. 
The unit of analysis in our study is the SLR study itself. Therefore, if two SLR studies were conducted and reported in one paper, we considered two data points from that paper. Moreover, if a published paper contains both an SLR and other review studies (e.g., systematic mapping study) or empirical studies (e.g., case studies, experiments, etc.), we only included the paper if we were able to extract the SLR-related data. For each of the identified SLR studies, the meta-data of the paper in which it is published (e.g., title, publication year, authors, publication venue) were extracted automatically through the “export” feature of Scopus. Ultimately, we collected data from 170 SLRs, constituting our final data analysis sample. The final version of the dataset is accessible through a publicly available repository <cit.>. §.§ Data analysis In the data analysis phase, apart from the meta-data of the publications containing the SLRs, we defined the following two variables directly related to the dataset of an SLR: * NoRP: Number of Reviewed Papers in an SLR; and * RPC: Review Period Covered by the reviewed papers. RPC = the latest publication year of the reviewed papers in an SLR - the earliest publication year of the reviewed papers in an SLR + 1 After obtaining the descriptive statistics (min, max, median, mean and standard deviation) of NoRP and RPC, we explored whether there was any relation between the two variables. That is, to understand whether the number of reviewed papers in an SLR can be indicated by how long the research topic under the study has been explored. § RESULTS §.§ Sample overview Before reporting the results related to the two variables, we provide an overview of the 170 SLRs in our sample, as shown in Fig. <ref> and Table <ref>. Fig. <ref> shows the distribution of the SLRs across the years. In our sample, the two earliest SLRs were published in IST in 2008. The number of SLRs published in top-tier journals has been growing over the years, despite having small dips in certain years. Table <ref> shows the distribution of these SLRs across the journals. It can be seen from Table <ref> that the Journal of Systems and Software has the most SLRs (70), followed by Information and Software Technology (40). Journal of Software: Evolution and Process and Empirical Software Engineering have similar numbers of SLRs (18 and 16, respectively). §.§ Characteristics of the datasets of SE SLRs Table <ref> lists the descriptive statistics of the two variables, NoRP and RPC. As shown in the first column “NoRP (n=170)” of Table <ref>, the number of reviewed papers, or the size of the datasets of the SLRs, varies greatly (sd=95.60). The minimum number of reviewed papers is 6 (in one SLR), and the maximum is 925 (in one SLR). The median size is 57, and the mean value is 80.59, which means the number of reviewed papers is right-skewed. Indeed, after removing the outliers (the four largest numbers of NoRP) to make Fig. <ref> more readable (otherwise, the majority of the data points will be squeezed into a small area of the diagram), the difference between the median and mean values is reduced, as well as the standard deviation, as shown in the second column “NoRP (n=166)” of Table <ref>. To show the distribution of NoRP more clearly, we plotted the histogram using the sample of 166 SLRs, as shown in Fig. <ref>. The red line indicates the mean value. It can be seen in Fig. <ref> that the dataset sizes ranging from 53 to 57 reviewed papers are most common, used by fourteen SLRs. 
The other common size ranges are between 28 and 32 (thirteen SLRs), between 33 and 37 (twelve SLRs), and between 68 and 72 reviewed papers (twelve SLRs). As shown in the third column, “RPC (n=170)” of Table <ref>, this variable's median and mean values converge to 14 years, with a standard deviation of 8.22. The longest review period covered by the reviewed papers in an SLR is 41 years. The SLR with the longest review period was published in TSE in 2021. One hundred and sixty-six papers were reviewed in this SLR, ranging from 1977 to 2017. What is somewhat surprising is the shortest review period (min value of RPC), which is 2 years. The SLR with the shortest review period was published in Software Testing, Verification and Reliability in 2014. Despite the short review period, the number of reviewed papers is fifty-four, close to the median dataset size. These fifty-four reviewed papers were published between 2009 and 2010. Fig. <ref> shows the distribution of RPC, the review periods covered by the reviewed papers in SLRs, using the sample of 170 SLRs. No outlier is perceived since all values are within a reasonable range (between two and 41). The red line indicates the mean value. As shown in Fig. <ref>, fifteen SLRs have reviewed papers published within 14 years, which is the most common review period covered and also the median value of RPC. The next most common review period covered is 6 years (thirteen SLRs have this review period), followed by 11 years (twelve SLRs). Fig. <ref> is the scatterplot of the two variables (NoRP vs RPC) using the sample of 170 SLRs. It shows no observable relationship between the number of reviewed papers in an SLR and the review period covered by that collection of reviewed papers. The scatterplot can be better observed using the sample of 166 SLRs as shown in Fig. <ref>. Using both samples, we tested the correlation between NoRP and RPC. Since the two variables are not normally distributed (based on the results of the Shapiro-Wilk test <cit.>), we tested their correlation using the Spearman rank correlation coefficient <cit.> with a 0.95 confidence level. For the sample of 170 SLRs, the results indicate a very weak positive correlation between the two variables (rho = 0.1310, p-value = 0.0886). Similar results were obtained using the sample of 166 SLRs (rho = 0.1357, p-value = 0.0814). However, in both cases, the p-value is above 0.05, which indicates that there is not sufficient evidence to support the correlation between the two variables in both samples. § DISCUSSION The quantitative analysis conducted on the datasets used by the SLRs in our sample shows that there is no single magic number that SE researchers can rely on to decide whether it is an appropriate time to conduct an SLR. It evidently depends on the research question or topic under investigation. However, the median number of reviewed papers in the SLRs (57) and the typical review period covered (14 years) can serve as a first useful indicator or benchmark to evaluate whether the research on a given topic has accumulated enough studies to warrant an SLR. SE researchers can estimate the dataset they will obtain or compare what they have already obtained to understand whether they are dealing with a smaller or larger dataset than the average ones used by the SLRs in SE. They should be more cautious when the dataset is extremely small or large, which may signal a potential issue in the literature search or inclusion/exclusion processes.
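For reproducibility, the statistical checks described above amount to a few lines of SciPy; the sketch below uses placeholder arrays in place of the extracted NoRP and RPC values (variable names and data are illustrative assumptions, not the study's actual dataset):

```python
import numpy as np
from scipy import stats

# Placeholder arrays standing in for the extracted data: the number of
# reviewed papers (NoRP) and review period covered (RPC) of each SLR.
rng = np.random.default_rng(1)
norp = rng.integers(6, 300, size=170)
rpc = rng.integers(2, 42, size=170)

# Descriptive statistics (min, max, median, mean, standard deviation).
for name, values in [("NoRP", norp), ("RPC", rpc)]:
    print(name, values.min(), values.max(), np.median(values),
          values.mean().round(2), values.std(ddof=1).round(2))

# Shapiro-Wilk normality test; small p-values motivate a rank-based measure.
print("Shapiro-Wilk p-values:", stats.shapiro(norp).pvalue, stats.shapiro(rpc).pvalue)

# Spearman rank correlation between NoRP and RPC.
rho, p_value = stats.spearmanr(norp, rpc)
print(f"rho = {rho:.4f}, p-value = {p_value:.4f}")
```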
Additionally, when the number of reviewed papers is extremely small, it may mean that the research field is not mature enough, and an SLR is not needed at that point in time. On the contrary, when the number is extremely large, it indicates that the SLR should have been conducted earlier. One major limitation of our study is that we constrained our SLR sampling to those published in a selected list of top-tier SE journals. We did not include SLRs published in SE conferences. Therefore, the findings cannot be generalised to the SLRs published in those venues. Another limitation is that we used the Number of Reviewed Papers (NoRP) as an indicator. This number is only obtainable after the relevant papers are retrieved, and inclusion/exclusion criteria are applied, which means a significant amount of effort has already been invested before the NoRP can be known. This limits the usefulness of NoRP as an early-stage indicator of “when” to conduct an SLR. § NEXT STEPS AND FUTURE WORK This paper reports the initial findings of our study on the temporal aspects of SLRs in SE. Our eventual goal is to understand when it is an appropriate time to conduct an SLR on an SE research topic. In the first step, we used the number of reviewed papers and the review period covered by these papers as the indicators. In the next steps, we will investigate other data, e.g., the number of retrieved papers after applying the search string (assuming a good one), as an earlier indicator of whether an SLR is conducted in a timely manner. We also need to explore the factors that affect the size of SLR datasets, such as the number of libraries used in the search and the search strategies used (such as closed vs. open period). Additionally, we will collect more data about different facets of the dataset of an SLR, the distribution of the reviewed papers over years and venues, and the types of papers included in a dataset (conference or journal paper, research methodology used, and so on). We will explore the patterns in these data and relations among different facets. Another avenue for future work is to broaden our sample by collecting and analysing the SLRs published in SE conferences. By contrasting and comparing the SLRs published in these two different types of venues, we can improve the generalisability of our findings. Our study focused only on the quantitative SLR data. In the future, qualitative analysis can be conducted on SLRs. For example, one can investigate which SE topics have been systematically reviewed and published. One can also map the topics of SLRs to the SE knowledge areas <cit.> to provide a bigger picture of SE research and its change over time. This could help SE researchers to find the relevant SLRs on their topics and decide if an SLR on their topic is needed. Even though we focused on SLRs, we believe our research question is relevant to other literature review methods, such as systematic mapping studies or multivocal reviews. Therefore, researchers could replicate our approach to advance our knowledge in these related areas. § ACKNOWLEDGEMENT This work has been supported by ELLIIT, the Swedish Strategic Research Area in IT and Mobile Communications.
http://arxiv.org/abs/2307.04545v1
20230710132434
The Pairing-Hamiltonian property in graph prisms
[ "Marién Abreu", "Giuseppe Mazzuoccolo", "Federico Romaniello", "Jean Paul Zerafa" ]
math.CO
[ "math.CO", "05C76, 05C70, 05C45" ]
The Pairing-Hamiltonian property in graph prisms Marién Abreu Dipartimento di Matematica, Informatica ed Economia Università degli Studi della Basilicata, Italy [email protected] Giuseppe Mazzuoccolo Dipartimento di Scienze Fisiche, Informatiche e Matematiche Università degli Studi di Modena e Reggio Emilia, Italy [email protected] Federico Romaniello Dipartimento di Matematica “Giuseppe Peano" Università di Torino, Italy [email protected] Jean Paul Zerafa St. Edward's College, Triq San Dwardu Birgu (Città Vittoriosa), BRG 9039, Cottonera, Malta [email protected] 05C76, 05C70, 05C45 Let G be a graph of even order, and consider K_G as the complete graph on the same vertex set as G. A perfect matching of K_G is called a pairing of G. If for every pairing M of G it is possible to find a perfect matching N of G such that M ∪ N is a Hamiltonian cycle of K_G, then G is said to have the Pairing-Hamiltonian property, or PH-property, for short. In 2007, Fink [J. Combin. Theory Ser. B, 97] proved that for every d≥ 2, the d-dimensional hypercube 𝒬_d has the PH-property, thus proving a conjecture posed by Kreweras in 1996. In this paper we extend Fink's result by proving that given a graph G having the PH-property, the prism graph 𝒫(G) of G has the PH-property as well. Moreover, if G is a connected graph, we show that there exists a positive integer k_0 such that the k^th-prism of a graph 𝒫^k(G) has the PH-property for all k ≥ k_0. § INTRODUCTION The problem of extending perfect matchings of a graph to a Hamiltonian cycle has been first considered by Las Vergnas <cit.> and Häggkvist <cit.> in the 1970s. They both proved Ore-type conditions which ensure that every perfect matching of a graph having some initial conditions can be extended to a Hamiltonian cycle. Some years later, Kreweras <cit.> conjectured that any perfect matching of the hypercube 𝒬_d, d≥ 2, can be extended to a Hamiltonian cycle. This conjecture was proved in 2007 by Fink <cit.>. Actually, he proved a stronger version of the problem. Given a graph G, let K_G denote the complete graph on the same vertex set V(G) of G. Fink shows that every perfect matching of K_𝒬_d, and not only the perfect matchings of 𝒬_d, can be extended to a Hamiltonian cycle of K_𝒬_d, by using only edges of 𝒬_d. More in general, for a graph G of even order, a perfect matching of K_G is said to be a pairing of G. Given a pairing M of G, we say that M can be extended to a Hamiltonian cycle H of K_G if we can find a perfect matching N of G such that M ∪ N = E(H), where E(H) is the set of edges of H. A graph G is said to have the Pairing-Hamiltonian property (or, the PH-property for short), if every pairing M of G can be extended to a Hamiltonian cycle as described above. For simplicity, we shall also say that a graph G is PH if it has the PH-property. This notation was introduced in <cit.>, where amongst other results, a classification of which cubic graphs admit the PH-property was given: these are the complete graph K_4, the complete bipartite graph K_3,3, and the cube 𝒬_3. We remark that this was the first non-trivial classification of graphs (having regular degree) admitting the PH-property, as, the only 2-regular graph admitting the PH-property is the cycle on 4 vertices, which happens to be 𝒬_2. We also remark that there is an infinite number of 4-regular graphs having the PH-property (see <cit.>). Following such a terminology we can state Fink's result from <cit.> as follows. The hypercube 𝒬_d has the PH-property, for every d≥ 2. 
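To illustrate the definition just given, the PH-property can be checked by brute force for very small graphs; the following sketch (ours, not part of the original paper, and feasible only for graphs with few vertices) enumerates every pairing and tests whether it extends to a Hamiltonian cycle using a perfect matching of the graph itself.

```python
import networkx as nx

def pairings(vertices):
    """Yield every perfect matching of the complete graph on `vertices`."""
    vertices = sorted(vertices)
    if not vertices:
        yield []
        return
    first, rest = vertices[0], vertices[1:]
    for i, partner in enumerate(rest):
        for sub in pairings(rest[:i] + rest[i + 1:]):
            yield [(first, partner)] + sub

def has_ph_property(G):
    """Brute-force check of the PH-property (practical for small graphs only)."""
    n = G.number_of_nodes()
    for M in pairings(G.nodes()):
        extended = False
        for N in pairings(G.nodes()):
            # N must be a perfect matching of G itself.
            if not all(G.has_edge(u, v) for u, v in N):
                continue
            H = nx.Graph(M + N)
            # M union N must form a single cycle through all vertices.
            if (H.number_of_nodes() == n
                    and all(d == 2 for _, d in H.degree())
                    and nx.is_connected(H)):
                extended = True
                break
        if not extended:
            return False
    return True

print(has_ph_property(nx.hypercube_graph(3)))  # True: Q_3 has the PH-property
print(has_ph_property(nx.complete_graph(4)))   # True: K_4 has the PH-property
```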
Recall that the Cartesian product G □ H of two graphs G and H is a graph whose vertex set is V(G) × V(H), and two vertices (u_i, v_j) and (u_k, v_ℓ) are adjacent precisely if u_i = u_k and v_jv_ℓ∈ E(H), or u_iu_k ∈ E(G) and v_j = v_ℓ. Given a graph G, the prism operator 𝒫(G) consists of two copies G_1 and G_2 of G with the same vertex labelling as in G, and an edge between the vertices having the same label. Note that 𝒫(G)=G □ K_2, the Cartesian product of G with K_2. The result of a single application of the operator is usually called the prism graph 𝒫(G) of G (see <cit.>), and repeated applications shall be denoted by powers, with 𝒫^k(G) being the prism graph of 𝒫^k-1(G). If needed we shall assume that 𝒫^0(G)=G. It is worth noting that for d≥ 2, 𝒬_d=𝒫^d-2(𝒬_2). Hence, Theorem <ref> is equivalent to saying that for each k>0, 𝒫^k(𝒬_2) admits the PH-property. One might wonder whether it is possible to replace 𝒬_2 with some other initial graph. The main contribution of this paper is Theorem <ref>, which generalises Theorem <ref>. We obtain a much larger class of graphs with the PH-property by proving that for every graph G having the PH-property, the graph 𝒫^k(G) has the PH-property for each k≥0. Hence, Kreweras' Conjecture, and therefore Theorem <ref>, turn out to be special consequences of Theorem <ref> obtained starting from G=𝒬_2, which is trivially PH. Other results on this topic, dealing with the Cartesian product of graphs, were also obtained in <cit.> and <cit.>. In particular, we state the following theorem which shall be needed in Section <ref>. Let P_q be a path of length q. The graph P_q □ 𝒬_d admits the PH-property, for d ≥ 5. The above theorem is stated as Theorem 5 in <cit.>, where some other results apart from the statement above are proved. We use this result to obtain one of the same flavour for every connected graph G (see Theorem <ref>). More precisely, we prove that for every arbitrary connected graph G, the graph 𝒫^k(G) has the PH-property for a sufficiently large k, depending on the minimum number of leaves over all spanning trees of G. We refer the reader to <cit.> and <cit.> for other papers dealing with the Pairing-Hamiltonian property and related concepts under some graph operations. § GENERALISING FINK'S RESULT As stated in the introduction, this section will be devoted to generalising Theorem <ref>. Let G be a graph having the PH-property. Then, for each k≥0, 𝒫^k(G) admits the PH-property. Consider 𝒫(G) and let G_1 and G_2 be the two main copies of the graph G in 𝒫(G). Then, a pairing P of 𝒫(G) can be partitioned into three subsets P_1 ∪ P_2 ∪ X where: P_i={xy ∈ P | {x,y}⊂ V(G_i)}, for each i∈{1,2}; and X={xy ∈ P | x ∈ V(G_1), y ∈ V(G_2)}. Note that |X| ≡ 0 (mod 2) since each G_i admits the PH-property and so both are of even order. We shall distinguish between two cases: whether X is empty or not. Case 1. |X|=0. In this case, P=P_1 ∪ P_2. Since G_1 has the PH-property, there exists a perfect matching M of G_1 such that P_1 ∪ M is a Hamiltonian cycle of K_G_1. Let M' be the perfect matching of G_2 such that x'y' ∈ M' if and only if xy ∈ M. In other words, M' is the copy of M in G_2. We observe that P_2 ∪ M' consists of the union of cycles of even length, say C_1,… , C_t. Note that cycles of length 2 shall be allowed in the sequel as they arise when P_2 ∩ M' ≠∅. For each i ∈{1,…,t}, we choose an edge e_i'=x_i'y_i' ∈ M' ∩ C_i and we denote the corresponding edge in M by e_i=x_iy_i.
Consequently, the set N=(M ∖{ e_1,…, e_t}) ∪ (M' ∖{e'_1,…, e'_t}) ∪{ x_ix_i',y_iy_i' | i∈{1,…,t}} is a perfect matching of 𝒫(G) such that P ∪ N is a Hamiltonian cycle of K_𝒫(G). We note that the vertex x_i' in G_2 corresponds to the vertex x_i in G_1, see Figure <ref>. Case 2. |X|=2r>0. In this case we consider an analogous argument to the one used by Fink to prove Theorem <ref>. Since |X| ≠ 0, P_1 is a matching of K_G_1 which is not perfect, as there are 2r unmatched vertices. Let L be an arbitrary set of r edges of K_G_1 such that P_1 ∪ L is a pairing of G_1. Since G_1 has the PH-property, there exists a perfect matching M, of G_1, such that P_1 ∪ L ∪ M is a Hamiltonian cycle of K_G_1. Next we define the following set R = {x̄ȳ ∈ E(K_G_2) | ∃ x,y ∈ V(G_1) with {xx̄,yȳ}⊆ X and ∃ an (x,y)-path contained in P_1 ∪ M}, such that P_2 ∪ R is a pairing of G_2. Note that xx̄ and yȳ are edges in K_𝒫(G) since |X| ≠ 0, and their extremes might not be corresponding vertices in G_1 and G_2, as they were in the former case. Since G_2 has the PH-property there exists a perfect matching M' of G_2, such that P_2 ∪ R ∪ M' is a Hamiltonian cycle of K_G_2. It follows that P_1 ∪ P_2 ∪ X ∪ M ∪ M' is a Hamiltonian cycle of K_𝒫(G) in which M ∪ M' is a perfect matching of 𝒫(G), see Figure <ref>. This proves that 𝒫(G) has the PH-property and thus, by iterating the prism operator, the result follows. § CONVERGENCE OF GENERAL GRAPH PRISMS TO THE PH-PROPERTY In this section we show that given any connected graph G, there exists a sufficiently large integer k such that 𝒫^k(G) has the PH-property. In other words, after iterating the prism operator a sufficient number of times, the resulting graph will have the PH-property. We remark that if a graph contains a spanning subgraph admitting the PH-property, then the graph itself admits the PH-property. Hence, by Theorem <ref>, the next corollary follows. Let G be a traceable graph. For k ≥ 5, the graph 𝒫^k(G) has the PH-property. Recall that a traceable graph is a graph admitting a Hamiltonian path. Next, we show that starting from an arbitrary connected graph G, we can always obtain a traceable graph by iterating the prism operator a suitable number of times. For this purpose, we need the following definition and lemma. Let G be a connected graph. The minimum leaf number of G, denoted by ml(G), is the minimum number of leaves over all spanning trees of G. Clearly, for any connected graph G, ml(G)≥ 2, and ml(G)=2 if and only if G is traceable. Let G be a connected graph with ml(G) >2. Then, ml(G) > ml(𝒫(G)). Suppose that ml(G) =t>2 and let G_1 and G_2 be the two copies of G in 𝒫(G). Let R_1,R_2 be two copies of a spanning tree of G with t leaves in G_1 and G_2, respectively. Let S={e_0,e_1,…,e_t-1} be the set consisting of the t edges which connect a leaf of R_1 to the corresponding leaf of R_2. Consequently, we have that T_0=(R_1 ∪ R_2) + e_0 is a spanning tree of 𝒫(G) with 2t-2 leaves. Moreover, T_0+e_1 has exactly one cycle, say C_1. Since ml(G) >2, C_1 is a proper subgraph of T_0 +e_1 and there exists a vertex v of C_1 such that deg_T_0+e_1(v) >2. We note that the removal of an edge of C_1, say f_1, which is incident to v gives rise to a spanning tree T_1=T_0+e_1-f_1 of 𝒫(G) with at most 2t-3 leaves. Then, for every j∈{2,…, t-1}, starting from j=2 and continuing consecutively up to t-1, we choose an edge f_j from E(T_j-1+e_j) lying on the unique cycle in T_j-1+e_j and incident to a vertex of degree at least 3 in T_j-1+e_j.
We then let T_j to be equal to T_j-1+e_j-f_j, which by a similar argument to the above is a spanning tree of 𝒫(G) with at most 2t-2-j leaves. Therefore, T_t-1 has at most t-1 leaves and ml(𝒫(G)) ≤ t-1 < ml(G). From the above statements, it is easy to obtain the following result. Let G be a connected graph. Then, 𝒫^k(G) is traceable for all k ≥ml(G)-2. If we start from G and apply the prism operator ml(G)-2 times, by Lemma <ref>, the graph 𝒫^ml(G)-2(G) has ml(𝒫^ml(G)-2(G))=2. Consequently, it admits a Hamiltonian path. Combining Theorem <ref> and Proposition <ref> we obtain the following. Let G be a connected graph with m=ml(G), then 𝒫^m+3(G) has the PH-property. If G is traceable, then m=2, and so, from Theorem <ref> we have that 𝒫^5(G) has the PH-property. On the other hand, if G is not traceable, then m>2. By Theorem <ref>, the graph 𝒫^m-2(G) is traceable. Hence, by Theorem <ref>, 𝒫^m-2(𝒫^5(G))=𝒫^m+3(G) admits the PH-property. § FINAL REMARKS Several open problems were posed in <cit.>. In particular, proving that the graph P_q 𝒬_d has the PH-property for d=3,4 and an arbitrary q is still open. It is dutiful to note that we are aware that in case of a positive answer, Theorem <ref> should be refined accordingly. A much more ambitious problem is to wonder whether it is enough for two graphs G and H to have the PH-property, for G H to have the PH-property as well. This latter question seems very difficult to prove. Here, we have shown, in Theorem <ref>, that it holds when H is the hypercube, which is an iteration of the prism operator. In Theorem <ref>, we see that even if G does not have the PH-property, but is traceable, a large enough number of iterations of the prism operator make it converge to a graph with the PH-property. As a matter of fact, we can define the parameter 𝔭(G) as the smallest positive integer 𝔭=𝔭(G) such that 𝒫^𝔭(G) admits the PH-property. It trivially follows that 𝔭(G)=0 if and only if G is PH. Henceforth, the parameter 𝔭(G) can be considered as a measure of how far a graph G is from having the PH-property, with respect to the prism operator. Determining the behaviour of 𝔭(G) for some special classes of graphs could be of interest in the study of the PH-property. We could also wonder if there are other graphs that speed up the convergence to the PH-property under the Cartesian product, or on the other hand if there are other products under which the convergence to the PH-property is faster. It seems so if we consider the strong product of graphs. The strong product G ⊠ H is a graph whose vertex set is the Cartesian product V(G) × V(H) of V(G) and V(H), and two vertices (u_i, v_j), (u_k, v_ℓ) are adjacent if and only if they are adjacent in G H or if u_i,u_k∈ E(G) and v_j,v_ℓ∈ E(H). It is trivial that G H is a subgraph of G ⊠ H; hence, if G H has the PH-property, then G ⊠ H will inherit the same property as well. A result from <cit.> on accordion graphs easily implies that in the case of Hamiltonian graphs, only one occurrence of the strong product with K_2 is enough to obtain a graph with the PH-property. Let G be a Hamiltonian graph, then G ⊠ K_2 has the PH-property. This suggests that the strong product may have a faster convergence to the PH-property than the Cartesian product also for general graphs. 999 AGZ-Rook M. Abreu, J.B. Gauci and J.P. Zerafa, Saved by the rook: a case of matchings and Hamiltonian cycles, Contrib. Discrete Math. (2023), accepted. AAAHST A. Alahmadi, R.E.L. Aldred, A. Alkenani, R. Hijazi, P. Solé and C. 
Thomassen, Extending a perfect matching to a Hamiltonian cycle, Discrete Math. Theor. Comput. Sci., 17(1) (2015), 241–254. PrismGraphs R.E.L. Aldred and M.D. Plummer, Matching extension in prism graphs, Discrete Appl. Math., 221 (2017), 25–32. Fink J. Fink, Perfect matchings extend to Hamilton cycles in hypercubes, J. Combin. Theory Ser. B, 97 (2007), 1074–1076. accordions J.B. Gauci and J.P. Zerafa, Accordion graphs: Hamiltonicity, matchings and isomorphism with quartic circulants, Discrete Appl. Math. 321 (2022), 126–137. GauZer J.B. Gauci and J.P. Zerafa, Perfect Matchings and Hamiltonicity in the Cartesian Product of Cycles, Ann. Comb. 25 (2021), 789–796, https://doi.org/10.1007/s00026-021-00548-1https://doi.org/10.1007/s00026-021-00548-1. Hag R. Häggkvist, On F-Hamiltonian graphs, in: J.A. Bondy, U.S.R. Murty (eds.), Graph Theory and Related Topics, Academic Press, New York, 1979, 219–231. Kre G. Kreweras, Matchings and Hamiltonian cycles on hypercubes, Bull. Inst. Combin. Appl. 16 (1996), 87–91. LasVergnas M. Las Vergnas, Problèmes de couplages et problèmes hamiltoniens en théorie des graphes, Thesis, University of Paris 6, Paris, 1972. betwixt F. Romaniello and J.P. Zerafa, Betwixt and between 2-Factor Hamiltonian and Perfect-Matching-Hamiltonian Graphs, Electron. J. Combin. 30(2) (2023), #P2.5.
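To complement the preceding results, the following small Python sketch (ours, not the authors'; all names are ad hoc) builds the prism 𝒫(G) of a graph given as an adjacency dictionary and checks, for small d, that the iterated prism 𝒫^d-2(𝒬_2) has the vertex count, edge count and regularity of the hypercube 𝒬_d:

def prism(G):
    # P(G) = G x K_2: two labelled copies of G, plus an edge joining the
    # two copies of each vertex.
    P = {}
    for side in (0, 1):
        for v, nbrs in G.items():
            P[(v, side)] = {(u, side) for u in nbrs}
    for v in G:
        P[(v, 0)].add((v, 1))
        P[(v, 1)].add((v, 0))
    return P

def iterated_prism(G, k):
    for _ in range(k):
        G = prism(G)
    return G

# Q_2 is the 4-cycle; P^(d-2)(Q_2) should agree with the d-dimensional hypercube.
Q2 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
for d in range(2, 8):
    H = iterated_prism(Q2, d - 2)
    n = len(H)
    m = sum(len(nbrs) for nbrs in H.values()) // 2
    assert n == 2 ** d and m == d * 2 ** (d - 1)       # |V(Q_d)| and |E(Q_d)|
    assert all(len(nbrs) == d for nbrs in H.values())  # Q_d is d-regular

Verifying the PH-property itself is much more demanding, since it quantifies over all pairings of the vertex set, which is precisely why structural results such as those above are useful.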
http://arxiv.org/abs/2307.04270v1
20230709215839
A Complete Finite Equational Axiomatisation of the Fracterm Calculus for Common Meadows
[ "Jan A Bergstra", "John V Tucker" ]
cs.LO
[ "cs.LO" ]
<foo DisplayAlgebra DisplaySignature DisplayEquations [1]text Common Meadows Informatics Institute, University of Amsterdam, Science Park 900, 1098 XH, Amsterdam, The Netherlands [email protected] Department of Computer Science, Swansea University, Bay Campus, Fabian Way, Swansea, SA1 8EN, United Kingdom [email protected] A Complete Finite Equational Axiomatisation of the Fracterm Calculus for Common Meadows Jan A Bergstra1 John V Tucker2 August 12, 2023 =========================================================================================== We analyse abstract data types that model numerical structures with a concept of error. Specifically, we focus on arithmetic data types that contain an error flag whose main purpose is to always return a value for division. To rings and fields we add a division operator x/y and study a class of algebras called common meadows wherein x/0 =. The set of equations true in all common meadows is named the fracterm calculus of common meadows. We give a finite equational axiomatisation of the fracterm calculus of common meadows and prove that it is complete and that the fracterm calculus is decidable. arithmetical data type, division by zero, error flag, common meadow, fracterm, fracterm calculus. § INTRODUCTION Arithmetical structures have deep mathematical theories exploring their abstract axiomatisations, concrete representations, comparisons by homomorphisms, use in constructions, methods of equation solving, etc. For example, the naturals form commutative semirings, the integers form commutative rings, and the rationals, reals and complex numbers form fields. However, for computing, their classical algebraic theories have some shortcomings. Computing with arithmetical structures requires us to make abstract data types with extra algebraic properties that arise from the semantics of algorithms and programs. In practical computation, the application of an operator must return a value, i.e., must be a total operator. For this reason arithmetical structures in computing can have various special elements that flag special behaviour; the most obvious examples are error flags, such as a pocket calculator displays when trying to compute 1/0 or when having an overflow. Floating point arithmetics employ several more flags, such as infinities +∞, -∞ and `not a number' 𝖭𝖺𝖭. Surprisingly, not much is known about the algebraic theories of these augmented structures whose semantical features are deemed practically essential for arithmetical abstract data types. What has been known, at least since von Neumann and Goldstine's 1947 analysis of numerics, is that computer arithmetics do not satisfy the beautiful axioms of classical algebra <cit.>. §.§ Common meadows In <cit.>, we began to investigate semantic aspects of computer arithmetic using the theory of abstract data types. Using the equational methods characteristic of the theory, we have studied several semantic options for undefined operators and overflows, often focussing on data types of rational numbers (we sketch some of this programme later, in section <ref>). In this paper we consider the class of all arithmetical data types called common meadows, which have the form (F ∪{} | 0, 1, , x+y, -x, x · y, x/y) where F is a field and is an element that behaves like an error flag. Following <cit.>, we use the term meadow for any field equipped with division, or inverse, as an operation. The idea of a common meadow was introduced in <cit.>. The class of all common meadows is denoted 𝖢𝖬. 
Common meadows are built from fields by adding error and division, as follows. Given any field F, we extend its domain with a new element which is absorptive, which means for all x ∈ F, x + = , x · = , and - = . This gives us the enlarged field-like structure 𝖤𝗇𝗅_(F), using the general methods of <cit.>. The addition of disturbs the classical algebra of fields as standard properties can fail, e.g., x - x ≠ 0 because - = and x · 0 ≠ 0 because · 0 = . We will explore the effect of and show that, surprisingly, many familiar laws can be preserved or rescued. With installed, we can extend 𝖤𝗇𝗅_(F) with a total division function x/y, also written x/y, and defined by: x/y = if y=0, y = or x=; otherwise, x/y = x · y^' where y^'∈ F is the unique element for which y · y^'= 1 in F. This algebra we denote 𝖤𝗇𝗅_(F(_/_)) and is a common meadow. With these constructions introduced, we can now turn to the main theorem of the paper, for which we need to be very precise about the syntax of rings, fields and common meadows. The syntax is determined by choosing signatures that contain names for the constants and operations. We need several: Σ_r for rings and fields; Σ_r, for rings and fields with ; Σ_m for meadows; and Σ_cm for common meadows. We will use terms and equations over these signatures. §.§ Fracterm calculus for common meadows The importance of the field of rational numbers for computing influences our use rings and fields in developing data types. In addition to focussing on division as a total function, we highlight the idea of a fraction – the primary representation of rationals in practice – adapting it to the abstract setting of meadows. Although fractions are not well-defined notions, the idea can be made perfectly precise using the syntax of the signature containing division. In general, a fracterm is a term over the meadow signature Σ_m whose leading function symbol is division. Fracterms were introduced in <cit.>, and a full motivation for the use of this syntax and terminology is given in <cit.>. The equational theory of common meadows is the set FC(𝖢𝖬) = { e | ∀ A ∈𝖢𝖬. A e } of all equations over Σ_cm that are true in all common meadows; we call the this the fracterm calculus for common meadows. The objective of the paper is to develop enough theory to prove the following new result (Theorem <ref> below). Theorem. There is a finite equational axiomatisation E_𝖿𝗍𝖼-𝖼𝗆 that is sound for the class 𝖢𝖬 of common meadows and complete w.r.t. equational logic for the fracterm calculus FC(𝖢𝖬) for common meadows, i.e., for any equation e over Σ_cm, E_𝖿𝗍𝖼-𝖼𝗆⊢ e if, and only if, e ∈ FC(𝖢𝖬). In the language of logic, the equational theory of commmon meadows is finitely based. Furthermore: Corollary. The fracterm calculus for common meadows is algorithmically decidable. The class of all fields is classically definable by finitely many first order axioms, allowing negation; but it is not definable by any set of equations or conditional equations as they do not form a variety in the sense of Birkhoff's Theorem, or a quasivariety in the sense of Mal'tsev's Theorem (e.g., they are not closed under products) <cit.>. Equations, and conditional equations, are the preferred forms of axioms for data types, especially if they have good term rewriting properties <cit.>; they are a basic component for specification and verification tools. Seeking equational axiomatisations of arithmetical data types is a technical programme for which this paper is something of a milestone. 
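To make the construction concrete, the following Python sketch (ours, not the authors'; the absorptive error element is written BOT here) models the common meadow of rationals obtained by enlarging the field of rationals with an error element and a total division in which division by zero returns that element:

from fractions import Fraction

BOT = object()  # the absorptive error element adjoined to the field

def absorptive(op):
    # any operation returns the error element as soon as one argument is it
    def total_op(*args):
        if any(a is BOT for a in args):
            return BOT
        return op(*args)
    return total_op

add = absorptive(lambda x, y: x + y)
neg = absorptive(lambda x: -x)
mul = absorptive(lambda x, y: x * y)

def div(x, y):
    # total division: x divided by 0 is the error element, which also absorbs
    if x is BOT or y is BOT or y == 0:
        return BOT
    return x / y

one, zero = Fraction(1), Fraction(0)
assert div(one, zero) is BOT                    # 1/0 = error
assert add(Fraction(7), div(one, zero)) is BOT  # error propagates through +
assert add(one, neg(one)) == zero               # ordinary field behaviour
assert mul(zero, BOT) is BOT                    # 0 * error = error, not 0

As the last assertion illustrates, familiar identities such as x · 0 = 0 are lost once the error element may occur as a value, which is why the axioms given later restate them in a balanced form.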
Common meadows have emerged as a mathematically sound and tractable data types semantics for computer arithmetic. Our theorem improves on earlier axiomatisations and on a partial completeness result for common meadows given in <cit.>, based on fields with characteristic 0. Complementing our theorem is the fact that our axiomatisation E_𝖿𝗍𝖼-𝖼𝗆 does not prove all conditional equations even for characteristic 0, <cit.>: Question. Does the conditional equational theory of common meadows have a sound and complete finite conditional equation axiomatization? §.§ Structure of the paper We begin with preliminaries that recall basic ideas about abstract data types in Section <ref>, and rings, fields and common meadows in Section <ref>. Polynomials play a central role in arithmetical structures and so transitions between standard polynomials and syntactic polynomials for fields and common meadows are established in Section <ref>. In Section <ref> we use the ideas and results we have accumulated to prove the theorems. We discuss technical matters arising and some background to the study of totalisation in Section <ref>. § PRELIMINARIES ON DATA TYPES The theory of abstract data types starts from four basic concepts as follows. An implementation of a data type is modelled by a many-sorted algebra A of signature Σ. A signature Σ is an interface to some (model of an) implementation of the data type, and the constants and operations declared in Σ provide the only means of access to the data for the programmer. Axiomatisations of the operations in a signature define a range of implementations and provide the only means for the programmer to reason about the data. Two implementations of an interface are equivalent if, and only if, their algebraic models are isomorphic. The theory of arithmetic data types we are developing here is shaped by these and the following following general concepts. §.§ Terms and equations That signatures model interfaces establishes an essential role for the syntax of terms and equations in the theory abstract data types. Let Σ be any signature. Let X be any countable set of variables. Let T(Σ) and T(Σ, X) be the algebras of all closed or ground terms over Σ, and open terms with variables in X, respectively. Given a Σ-algebra A, and a valuation σ for variables in a term t ∈ T(Σ, X), the result of evaluating t in A using σ is denoted t _σ. An equation is a formula of the form e ≡ t(x_1, … , x_k) = t'(x_1, … , x_k) where t(x_1, … , x_k), t'(x_1, … , x_k) are terms over Σ with variables from the list x_1, … , x_k ∈ X of all the variables in e – the terms t and t' need not have the same variables. Set Eqn(Σ,X) to be the set of all equations over Σ with variables taken from X. An equation e ≡ t = t' ∈ Eqn(Σ,X) is valid in the algebra A, written A e, if for all valuations σ of variables of e, t _σ = t' _σ. The equation e is valid in a class 𝖪 of Σ-algebras, written 𝖪 e, if it is valid in every algebra in 𝖪. Given E ⊂ Eqn(Σ,X), we use equational logic for reasoning and write E ⊢ e if e can be deduced from E. Let 𝖪 be a class of Σ-algebras. An axiomatisation of the algebras of 𝖪 by a set E of equations is sound w.r.t. equational logic for 𝖪 if for all e ∈ E, if E ⊢ e then 𝖪 e. Conversely, the axiomatisation E is complete w.r.t. equational logic for 𝖪 if for all e ∈ E, if 𝖪 e then E ⊢ e. In trying to axiomatise a given class 𝖪 of structures soundness is necessary. 
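As a toy illustration of these notions (our own sketch; the term encoding and names are ad hoc), terms can be represented as nested tuples and the validity of an equation in a finite algebra decided by brute force over all valuations, which is how soundness of a candidate axiom can be tested against a single small structure:

from itertools import product

# a term is a variable name or (op, subterm, ...); an algebra is a carrier
# together with an interpretation of each operation symbol
def evaluate(term, algebra, valuation):
    if isinstance(term, str):
        return valuation[term]
    op, *args = term
    return algebra["ops"][op](*(evaluate(a, algebra, valuation) for a in args))

def valid(lhs, rhs, variables, algebra):
    # A |= lhs = rhs : the two terms agree under every valuation
    carrier = algebra["carrier"]
    for values in product(carrier, repeat=len(variables)):
        valuation = dict(zip(variables, values))
        if evaluate(lhs, algebra, valuation) != evaluate(rhs, algebra, valuation):
            return False
    return True

# the two-element field F_2 as a finite test structure
F2 = {"carrier": [0, 1],
      "ops": {"+": lambda x, y: (x + y) % 2, "*": lambda x, y: (x * y) % 2}}

assert valid(("+", "x", "y"), ("+", "y", "x"), ["x", "y"], F2)      # commutativity
assert not valid(("*", "x", "y"), ("+", "x", "y"), ["x", "y"], F2)

Such exhaustive checking settles validity only in one finite structure at a time, and therefore bears on soundness rather than on completeness.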
However, for a given class 𝖪 of structures completeness is more complicated and, in fact, rare; for example, if the class 𝖪 is a particular algebra (unique up to isomorphism), such as some algebra of rational numbers. In algebraic practice, many classes studied consist of just the algebras that satisfy an interesting set of axioms. However, these classes that are defined by axioms arise from encounters with structures, and the search for an axiomatisation and its study is a method for discovering the essential properties of these structures. Let 𝖪 be a class of Σ-algebras. The set EqnThy(𝖪) = { e | ∀ A ∈𝖪 . A e } is called the equational theory of 𝖪. §.§ Data types and their enlargements by The properties of interest to abstract data types are isomorphism invariants – typical examples are properties that are definable by first order formulae and forms of computability. This means that if a property is true of any data type A, and is an isomorphism invariant, then the property will be true of its abstract data type. For more of the general theory of abstract data types see <cit.>. Our algebras will be single-sorted and have a non-empty carrier so we will use a simple notation for data types. For instance, (A | c_1, …, c_k, f_1,…, f_l) denotes a data type with domain A and constants c_1, ...,c_k from A, and functions f_1,..,f_k, where it is assumed that arities for the functions on A are known from the context. A Σ-algebra A is Σ-minimal if it is generated by the constants and operations named in its signature Σ. A data type is a Σ-minimal algebra. An abstract data type is an isomorphism class of a data type. Algebras can be expanded by adding new constants and operations to their signature. Algebras can be extended by adding new elements to their carriers. Combining expansions and extensions in some order comprises what we call enlargements of an algebra. Consider the following general method of enlarging an algebra with . Consider the algebra (A | c_1, …, c_k, f_1,…, f_l) of signature Σ. Suppose ∉ A and let Enl_(A) = (A ∪{} | c_1, …, c_k, , f_1,…, f_l) wherein is (i) absortive, i.e., if is an argument to an operation f then the result is ; and (ii) totalising, i.e., if any operation f is undefined in A then it returns a value in Enl_(A). Let Σ_ = Σ∪{} be the signature of Enl_(A). If the algebra A is total then f returns if, and only if, one of its arguments is . We can adapt some equational axioms true of A to accommodate by using this idea: An equation t = t' is a balanced equation if the terms t and t' have the same variables. Their key property is this: Let A be a Σ algebra and let t = t' be a balanced equation. Then, A t = t' if, and only if, Enl_(A) t = t'. § PRELIMINARIES ON ARITHMETIC STRUCTURES In the arguments that follow, we will move between the algebra of rings, fields (with and without ) and common meadows. §.§ Rings and fields and common meadows We start from the theory of commutative rings and fields. A commutative ring with 1 is an algebra of the form (R | 0, 1, x+y, -x, x · y). A field F is a commutative ring with 1 in which each non-zero element x ∈ F has an inverse y ∈ F, i.e., x · y = 1. Note rings and fields have the same three operations. Let Σ_r be a signature for rings and fields. All our rings will be commutative with 1. Let be a ring of integers and let be a field of rational numbers containing the subring . We add to a ring R by applying the enlargement of Definition <ref> to make the algebra Enl_(R) = (A ∪{} | 0, 1, , x+y, -x, x · y) with signature Σ_r,. 
The same construction applied to a field F yields Enl_(F). The point of adding is to manage division. §.§ Meadows and common meadows To fields we add division to make a meadow. A meadow is a partial algebra F( _/_) obtained as an expansion of a field with a division function _/_ that works as usual on non-zero elements of the domain of F. Let Σ_m = Σ_r∪{_/_}. To totalise division, we add to a meadow F( _/_) by applying the enlargement of Definition <ref>: A common meadow is a total algebra Enl_(F( _/_)) = (F ∪{} | 0, 1, , x+y, -x, x · y, x/y) with signature Σ_cm = Σ_m,. Thus, we have a field F equipped with a division function _/_ that has been made total by having x/0 = for all x, including . Equivalent designs for meadows and common meadows can be based on inverse as a primitive, an approach that was taken in <cit.>. Recall that to qualify as a data type, an algebra must be minimal, i.e., generated by its constants and operations. Now, if F is a finite prime field then 𝖤𝗇𝗅_(F) is minimal, while for all other fields F – especially the rationals – the algebra is non-minimal and is not a data type for that reason. Division is needed to make the classical field of rational numbers a data type: The common meadow Enl_(( _/_)) of rationals is Σ_cm-minimal and hence qualifies as a data type. Recalling an observation made in <cit.>, we summarise the constuction: Every field F can be enlarged to a common meadow Enl_(F( _/_)) that is unique with respect to isomorphisms that fix the field F. If F is a computable field then Enl_(F( _/_)) is a computable common meadow. It is easy to see that the extension of F by is computable. Division is partial on F, but its set { (x, 0) | x ∈ F } of undefined arguments is computable, for which the value for divisions can be computed. See, e.g., <cit.> for methods to express this argument in detail. Applying the definitions of equations in Section <ref> we have: The fracterm calculus of common meadows is the set FC(𝖢𝖬) = { e ∈ Eqn(Σ_cm) | ∀ A ∈𝖢𝖬. A e } of all equations made of Σ_cm-terms that are true in all common meadows. §.§ Polynomial sumterms For the next steps in preparing for the proof, we need some syntactic theory of polynomials adapted to the presence of in rings and fields and, later, to working with division in common meadows. A sumterm is a Σ_r term with _+_ as its leading function symbol. A pure product term is a Σ_r term containing only multiplications _·_. A flat sumterm is a sum of pure product terms, where sums may have an arbitrary length (assuming the associativity of addition). Let Eqn(Σ_r) denote the set of equations made from terms over Σ_r. Now since Σ_r⊂Σ_r,⊂Σ_cm these ring terms and equations are destined to play a special role in the theory of common meadows: they are the simple terms and equations over Σ_cm that do not involve or division. Let SumEqn(Σ_r) ⊂ Eqn(Σ_r) be the set of all equations whose terms are sumterms. The sumterm calculus of common meadows is the set SumC(𝖢𝖬) = { e ∈ SumEqn(Σ_r) | ∀ A ∈𝖢𝖬. A e } of all sumterm equations true in all common meadows. §.§ Equational specifications with Consider the set E_𝗐𝖼𝗋, of equational axioms over Σ_r, in Table <ref>. (x+y)+z = x + (y + z) x+y = y+x x+0 = x x + (-x) = 0 · x x · (y · z) = (x · y) · z x · y = y · x 1 · x = x x · (y+ z) = (x · y) + (x · z) -(-x) = x 0 · (x + y) = 0 · (x · y) x + = E_𝗐𝖼𝗋,: equational axioms for weak commutative rings with Notice these equations are close to the equational axioms of commutative rings. 
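These axioms can be spot-checked mechanically. The sketch below (ours; it rebuilds a small model of the enlarged rationals rather than using any code of the paper, and writes the absorptive element as BOT) samples random arguments, including the error element, and tests a few of the axioms in the table, among them the balanced inverse law x + (-x) = 0 · x:

import random
from fractions import Fraction

BOT = object()  # the adjoined absorptive element

def lift(op):
    return lambda *a: BOT if any(x is BOT for x in a) else op(*a)

add = lift(lambda x, y: x + y)
mul = lift(lambda x, y: x * y)
neg = lift(lambda x: -x)

def sample():
    return BOT if random.random() < 0.2 else Fraction(random.randint(-5, 5))

def eq(a, b):
    return (a is BOT and b is BOT) or (a is not BOT and b is not BOT and a == b)

for _ in range(1000):
    x, y = sample(), sample()
    assert eq(add(x, y), add(y, x))                                      # x+y = y+x
    assert eq(add(x, neg(x)), mul(Fraction(0), x))                       # x+(-x) = 0*x
    assert eq(mul(Fraction(0), add(x, y)), mul(Fraction(0), mul(x, y)))  # 0*(x+y) = 0*(x*y)
    assert eq(add(x, BOT), BOT)                                          # x + error = error

Passing such random checks says nothing beyond the sampled instances, of course; the point is only to see the axioms in action on the intended model.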
The eight equations for commutative rings in Table <ref> that are intact are balanced equations (Lemma <ref>). The two axioms (4) and (10) are adjusted to the presence of . For example, the unbalanced equation x + (-x) = 0 is replaced by the balanced x + (-x) = 0 · x, which is valid for x =. Axiom (11) introduces , from which the absorption axioms for · and - can be derived from E_𝗐𝖼𝗋,. We call these axioms for weak commutative rings. The equations E_𝗐𝖼𝗋, in Table <ref> are a finite axiomsatistion that is complete for the (i) sumterm calculus for rings equipped with ; (ii) sumterm calculus for fields equipped with ; and (iii) sumterm calculus for common meadows. The validity of these axioms in all structures of the form 𝖤𝗇𝗅_(F( _/_)), for a field F, is easy to check by inspection. Hence, the axioms are sound for SumC(𝖢𝖬). By Proposition 2.3 of <cit.>, the equations E_𝗐𝖼𝗋, of Table <ref> provide a complete axiomatisation of the equational theory of the class of structures obtained as 𝖤𝗇𝗅_(R) for some ring R. It is an immediate corollary of the proof of Proposition 2.3 in <cit.> that contemplating a smaller class of structures by requiring that R is a field allows the conclusion to be drawn for 𝖤𝗇𝗅_(F): In the final lines of that proof, instead of considering a ring of integers one may use, to the same effect, a field of rationals. Since the sum terms and equations do not involve division, completeness holds 𝖤𝗇𝗅_(F( _/_)). In Section <ref>, we build the equations of common meadows by axiomatising _/_ on top of this set E_𝗐𝖼𝗋,. § STANDARD POLYNOMIALS AS SYNTACTIC TERMS OVER COMMON MEADOWS Working with standard polynomials over rings and fields does not involve syntax or . Here we collect some results on standard polynomials over fields and, in particular, (i) formalise syntactic terms for standard polynomials and (ii) establish a two-way transformation between standard polynomials and their formal syntactic counterparts. §.§ Properties of standard polynomials Consider the polynomial rings [X_1,…,X_n] ⊆[X_1,…,X_n]. We need to distinguish specific types of multivariate polynomials. In particular, each value s ∈, including 0, counts as a polynomial. However, coefficients of polynomials must be non-zero, so 0 · X_1 · X_2 will not be considered a polynomial in [X_1,X_2]. A polynomial p in [X_1,…,X_n] is primitive if the gcd of its coefficients equals 1. Let be an arbitrary but fixed algebraic closure of the field . Suppose p and q are polynomials in [X_1,…,X_n] which take value 0 at the same argument vectors in ^n, then p and q have the same irreducible polynomials as factors (up to constant factors in ), in the ring [X_1,…,X_n]. This follows by repeated application of the Nullstellensatz (e.g., <cit.>, Ch. IX, Theorem 1.5) and unique factorization (e.g., <cit.>, Ch. IV, Corollary. 2.4). (Lemma of Gauss.) Consider a polynomial p ∈[X_1,…,X_n]. Suppose that p is non-zero and has a factorisation p = r_1 · r_2 in [X_1,…,X_n]. Then for some rational numbers c_1,c_2 ∈, p = c_1 · r_1 · c_2 · r_2 and the polynomials c_1 · r_1 and c_2 · r_2 are in [X_1,…,X_n]. Suppose that a non-zero primitive polynomial p ∈[X_1,…,X_n] has a factorisation p = r_1 ·…· r_m with r_1,…,r_m irreducible polynomials in [X_1,…,X_n]. Then the multiset {r_1, …, r_m} of polynomials, modulo the sign thereof, is unique. Suppose α and β are primitive non-zero polynomials in the ring [X_1,…,X_n] with the property that α and β take value 0 on the same argument vectors in . 
Then there are primitive irreducible polynomials γ_1,…,γ_m ∈[X_1,…,X_n] and positive natural numbers a_1,…,a_n, b_1,…,b_m such that in [X_1,…,X_n], α = γ_1^a_1·…·γ_n^a_m and β = γ_1^b_1·…·γ_n^b_m. If in it is the case that α and β vanish on the same arguments both have the irreducible factors, say γ_1,…,γ_m over [X_1,…,X_n]. Using Proposition <ref> these irreducible polynomials may be chosen in [X_1,…,X_n], and with Proposition <ref> one finds that, viewed as a set, said collection of polynomials is unique modulo the sign of each polynomial. In the proof below only Proposition <ref> will be used. §.§ Polynomial sumterms in the setting of common meadows The step from the ordinary algebra of rings and fields to working in common meadows is not difficult, but it involves some details. The key syntactic idea is a special sumterm called a polynomial sumterm over Σ_r, and hence over our other signatures, which will work like a standard polynomial in conventional algebra. To replicate in syntax the various standard polynomials, we begin with choosing sets of numerals, which are closed terms for denoting the naturals, integers and rationals. Numerals for natural numbers are: 0,1,2, 3,… where 2≡ 1+1, 3≡2 +1, … In general: n+1≡n+1. (The precise definition of numerals is somewhat arbitrary and other choices are equally useful.) For integers we will have terms of the form -n with n>0. We will use the notation n for an arbitrary integer, thus 0≡ 0, 1≡ 1 and for positive n, -n≡ - (n). For rational numbers, we have terms of the form n/m and -n/m with n>0, m>0 and (n,m) = 1. In this way, for each a ∈ we have a unique numeral t_a such that t_a = a in . We build the polynomial sumterms in stages. A pure monomial is a non-empty product of variables (understood modulo associativity and commutativity of multiplication). A monomial is a product c · p with c a non-zero numeral for a rational number and p a pure monomial. We will assume that pure monomials are written in a uniform manner mentioning the variables in the order inherited from the infinite listing X_1,X_2,… with powers expressed as positive natural numbers (where power 1 is conventionally omitted). Recalling Definition <ref> of sumterms: A polynomial sumterm p is a flat sumterm for which (i) all summands involve pairwise different pure monomials, and (ii) none of the coefficients is 0, unless p ≡ 0. Clearly, 0 is a polynomial sumterm while is not a polynomial sumterm as polynomial sumterms are terms over Σ_r: Given polynomial sumterms p and q, Enl_() p=q if, and only if, E_𝗐𝖼𝗋,⊢ p = q. This is an immediate corollary of the proof of Theorem 2.1 in <cit.>. §.§ Transitions between standard polynomials and polynomial sumterms We now turn to the relationship between standard polynomials and polynomial sumterms. Upon evaluation of the numerals that serve as its coefficients, a polynomial sumterm p with variables in X_1,…,X_n can be understood as a standard polynomial p' in the ring [X_1,…,X_n]. Thus, we have the translation: p ↦ p'. Conversely, a polynomial α∈[X_1,…,X_n] can be written as a polynomial sumterm α^⋆ by turning all coefficients in into the corresponding numerals. Thus, we have the translation: α↦α^⋆. Given polynomial sumterms p and q involving the same variables and a ring R, the following equivalence holds: Enl_(R) p = q if, and only if, p' = q' in R. Moreover, the following observation can be made, which, however, critically depends on the assumption that all coefficients of a polynomial are non-zero. 
For all polynomials α and β: α = β in R if, and only if, Enl_(R) α^⋆ = β^⋆. For all polynomials α and β: α = β in if, and only if, Enl_() α^⋆ = β^⋆ if, and only if, E_𝗐𝖼𝗋,⊢α^⋆ = β^⋆. This follows by combining Proposition <ref> with Proposition <ref>. Properties of polynomial sumterms and standard polynomials correspond as follows: (i) p is non-zero ⟺ p' is non-zero, (ii) p has degree n ⟺ p' has degree n, (iii) p is irreducible ⟺ p' is irreducible, (iv) p is primitive ⟺ p' is primitive, (v) q is a factor of p ⟺ q' is a factor of p', (vi) any polynomial sumterm p can be written as a· q for a non-zero integer a and a primitive polynomial sumterm q. §.§ Quasi-polynomial sumterms Consider, for instance, the Σ-terms x and x + 0 · y. On evaluating in a commutative ring R, these terms over Σ_r define the same functions, but they do not do so in the enlargement Enl_(R) as they take different values upon choosing x=0, y =. Thus, the terms need to be distinguished: since 0 usefully occurs as a coefficient in a polynomial when working with . We will work with a second kind of polynomial sumterm in order to make these issues explicit. A quasi-polynomial sumterm p is either (i) a polynomial sumterm, or (ii) a monomial of the form 0 · r with r a pure monomial with all its variables in r occurring in the first power only, or (iii) the sum q + 0 · r of a polynomial sumterm q and a monomial of the kind in (ii) and such that no variables occur both in q and in r. The following proposition provides a rationale for the specific form of quasi-polynomial sumterms as just defined. Given a sumterm p which contains at least one variable, a pure monomial q can be found with variables occurring with power 1 only such that 0 · p = 0 · q. Use the following rewrite rules, working modulo commutativity and associativity of addition and multiplication, each of which are sound w.r.t. E_𝗐𝖼𝗋,: x+ 0 → x, 0 · (x · y) → (0 · x) + (0 · y), 0 · (x+ y) → (0 · x) + (0 · y), 0 ·n→ 0, (0 · x) + (0 · x) → 0 · x until no further rewrites with these rules can be performed and finally use the rule 0 · x + 0 · y → 0 · (x · y) to arrive at the required form. The sum of two polynomial sumterms need not be provably equal by E_𝗐𝖼𝗋, to a polynomial sumterm. Indeed, x + (-x) = 0 · x is merely a quasi-polynomial sumterm. However, conversely: A product r = p · q of two non-zero polynomial sumterms p and q is provably equal to a polynomial sumterm by E_𝗐𝖼𝗋,. We write t=_𝗐𝖼𝗋, r for E_𝗐𝖼𝗋,⊢ t=r. First, notice that p · q is provably equal to a quasi-polynomial sumterm. For example, consider p ≡ x+1, q ≡ x-1, then r = p · q =_𝗐𝖼𝗋, (x^2 + 0 · x) + (-1)=_𝗐𝖼𝗋, (x · (x + 0)) + (-1) =_𝗐𝖼𝗋, x^2 + (-1). More generally, if a variable x occurs in either p or q then as a function p · q depends on x, from which it follows that in the polynomial α with α = p' · q', x must occur at least once in a monomial of α of which the coefficient is non-zero. This implies that an additional summand 0 · x is unnecessary in the quasi-polynomial sumterm α^⋆, which for that reason is provably equal with E_𝗐𝖼𝗋, to a polynomial sumterm. Let p and q be integer polynomial sumterms, both with non-zero degree, with variables among X_1,…,X_n and such that p' as well as q' are primitive polynomials. Suppose that in Enl_() both p and q have value 0 on the same argument vectors in Enl_()^n. 
Then, there are (i) a positive natural number m, (ii) integer polynomial sumterms r_1,…,r_m with non-zero degree, such that r_1',…,r_n' are primitive polynomials, and (iii) non-zero natural numbers a_1,…,a_n, b_1,…,b_m such that E_𝗐𝖼𝗋,⊢ p = r_1^a_1·…· r_n^a_m and E_𝗐𝖼𝗋,⊢ q = r_1^b_1·…· r_n^b_m. Let p and q be as assumed in the statement of the Proposition. Now p and q evaluate to 0 for the same argument vectors in Enl_()^n. It follows that p and q must contain precisely the same variables. To see this, assume otherwise that say variable x occurs in p and not in q (the other case will work similarly) and then choose a valuation for the other variables in which solves q=0, by additionally having value for x a valuation is obtained where q=0 and p =, thereby contradicting the assumptions on p and q. Both p' and q' then have non-zero degree and are non-zero polynomials with, using Proposition <ref>, the same zeroes in ()^n. Now Proposition <ref> can be applied with α≡ p', β≡ q' thus finding polynomial sumterms γ_1,…,γ_m, and numbers a_1,…,a_m, b_1,…,b_m such that in : p' = α = γ_1^a_1·…·γ_m^a_m and q' = β = γ_1^b_1·…·γ_m^b_m. Now choose: r_1≡γ_1^⋆,…,r_m≡γ_m^⋆. It follows that Enl_() p = r_1^a_1·…· r_m^a_m andEnl_() q = r_1^b_1·…· r_m^b_m. Moreover, with Proposition <ref>, we know that r_1^a_1·…· r_m^a_m is provably equal to a polynomial sumterm, say P (by E_𝗐𝖼𝗋,) and that r_1^b_1·…· r_m^b_m is provably equal to a polynomial sumterm, say Q. So we find Enl_() p = P and Enl_() q = Q, and in consequence Enl_() p = P and Enl_() q = Q. Lastly, using Proposition <ref>, E_𝗐𝖼𝗋,⊢ p = P and E_𝗐𝖼𝗋,⊢ q = Q from which one finds that E_𝗐𝖼𝗋,⊢ p = r_1^a_1·…· r_m^a_m and E_𝗐𝖼𝗋,⊢ q = r_1^b_1·…· r_m^b_m thereby completing the proof. The quasi-polynomial sumterm introduces extra variables via a linear monomial. In <cit.> introduced extra variables using a linear sum 0 · (x_1 + … + x_1), which takes the same values. From <cit.> we take the following information concerning sumterms: Let t be a Σ_r,-term, then either (i) E_𝗐𝖼𝗋,⊢ t =; or (ii) there is a quasi-polynomial sumterm p such that E_𝗐𝖼𝗋,⊢ t = p. In each case the reduction is computable. § EQUATIONAL AXIOMS FOR COMMON MEADOWS We now add to the equational axioms E_𝗐𝖼𝗋, in Table <ref> to make a set of equational axioms for common meadows: E_𝖿𝗍𝖼-𝖼𝗆 in Table <ref>. These equations have been presented in a different but equivalent form in <cit.>. By inspection, one can validate soundness: (Soundness of E_𝖿𝗍𝖼-𝖼𝗆.) 𝖢𝖬 E_𝖿𝗍𝖼-𝖼𝗆. import    E_𝗐𝖼𝗋, x = x/1 -x/y = -x /y x/y·u/v = x · u/y · v x/y + u/v = (x· v) + (y · u)/y · v x/( u/v) = x ·v · v/u · v x/y + 0 · z = x + 0 · z/y = 1/0 E_𝖿𝗍𝖼-𝖼𝗆: Equational axioms for fracterm calculus for common meadows §.§ On fracterms and flattening The introduction of division or a unary inverse introduces fractional expressions. The theory of fractions is by no means clear-cut if the lack of consensus on their nature is anything to go by <cit.>. However, in abstract data type theory, fractions can be given a clear formalisation as a syntactic object – as a term over a signature containing _/_ or -^-1 with a certain form. Rather than fraction we will speak of a fracterm, following the terminology of <cit.> (item 25 of 4.2). A fracterm is a term over Σ_cm whose leading function symbol is division _/_. A flat fracterm is a fracterm with only one division operator. Thus, fracterms have form p/q, and flat fracterms have the form p/q in which p and q do not involve any occurrence of division. 
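The division axioms in the table above are exactly what is needed to push divisions outwards. The following sketch (ours; constructors and names are ad hoc) flattens a term over Σ_cm, given as a nested tuple, into a single pair (p, q) of division-free terms with the same meaning as p/q in every common meadow; the case of nested division uses x/(u/v) = (x · v · v)/(u · v) together with x = x/1:

def flatten(t):
    # returns (p, q), division-free, such that t = p/q holds in every common meadow
    if isinstance(t, (str, int)):                 # a variable or a numeral
        return t, 1
    op, *args = t
    if op == "bot":                               # the error element is 1/0
        return 1, 0
    if op == "neg":                               # -(p/q) = (-p)/q
        p, q = flatten(args[0])
        return ("neg", p), q
    (p1, q1), (p2, q2) = flatten(args[0]), flatten(args[1])
    if op == "+":                                 # p1/q1 + p2/q2 = (p1 q2 + q1 p2)/(q1 q2)
        return ("+", ("*", p1, q2), ("*", q1, p2)), ("*", q1, q2)
    if op == "*":                                 # (p1/q1)(p2/q2) = (p1 p2)/(q1 q2)
        return ("*", p1, p2), ("*", q1, q2)
    if op == "/":                                 # (p1/q1)/(p2/q2) = (p1 q2 q2)/(q1 p2 q2)
        return ("*", p1, ("*", q2, q2)), ("*", q1, ("*", p2, q2))

# example: ((x/y) + 1)/z flattens to a single fracterm
print(flatten(("/", ("+", ("/", "x", "y"), 1), "z")))

The result is a flat fracterm in the sense just defined; no simplification of p or q is attempted.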
Note that fracterms are generally defined as terms of the signature Σ_m of meadows, but we will use them only over the Σ_cm of common meadows (and its subsignatures). The following simplification process is a fundamental property of working with fracterms. (Fracterm flattening <cit.>.) For each term t over Σ_cm there exist p and q terms over Σ_r, i.e., both not involving or division, such that E_𝖿𝗍𝖼-𝖼𝗆⊢ t = p/q, i.e., t is provably equal to a flat fracterm. Furthermore, the transformation is computable. Immediate by structural induction on the structure of t, noting that any occurrence of can be replaced by 1/0. The set E_𝖿𝗍𝖼-𝖼𝗆 of equational axioms for common meadows has been designed so that the proof of fracterm flattening is straightforward; it also allows other results of use for this paper to be obtained easily. More compact but logically equivalent axiomatisations can be found. In <cit.>, using inverse rather than division, a set of logically independent axioms for common meadows is given, from which fracterm flattening is shown, the proof of which then is correspondingly harder. From now on we will omit brackets thanks to associativity commutativity of addition and multiplication. §.§ Completeness We prove that the equations E_𝖿𝗍𝖼-𝖼𝗆 are complete for the fracterm calculus 𝖢𝖬 of common meadows, i.e., for the equational theory of the class of common meadows: For any equation t=r over Σ_cm the following holds: E_𝖿𝗍𝖼-𝖼𝗆⊢ t=r if, and only if, t=r is valid in all common meadows. The soundness of E_𝖿𝗍𝖼-𝖼𝗆 was noted in Proposition <ref>. For completeness, suppose that t=r is valid in all common meadows, i.e., 𝖢𝖬 t=r. In what follows, for brevity, we will write ⊢ e for E_𝖿𝗍𝖼-𝖼𝗆⊢ e. By the fracterm flattening Theorem <ref>, we can find Σ_r terms p,q,u,v such that ⊢ t = p/q and ⊢ r = u/v. By Proposition <ref>, each of these four terms can be written in the form of a quasi-polynomial sumterm: ⊢ p = s_p + 0 · h_p, ⊢ q = s_q + 0 · h_q, ⊢ u = s_u + 0 · h_u, ⊢ p = s_v + 0 · h_v with s_p, s_q, s_u, s_v polynomial sumterms and h_p, h_q, h_u and h_v linear monomials. Substituting these quasi-polynomial sumterms for p,q,u,v and applying axiom 17 of E_𝖿𝗍𝖼-𝖼𝗆, we get ⊢p/q = (s_p + 0 · h_p) + 0 · h_q/s_q and ⊢u/v = (s_u + 0 · h_u) + 0 · h_v/s_v. So, to prove ⊢ t = r we need to prove ⊢(s_p + 0 · h_p) + 0 · h_q/s_q =(s_u + 0 · h_u) + 0 · h_v/s_v assuming its validity in all common meadows. Now, notice that in all common meadows s_q and s_v must produce 0 on precisely the same non- valuations of the variables occurring in either of both expressions. Six cases will be distinguished, of which the first five are straightforward to deal with: (i) s_q ≡ 0 and s_v ≡ 0. Here, trivially ⊢(s_p + 0 · h_p) + 0 · h_q/s_q = =(s_u + 0 · h_u) + 0 · h_v/s_v. (ii) s_q ≡ 0 and s_v ≢0. This is not possible because s_q and s_v must produce 0 on the same valuations of variables and if, for a polynomial sumterm h, h ≢0 then it must be that for some common meadow Enl_(G(_/_)) and valuation σ, we have Enl_(G(_/_)), σh=0. The symmetric case s_q ≢0 and s_v ≡ 0 is not possible for corresponding reasons. (iv) s_q and s_v are both non-zero numerals, say s_q = a and s_v = b. Now, factorisations of a and b both contain the same prime numbers. To see this, otherwise assume that say prime c is a divisor of a while c is not a divisor of b. Then, working in the prime field F_c of characteristic c, s_q takes value 0 while s_v does not. The symmetric case that b has a prime factor c which is not a divisor of a works in the same way. 
(v) one of s_q and s_v is a non-zero numeral, while the other one contains one or more variables, i.e., has degree 1 or higher. This situation is impossible because in that case the polynomial sumterm of nonzero degree takes both value zero and nonzero (on appropriate arguments) in and for that reason also on appropriate non- valuations for ()_. (vi) Lastly, we are left with the main case that both s_q and s_v are polynomials with non-zero degree. It suffices to prove ⊢s_p + 0 · (h_p + h_q)/s_q =s_u + 0 · (h_u + h_v)/s_v from its validity in all common meadows. Now, as a first step, chose non-zero integers a and b as follows: a is the of the coefficients of s_q and b is the of the coefficients of s_v. Further, choose polynomial sumterms ŝ_q and ŝ_v such that ⊢ s_q = a·ŝ_q and ⊢ s_v = b·ŝ_v. Next, we show that a and b must have the same prime factors. If not, say c is a prime factor of a but not of b. In the algebraic closure F_c of the prime field F_c of characteristic c a solution – i.e., a valuation σ – exists for the equation s_v -1 = 0; this equation must be of non-zero degree as s_v is of non-zero degree. We find that F_c,σc = 0 so that F_c,σa = 0, which implies F_c,σ s_q = 0. Furthermore, F_c,σb≠ 0 and F_c,σŝ_v = 1 so that F_c,σ s_v = b·ŝ_v ≠ 0, which contradicts the assumptions made above. Without loss of generality, we may assume that a and b are both positive, and we take an increasing sequence of prime factors c_1,…,c_k with respective positive powers e_1,…,e_k and f_1,…,f_k such that a = c_1^e_1·…· c_k^e_k and b = c_1^f_1·…· c_k^f_k. The next step is to notice that ŝ_q and ŝ_v must have the same zero's in and to apply Proposition <ref> on the polynomial sumterms ŝ_q and ŝ_v, thereby obtaining a sequence of irreducible and primitive polynomials r_1,…,r_m with positive powers a_1,…,a_m and b_1,…,b_m such that ⊢ŝ_q = r_1^a_1·…· r_m^a_m and⊢ŝ_v = r_1^b_1·…· r_m^b_m. By substitution, now we know that s_q + 0 · (h_p + h_q)/c_1^e_1·…· c_k^e_k· r_1^a_1·…· r_m^a_m = s_u + 0 · (h_u+ h_v)/c_1^f_1·…· c_k^f_k· r_1^b_1·…· r_m^b_m. It suffices to prove the same equation from E_𝖿𝗍𝖼-𝖼𝗆 and to that end we proceed in the following manner. First, notice by the usual rules of calculation, available from E_𝖿𝗍𝖼-𝖼𝗆, 1/x = 1+0/x = 1/x + 0/x = x + 0 · x/x · x = (1 +0) · x/x · x = x/x · x. Then, let K_max be the maximum of e_1,…,e_k,f_1,…,f_k,a_1, …,a_m, b_1,…,b_m, and let K = K_max+1. Now, we make repeated use the validity of x + 0 · w/y · z^g = (x · z^h) + 0 · w/ y · z^g+h (⋆) for positive integers g and h (in this case for g + h = K) in order to transform the above equation into another, but equivalent, equation between flat fracterms with the same denominator. The identity (⋆) is a consequence of the validity of the equations 1/x = x/x · x and (x + (0 · y)) · z = (x · z) + (0 · y)). Let t̂≡s_q + 0 · (h_p + h_q)/c_1^e_1·…· c_k^e_k· r_1^a_1·…· r_m^a_m and r̂≡s_u + 0 · (h_u+ h_v)/c_1^f_1·…· c_k^f_k· r_1^b_1·…· r_m^b_m. Moreover, let t̂̂̂≡(s_q · c_1^K-e_1·…· c_k^K-e_k· r_1^K-a_1·…· r_m^K-a_m) + 0 · (h_p + h_q)/c_1^K·…· c_k^K· r_1^K·…· r_m^K and r̂̂̂≡(s_u · c_1^K-f_1·…· c_k^K-f_k· r_1^K-b_1·…· r_m^K-b_m) + 0 · (h_u+ h_v)/c_1^K·…· c_k^K· r_1^K·…· r_m^K. Here it is assumed that the variables in h_q do not occur elsewhere in t̂̂̂ and that the variables of h_u do not occur elsewhere in r̂̂̂. With repeated use of the identity (⋆) we find that ⊢t̂ = t̂̂̂ and ⊢r̂ = r̂̂̂. Summarizing the above, we have established that ⊢ t = t̂ = t̂̂̂, ⊢ r = r̂ = r̂̂̂ and Enl_() t̂̂̂ = r̂̂̂. 
Consider the numerators and let H_t = s_q · c_1^K-e_1·…· c_k^K-e_k· r_1^K-a_1·…· r_m^K-a_m and H_r = s_u · c_1^K-f_1·…· c_k^K-f_k· r_1^K-b_1·…· r_m^K-b_m. Then, from Enl_() t̂̂̂ = r̂̂̂, it follows that working in Enl_() for all non- rational substitutions σ, if Enl_(),σ c_1^K·…· c_k^K · r_1^K·…· r_m^K-b_m≠ 0 it must be the case that Enl_(),σ H_t= H_r. So, for all non- valuations σ, Enl_(), σ (c_1^K·…· c_k^K· r_1^K·…· r_m^K) · (H_t-H_r )= 0. Rings of polynomials over have no zero divisors and the polynomial sumterm c_1^K·…· c_k^K· r_1^K·…· r_m^K is non-zero. Thus, it follows that, H_t-H_r = 0 as polynomials so that ⊢ H_t = H_r. Finally, we complete the proof by noticing that ⊢ H_t + 0 · (h_p + h_q) = H_r +0 · (h_u+ h_v) because otherwise both terms contain different variables which cannot be the case. To see this latter point, notice that if, say x occurs in H_t + 0 · (h_p + h_q) and not in H_r +0 · (h_u+ h_v), then, because H_t = H_r, a contradiction with Enl_() t̂̂̂ = r̂̂̂ is arises: contemplate any valuation σ that satisfies Enl_() c_1^K·…· c_k^K · r_1^K·…· r_m^K-b_m-1 = 0, a requirement which is independent of x. Indeed, now the RHS depends on x while the LHS does not, which is a contradiction, thereby completing the proof. The fracterm calculus of common meadows is decidable. Given an equation e, if it is true in all common meadows then it is provable from E_𝖿𝗍𝖼-𝖼𝗆. The equations provable from this finite set E_𝖿𝗍𝖼-𝖼𝗆 are computably enumerable. Thus, the true equations of the fracterm calculus of common meadows are computably enumerable. If e is not true in all common meadows then e fails in an algebraic closure of some prime field or F_p for some prime p. These fields are computable and can be computably enumerated uniformly <cit.>, and a computable search for a counterexample to e attempted. Thus, the false equations of the fracterm calculus of common meadows are computably enumerable. In consequence, the fracterm calculus of common meadows is decidable. Of course, this enumeration argument for decidability is crude. However, we note that the completeness proof for Theorem <ref> is effective because the transformations which are used are all computable – including the earlier necessary lemmas such as flattening (Theorem <ref>) and reductions to quasi-polynomials (Proposition <ref>). From these transformations, which map the provability of equations to the identity of terms, an alternate proof of decidability can be constructed that offers an algorithm for the provability and validity of equations and invites a further independent analysis. § CONCLUDING REMARKS §.§ Matters arising The completeness result distinguishes the axioms in E_𝖿𝗍𝖼-𝖼𝗆 as an abstract characterisation of fields with a simple, workable error flag, i.e., the common meadows. Being close to the axioms for commutative rings, the axioms E_𝖿𝗍𝖼-𝖼𝗆 are not unfamiliar and hopefully memorable; they establish a firm platform for the algebraic and logical study of an attractive practical semantics for reasoning about arithmetical data types. The equational axiomatisation E_𝖿𝗍𝖼-𝖼𝗆 has been optimised for ease of use in the paper (e.g., especially flattening), and we have not paid attention to the logical independence of the various axioms. Some of the axioms of E_𝖿𝗍𝖼-𝖼𝗆 are redundant, given the other ones. Given their arithmetic purpose, the relationships between axiomatisations of common meadows and axiomatisations of rings and fields are of mathematical interest and practical value. 
Finding attractive sets of axioms which are also minimal is a topic worthy of investigation in its own right. In the revision of  <cit.> the same equational theory, though equipped with inverse rather than with division, is given an axiomatisation with logically independent axioms. Three open questions stand out from the results in this paper: (i) Is the fracterm calculus of the common meadow Enl_((_/_)) of rationals decidable? (ii) Can a finite basis for the fracterm calculus of common meadows with orderings be found? (iii) Can the fracterm calculus of common meadows be axiomatised by means of a specification which constitutes a complete term rewriting system? In the matter of (ii), this was done in the setting of 1/0 = 0 using a sign function in <cit.>. In the matter of (iii), a negative result in a simplified case was obtained in <cit.>. Notwithstanding these open questions, we consider common meadows to provide an attractive basis for the formal specification of arithmetics for computation. §.§ Background to the problem of division by zero Completely central to quantification and computation are the rational numbers . When we measure the world using a system of units and subunits then we use the rational numbers. Today's computers calculate only within subsets of the rational numbers. An early objective for our theory is to design and analyse abstract data types for the rational numbers. Designing a data type for rationals requires algebraic minimality, which can be obtained by introducing either division or inverse as an operation. Thus, division using rational numbers is essential and must be total, which requires choosing a value for 1/0. Using various semantical flags to be found in practical computations to totalise division – such as 𝖾𝗋𝗋𝗈𝗋, ∞, NaN, the last standing for `not a number' – we have constructed equational specifications (under initial algebra semantics) for the following data types of rational numbers: Involutive meadows, where an element of the meadow's domain is used for totalisation, in particular 1/0 = 0, <cit.>. Common meadows, the subject of this paper, where a new external element that is `absorbtive' is used for totalisation 1/0 =, <cit.>; Wheels, where a one external ∞ is used for totalisation 1/0 = ∞ = -1/0, together with an additional external error element to help control the side effects of infinity, <cit.>; Transrationals, where besides the error element two external signed infinities are added, one positive and one negative, so that division is totalised by setting 1/0 = ∞ and -1/0 = -∞, <cit.>; Symmetric transrationals, where the error element , two external signed infinities +∞, -∞, and two infinitesimals +ι, -ι are added so that division is totalised by setting 1/0 =, as with common meadows, and the other elements are used to manage overflows and underflows, <cit.>; specifically, totality is separated from over and under flows. In practice, the first four of these models are based on data type conventions to be found in theorem provers, common calculators, exact numerical computation and, of course, floating point computation, respectively. The last, the symmetric transrationals, we developed to extend the scope and improve the algebra of the transrationals. For some historical remarks on division by zero, we mention <cit.>, and for a survey we mention <cit.>. Of these five semantical options it may be helpful to compare the common meadows with one of the above. 
The simplest choice appears to be the involutive meadows, which have been deployed in logical arguments and have their advocates <cit.>. In <cit.>, to create an equational specification for the rational numbers, we introduced totality by setting 0^-1=0. This led us to the study of involutive meadows <cit.>, and subsequently to the broad programme of work cited above. An explicit logical discussion of the proposal to adopt 0^-1 = 0 dates back at least to Suppes <cit.>, and led to the theoretical work of Ono <cit.>. A completeness result was shown by Ono <cit.>. In <cit.>, the fracterm calculus of involutive meadows was introduced. Completeness for the Suppes-Ono fracterm calculus is shown with a different proof in <cit.>. An advantage of the latter approach to completeness is that it generalises to the case of ordered meadows, see also <cit.>. Although the flattening property is quite familiar from the school algebra of rational numbers, it stands in marked contrast with the abstract situation for involutive meadows. In <cit.> it is shown that, with the axioms for involutive meadows, terms are provably equal only to finite sums of flat fracterms; and in <cit.>, it is shown that arbitrarily large numbers of summands may be needed for that purpose. Thus, the involutive meadows run into difficulties that the common meadows do not. Our results here and elsewhere point to the fact that arithmetical abstract data types with error flags are theoretically superior among the many practical conventions we have studied. This design decision is attractive semantically since an error flag can have a number of different interpretations in computations. Furthermore, much of the algebra we have encountered for common meadows is intimately and agreeably connected with the theories of rings and fields; and it serves rather well the theory of data types of rational numbers, which must be a starting point for theorising.
References
AndersonVA2007 James A. Anderson, Norbert Völker, and Andrew A. Adams. 2007. Perspex Machine VIII, axioms of transreal arithmetic. In J. Latecki, D. M. Mount and A. Y. Wu (eds), Proc. SPIE 6499, Vision Geometry XV, 649902, 2007.
AndersonB2021 James A. Anderson and Jan A. Bergstra. 2021. Review of Suppes 1957 proposals for division by zero. Transmathematica, (2021). <https://doi.org/10.36285/tm.53>
Bergstra2019b Jan A. Bergstra. 2019. Division by zero, a survey of options. Transmathematica, (2019). <https://doi.org/10.36285/tm.v0i0.17>
Bergstra2020 Jan A. Bergstra. 2020. Arithmetical data types, fracterms, and the fraction definition problem. Transmathematica, (2020). <https://doi.org/10.36285/tm.33>
BergstraBP2013 Jan A. Bergstra, Inge Bethke, and Alban Ponse. 2013. Cancellation meadows: a generic basis theorem and some applications. The Computer Journal, 56 (1) (2013), 3–14. Also <https://arxiv.org/abs/0803.3969>
BergstraBP2015 Jan A. Bergstra, Inge Bethke, and Alban Ponse. 2015. Equations for formally real meadows. Journal of Applied Logic, 13 (2) (2015), 1–23.
BergstraHT2009 Jan A. Bergstra, Yoram Hirshfeld, and John V. Tucker. 2009. Meadows and the equational specification of division. Theoretical Computer Science, 410 (12) (2009), 1261–1271.
BergstraM2015 Jan A. Bergstra and Cornelis A. Middelburg. 2015. Division by zero in non-involutive meadows. Journal of Applied Logic, 13 (1) (2015), 1–12. <https://doi.org/10.1016/j.jal.2014.10.001>
BergstraM2016a Jan A. Bergstra and Cornelis A. Middelburg. 2015. Transformation of fractions into simple fractions in divisive meadows. Journal of Applied Logic, 16 (2015), 92–110. Also <https://arxiv.org/abs/1510.06233>
BergstraP2015 Jan A. Bergstra and Alban Ponse. 2015. Division by zero in common meadows. In R. de Nicola and R. Hennicker (eds), Software, Services, and Systems: Wirsing Festschrift, Lecture Notes in Computer Science 8950, Springer, 2015, 46–61. For an improved version (2021), see: .
BergstraP2016 Jan A. Bergstra and Alban Ponse. 2016. Fracpairs and fractions over a reduced commutative ring. Indagationes Mathematicae, 27 (2016), 727–748. Also <https://arxiv.org/abs/1411.4410>
BergstraT1995 Jan A. Bergstra and John V. Tucker. 1995. Equational specifications, complete term rewriting systems, and computable and semicomputable algebras. Journal of the ACM, 42 (6) (1995), 1194–1230.
BergstraT2007 Jan A. Bergstra and John V. Tucker. 2007. The rational numbers as an abstract data type. Journal of the ACM, 54 (2) (2007), Article 7.
BergstraT2020 Jan A. Bergstra and John V. Tucker. 2020. The transrational numbers as an abstract data type. Transmathematica, (2020). <https://doi.org/10.36285/tm.47>
BergstraT2021a Jan A. Bergstra and John V. Tucker. 2021. The wheel of rational numbers as an abstract data type. In M. Roggenbach (ed), Recent Trends in Algebraic Development Techniques, WADT 2020, Lecture Notes in Computer Science 12669, Springer, 2021, 13–30.
BergstraT2022b Jan A. Bergstra and John V. Tucker. 2022. On the axioms of common meadows: Fracterm calculus, flattening and incompleteness. The Computer Journal, online first, 8pp. <https://doi.org/10.1093/comjnl/bxac026>
BergstraT2021c Jan A. Bergstra and John V. Tucker. 2022. Partial arithmetical data types of rational numbers and their equational specification. Journal of Logical and Algebraic Methods in Programming, 128, August 2022, 100797. <https://doi.org/10.1016/j.jlamp.2022.100797>
BergstraT2021b Jan A. Bergstra and John V. Tucker. 2022. Totalising partial algebras: Teams and splinters. Transmathematica, (2022). <https://doi.org/10.36285/tm.57>
BergstraT2022c Jan A. Bergstra and John V. Tucker. 2022. Symmetric transrationals: The data type and the algorithmic degree of its equational theory. In N. Jansen et al. (eds), A Journey From Process Algebra via Timed Automata to Model Learning - A Festschrift Dedicated to Frits Vaandrager on the Occasion of His 60th Birthday, Lecture Notes in Computer Science 13560, Springer, 2022, 63–80.
dosReisGA2016 Tiago S. dos Reis, Walter Gomide, and James A. Anderson. 2016. Construction of the transreal numbers and algebraic transfields. IAENG International Journal of Applied Mathematics, 46 (1) (2016), 11–23. <http://www.iaeng.org/IJAM/issues_v46/issue_1/IJAM_46_1_03.pdf>
EhrichWL1997 Hans-Dieter Ehrich, Markus Wolf, and Jacques Loeckx. 1997. Specification of Abstract Data Types. Vieweg Teubner, 1997.
EhrigMahr1985 H. Ehrig and B. Mahr. 1985. Fundamentals of Algebraic Specification 1: Equations and Initial Semantics. EATCS Monographs on Theoretical Computer Science, Vol. 6, Springer, 1985.
ShepherdsonF1956 Albrecht Fröhlich and John C. Shepherdson. 1956. Effective procedures in field theory. Philosophical Transactions of the Royal Society of London, Series A, Mathematical and Physical Sciences, 248 (1956), 407–432. <http://doi.org/10.1098/rsta.1956.0003>
NeumannGoldstine1947 John von Neumann and Hermann Goldstine. 1947. Numerical inverting of matrices of high order. Bulletin of the American Mathematical Society, 53 (11) (1947), 1021–1099.
Lang2002 Serge Lang. 2002. Algebra. Graduate Texts in Mathematics, Vol. 211, third revised edition. Springer, 2002.
Mal'tsev1973 A.I. Mal'tsev. 1973. Algebraic Systems. Springer, 1973.
MeinkeTucker92 K. Meinke and J. V. Tucker. 1992. Universal algebra. In S. Abramsky, D. Gabbay, and T. Maibaum (eds), Handbook of Logic in Computer Science, Oxford University Press, 1992, 189–411.
Ono1983 Hiroakira Ono. 1983. Equational theories and universal theories of fields. Journal of the Mathematical Society of Japan, 35 (2) (1983), 289–306.
OkumuraSM2017 Hiroshi Okumura, Saburou Saitoh, and Tsutomu Matsuura. 2017. Relations of zero and ∞. Journal of Technology and Social Science, 1 (1) (2017).
Okumura2018 Hiroshi Okumura. 2018. Is it really impossible to divide by zero? Biostatistics and Biometrics Open Access Journal, 7 (1) (2018), 555703. DOI: 10.19080/BBOJ.2018.07.555703
Setzer1997 Anton Setzer. 1997. Wheels (draft). Unpublished, 1997.
StoltenbergTucker1999 Viggo Stoltenberg-Hansen and John V. Tucker. 1999. Computable rings and fields. In Edward Griffor (ed), Handbook of Computability Theory, Elsevier, 1999, 363–447.
Suppes1957 Patrick Suppes. 1957. Introduction to Logic. Van Nostrand Reinhold, 1957.
Tucker2022 John V. Tucker. 2022. Unfinished business: Abstract data types and computer arithmetic. BCS FACS FACTS, The Newsletter of the Formal Aspects of Computing Science BCS Specialist Group, Issue 2022-1, February 2022, 60–68. <https://www.bcs.org/media/8289/facs-jan22.pdf>
Wechler1992 Wolfgang Wechler. 1992. Universal Algebra for Computer Scientists. Springer-Verlag, 1992.
http://arxiv.org/abs/2307.04016v1
20230708171122
Cellular LTE and Solar Energy Harvesting for Long-Term, Reliable Urban Sensor Networks: Challenges and Opportunities
[ "Alex Cabral", "Vaishnavi Ranganathan", "Jim Waldo" ]
cs.NI
[ "cs.NI" ]
§ INTRODUCTION
As the global urban population continues to grow, cities are increasingly interested in monitoring urban processes such as vehicular traffic, and public health and environmental harms including air pollution and noise, to help cities grow in a healthy and sustainable fashion <cit.>. The lowering cost of sensing infrastructure and recent digital twin capabilities have encouraged city officials, researchers, and urban residents to use large-scale, low-cost sensor networks to monitor hyperlocal phenomena, inform policy and planning decisions, and collect data to support the transition to smart cities <cit.>. We identify that, to be successful, a smart city network must be:
* reliable: the network should continue to operate and transmit data over long periods of time and across the city to ensure equitable node distribution <cit.>
* scalable: it should be easy to add/replace nodes within the network at any new location in the city <cit.>
* easy to maintain: nodes should be outfitted with hardware and firmware that minimize the need for in-person maintenance <cit.>
* real-time: data must be transmitted as quickly as possible, particularly for applications such as emergency services <cit.>, and the network must be monitored in real-time for maintenance <cit.>
* low-cost: by using existing infrastructure and services, the network can avoid added costs in installation and maintenance <cit.>
We determine that two key features of an urban sensor network's design can help to make the network fit within the aforementioned criteria. The first is connectivity, which is essential for data transmission, real-time node monitoring, and software updates. The second is power, which provides for reliable operation and data collection. The decisions that cities and network designers make in these two areas have a direct and significant impact on the criteria for a successful smart city network. For example, an urban sensor network that uses a low-power wide-area network (LPWAN) for connectivity may not satisfy the criterion of low cost because the backhaul infrastructure required, although low in per-unit cost, quickly becomes expensive when considering the number of cells required for a large, dense sensor network <cit.>. Similarly, a smart city network that relies on wired power may not be scalable, as nodes will be limited to locations that already have wired mains <cit.> and will involve additional installation and maintenance cost. Based on a review of prior urban sensor network deployments and our experience working on a large-scale sensor network, we establish that LTE networks and solar panels are the appropriate connectivity and power choices for most urban sensor networks given the available options and necessary criteria. Although LTE performance for mobile communication in urban areas is well-researched <cit.>, the performance of IoT-specific networks when implemented in a city-scale, long-term sensor network deployment is yet to be characterized. Solar power in urban sensor networks has also been evaluated on a small scale <cit.>, but not in a large-scale, long-term deployment. Moreover, there are no established guidelines that can ensure reliable performance for future deployments of such large-scale LTE-connected, solar-powered sensor networks.
Finally, researchers have not looked into the overlap between technical issues that arise in LTE connectivity or solar power and the socioeconomic factors that make up many “sensor deserts" <cit.>, or areas that lack nodes in cities with sensor networks. In this work we describe the design and analyze the connectivity and power performance of a stationary 118-node LTE-M connected, solar-powered sensor network deployed for one year in Chicago, Illinois. We find that 11 of the 118 original node locations could not support LTE connectivity, despite all FCC and network provider connectivity maps indicating otherwise. A small number of cell towers and node locations are disproportionately affected by significantly delayed readings, and 44 of the 118 nodes experienced issues charging in the winter months. Furthermore, we discover that connectivity and power-related issues are not equitably spread around the city, but rather are more prominent in areas that are classified as socioeconomically disadvantaged and have a larger racial minority population. Our primary contribution is an in-depth analysis of a long-term real-world deployment assessing the feasibility and reliability of a large-scale LTE-connected and solar-powered urban sensor network. Additional contributions include: 1) highlighting the overlap between technical challenges in urban sensor networks and socioeconomic inequality, 2) revealing the inherent challenges in relying upon open data sources that are commonly used to predict connectivity and power availability for urban sensor network deployments, and 3) identifying strengths and weaknesses to define future research directions in energy harvesting systems and equitable network infrastructure deployments to ensure the just future of smart city networks. This paper is structured as follows: Section 2 offers an overview of Related Works; Section 3 highlights why the city of Chicago is a useful case study for urban sensor networks; Section 4 highlights the design of the sensor network and datasets used; Section 5 discusses the connectivity of the sensor network, including the hardware, network carrier information, and insights from the year-long deployment; Section 6 details the powering of the sensor network, including the hardware, energy management techniques, and insights from the deployment; Section 7 provides a discussion, focusing on the implications of the challenges we discovered and the limitations of our study. § RELATED WORKS In this section, we first review former and existing sensor network deployments to identify necessary criteria, prior evaluations, and known issues around inequality. We then examine LTE connectivity and solar power in urban areas, as these are the technologies we use for our sensor network. §.§ Criteria for Urban Sensor Networks By examining prior urban sensor network deployments, we have identified five criteria necessary for success—reliability, scalability, ease of maintenance, real-time communication, and low cost. The shortcomings of prior sensor networks have often been caused by a lack of reliability, either in terms of not functioning over time, as with malfunctioning hardware <cit.>, or not communicating data reliably over space and time <cit.>. Many prior networks have also raised the issue of scalability, which is especially prevalent when relying on electrical cables and wired power, which may be available at street lamps or traffic signals, but ultimately limits the node placement locations <cit.>.
Similar initiatives have shown that reliance on these specific locations can additionally make installation and maintenance more difficult, which then increases the cost of operation <cit.>. The issue of maintenance is particularly important in urban settings, where the cost of accessing a node can be very high <cit.>. Conversely, we find that some deployments are more successful because they achieve low-cost via the use of existing infrastructure. For example, officials in New York City chose to use an existing public safety wireless network for a new traffic control system <cit.> and Chicago's Array of Things relied on cellular networks <cit.>, decisions that helped ease installation and thus save costs. §.§ Evaluations of Urban Sensor Network Deployments The evaluations of real-world sensor network deployments in urban settings have often been small-scale and short-term. A small number of researchers have shared the lessons and challenges learned from urban sensor network deployments, but many of these are focused on specific data such as noise <cit.> and water quality <cit.>. Furthermore, many of these studies rely on the power grid for high computation tasks <cit.>, or use technologies such as Wi-Fi or Zigbee for data transfer <cit.>. The works that evaluate LTE-connected or solar-powered urban sensor networks are small scale and short duration studies that do not offer extended insights on reliability <cit.>. §.§ Inequality of Sensor Networks As smart city networks are increasingly explored and deployed, sociology and urban planning researchers have begun to evaluate the potential social implications of urban sensor networks. For example, one group of researchers evaluated prior urban sensor network deployments and identified areas deemed “sensor deserts", which are those that lack nearby sensors based on a straight line distance <cit.>. As the researchers state, sensor deserts not only add to existing forms of inequality, but the placement of sensor nodes can also affect resident perception of the distribution of resources and harms throughout a city <cit.>, creating potential political or social strife if nodes are not visible in certain areas. Similarly, others have noted the potential for smart city technologies to “further deepen the splintering of urban networks, creating deep divides between those with access to 'smart' and those without" and raising questions about the “politics of urban exclusion" <cit.>. Thus, there is an increasing push for equity as a consideration in practical sensor network deployment <cit.>. §.§ LTE Connectivity in Urban Areas Extensive research around mobile connectivity has revealed a variety of factors known to affect RSS and limit propagation distance for LTE signals. These include physical features such as high-rise buildings <cit.>, the distance between the cell tower and receiver <cit.>; meteorological conditions such as precipitation <cit.>, humidity <cit.>, strong winds <cit.>, temperature <cit.> and sudden weather changes <cit.>; and environmental measures such as high particulate matter concentrations <cit.>. Another major factor that affects signal strength is inter-cell interference (ICI) <cit.>, which occurs when a node moves to the edge of one cell tower's range while moving closer to another cell tower. We include all these factors in our analysis of connectivity issues in section 5. 
§.§ Solar Charging in Urban Areas Given the vast quantity of previously deployed solar-powered sensor networks and the numerous papers published about these networks, one might assume that solar power is a reliable choice for most sensor network deployments. However, there have been very few studies looking into the long-term reliability of solar power in urban settings. Dehwah et al. <cit.> evaluate the performance of a traffic monitoring sensor network in a desert city, and describe the effect of dust storms and building shadows on solar charging. However, they do not analyze in depth the locations that were most affected by shadows, how the issue might be prevented in future deployments, or the potential social implications. To our knowledge, this work presents the first in-depth analysis of a large-scale, long-term cellular, solar-powered urban sensor network towards understanding the broader impact of the technical challenges for urban communities. § CHICAGO AS A CASE STUDY §.§ Building Height According to the Council on Tall Buildings and Urban Habitat <cit.>, amongst cities around the world, Chicago has the 10th most buildings 150 meters and higher, 11th most buildings 200 meters and higher, and 5th most buildings 300 meters and higher. However, its place on those lists is expected to fall within the coming years—Chicago has only three buildings 150 meters and higher under construction and twelve proposed for construction. By comparison, Wuhan, Shenyang, and Bangkok—cities just below Chicago on the list of most 150+ meter buildings—have 49, 14, and 17 buildings under construction, respectively, and dozens more proposed in both Wuhan and Shenyang. In addition, development in cities such as Mumbai, Nanning, and Nanjing, which all have several 150+ meter buildings under and proposed for construction, will propel them past Chicago on the list in the coming decades. This currently puts Chicago in a unique position for evaluating the impact of the built environment when planning global urban sensor networks. §.§ Latitude and Sunlight Hours Chicago has a latitude of 41.88 degrees, where the sun is visible for 15 hours, 15 minutes during the summer solstice and 9 hours, 6 minutes during the winter solstice. According to data from the World Economic Forum <cit.>, the top five most populous latitudes are between the 22nd and 27th parallel north, which are all much closer to the equator and thus have more sunlight on the winter solstice, with an average of 10 hours 35 minutes. Nevertheless, a number of highly populated cities reside at or above the 42nd parallel north, including London, Moscow, Harbin, and Toronto, as well as much of Western Europe. Cities such as New York and Beijing are also located at nearly the same latitude, receiving 9 hours, 13 minutes of sunlight on the winter solstice. Furthermore, as the effects of climate change disproportionately affect populations who live closer to the equator, mass migration away from the equator is expected <cit.>. Thus, understanding the performance of solar-powered sensor networks at northern latitudes is essential for future urban environmental sensing. §.§ Segregation and Inequality Based on 2020 United States Census Data, Chicago is the fourth most racially segregated large city (population at least 200,000) in the United States <cit.>. Fig. <ref>a highlights Chicago's racial segregation, showing where the white and non-white—primarily Black and Latine—populations live relative to each other.
There is limited data comparing racial segregation in global cities, likely because many countries are more racially homogeneous than the United States. However, segregation based on income or social status exists in many global cities, with the highest levels of inequality and segregation often found in cities of lower-income countries <cit.>. According to Gini Index data from the 2019 American Community Survey <cit.>, Chicago has the 10th greatest income inequality amongst US cities, with a Gini index of 0.53 (where a 0 indicates perfect equality and 1 indicates perfect inequality). Compared to cities such as London and Johannesburg, which have the highest global Gini index values—both over 0.7—Chicago has a relatively medium-high level of income inequality <cit.>. As seen in Fig. <ref>b, the areas of Chicago that are considered most socioeconomically disadvantaged based on factors such as unemployment and poverty level also overlap with many of the areas that have a majority Black or Latine population. Thus, we believe that Chicago provides a useful case study by which to examine the potential social and equity implications that sensing technologies can introduce in cities around the globe. § SENSOR NETWORK AND DATA §.§ Sensor Network Design The sensor network, described in further detail in [blinded] and shown in Fig. <ref>, was designed and deployed to collect air pollution data across Chicago. The network comprised 118 unique sensor node locations, with 20 nodes allocated to local environmental justice groups for placement according to their priorities, 12 nodes at four EPA stations (3 nodes at each station) for collocation to perform calibration, and the rest placed based on locations chosen through stratified random sampling, as described in NYCCAS <cit.>, with a small subset chosen by partner organizations. All devices that were not at EPA stations were installed at bus shelters throughout the city, as shown in Fig. <ref>. These nodes were placed at the same height, about 2.5 meters above ground. Nodes at EPA stations were located on the rooftops near the EPA monitors, several meters above ground and at different heights based on the height of the building or structure housing the EPA monitor. Most of the devices were installed at their respective locations in July and August 2021, with 98 nodes (over 83%) placed by July 3rd, 2021. §.§ Datasets The node-related data for each reading, including the time, received signal strength (RSS), battery level, internal node temperature, and air pollutant readings, were all logged with each reading and stored in a cloud server. We calculated the latency by comparing the time of the sensor reading to the time of the data's insertion into the server. Cell tower information, such as the cell tower ID, was collected when making a connection with the tower. We used OpenCellID <cit.> to link the cell tower information with locations, OSM (Open Street Maps) Buildings <cit.> to gather data about buildings surrounding the nodes, FCC Broadband <cit.> and nPerf <cit.> data to examine AT&T connectivity, Meteostat <cit.> to collect external weather data, and the Shadow Accrual Maps tool <cit.> to calculate the amount of shadow hours at each node location. Socioeconomic data were pulled from the City of Chicago Open Data Portal <cit.>.
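To make the dataset linkage concrete, the sketch below shows one way the readings could be joined to OpenCellID tower coordinates and how the latency figure is derived from the two timestamps. It is an illustration only; the file and column names (readings.csv, cell_id, sample_time, server_insert_time) are hypothetical placeholders rather than the actual schema of our pipeline.

```python
import pandas as pd

# Hypothetical inputs: a per-reading log exported from the cloud server and an
# OpenCellID export with one row per cell tower.
readings = pd.read_csv("readings.csv", parse_dates=["sample_time", "server_insert_time"])
towers = pd.read_csv("opencellid_export.csv")  # columns assumed: cell_id, lat, lon, ...

# Latency as described above: time from the sensor reading to its insertion into the server.
readings["latency_s"] = (
    readings["server_insert_time"] - readings["sample_time"]
).dt.total_seconds()

# Attach a location to each reading via the cell tower it connected through.
readings = readings.merge(towers[["cell_id", "lat", "lon"]], on="cell_id", how="left")

# Readings whose tower is missing from OpenCellID keep NaN coordinates; they are
# retained for most analyses but excluded from distance/direction analyses.
located = readings.dropna(subset=["lat", "lon"])
```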
§.§ Data Cleaning We removed readings that had no connectivity data (N = 9,393, 0.2% of readings), readings where the signal was equal to zero (N = 11,626, 0.12%), readings where the tower location was clearly outside of Chicago, possibly due to sensors being shipped back and forth when there were issues (N = 11,778, 0.12%), and readings with a delay of more than 24 hours (N = 54,900, 0.63%), as this was likely indicative of a device issue rather than a connectivity or charging issue. We also identified 565,371 readings (12.7%) where the cell tower could not be located in the OpenCellID database; we kept these readings in for all analyses except ones involving distance and general direction of the cell tower. § CONNECTIVITY §.§ Motivation for an LTE-Connected Urban Sensor Network Despite recent advances in WiFi and low-power wide-area networks (LPWAN), such as LoRaWAN <cit.>, most urban sensor networks will rely on cellular networks in the coming years for the following reasons: 1) Dependence on existing urban cellular networks ensures city-wide coverage without additional infrastructure. 2) Widespread global availability and flexible data plans with each generation. 3) Lower cost and ease of setup and scaling—for technologies such as LoRaWAN, scalability is a particularly pressing issue due to the cross-technology interference that will arise from other technologies <cit.> and potential packet collisions with large sensor networks <cit.>. In addition, LPWANs require dedicated infrastructure that is low in per-unit cost but quickly adds up in cost given the number of cells required to support high node density <cit.>. Thus, to support the necessary criteria of reliability, real-time operation, and low cost, we use an LTE network for communication. LTE networks offer broad coverage in most cities around the globe <cit.>, providing means for scaling reliably. Because the cellular infrastructure is already built and evolving, networks are easy to set up and remain low-cost, especially with the variety of LTE plans available. Finally, with the fast-evolving generations of cellular communication, such networks are increasingly seen as dedicated low-latency connectivity for massive IoT deployments in growing cities <cit.>. §.§ Materials: Antenna and LTE Carrier The sensing nodes connected via AT&T's 4G IoT LTE-M One network, which uses LTE Bands 2, 4, and 12, and operates at frequencies of 700, 1700, and 1900 MHz. Each node used a SIM card and Ignion NN03-310 antenna <cit.>, which transmits data over 3G and 4G, is tuned for channels 2, 3, 4, 5, 9, 12, 20, and 28, and operates on frequencies from 698-960 MHz and 1710-2690 MHz. The antenna was placed at the top right of the printed circuit board (PCB) [After conversations with the antenna manufacturer and a small series of tests, it was determined that antenna placement on a PCB can have a significant effect on the RSS values. It is imperative for sensing node designers to consult with antenna manufacturers to ensure correct antenna placement on custom PCBs for the best connectivity.], as shown in Fig. <ref>. §.§ Methods: Node Connectivity and Data Transmission The sensing node preserved battery life by periodically waking up to record a sample and transmit data to the cloud, as further described in Section <ref>. For this deployment, the nodes were set to transmit data every five minutes from the last recorded sample time.
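At a high level, the node behaviour just described is a wake/sample/transmit/sleep cycle. The following Python-style pseudocode is a sketch of that cycle for illustration only; the deployed firmware is embedded code, and the sensors and radio interfaces as well as the deep_sleep stand-in are hypothetical.

```python
import time

SAMPLE_PERIOD_S = 5 * 60   # transmit every five minutes, as described above
deep_sleep = time.sleep    # stand-in for the microcontroller's low-power sleep

def duty_cycle_loop(sensors, radio):
    """Sketch of the periodic wake/sample/transmit cycle (not the deployed firmware)."""
    while True:
        reading = {name: s.sample() for name, s in sensors.items()}  # sample each sensor
        reading["sample_time"] = time.time()
        radio.connect()              # bring up the LTE-M modem
        radio.transmit(reading)      # push the reading to the cloud endpoint
        radio.disconnect()           # power the modem back down
        deep_sleep(SAMPLE_PERIOD_S)  # sleep until the next sample is due
```

The detailed transmission steps, and the behaviour when the connection fails, are described next.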
The data transmission process included the following series of steps: 1) The microprocessor woke up and kicked off two processes on separate threads, 2a) One thread sampled the sensor with the longest latency, typically about 8 seconds, 2b) A separate thread simultaneously initiated connection to the cloud, 3) An additional array of low-latency sensors was sampled, 4) The data were then packaged and transmitted to the IoT endpoint, passing through the cell tower, AT&T network routers, etc. §.§ Methods: Retry Logic If a node could not connect to the cloud, it stored the reading locally, went back to sleep for five minutes, and tried to connect again. After 10 retries, if the node still could not connect, then the node was set to reboot itself. After a reboot, the node would immediately try to make a connection to the cloud and would not record local readings until it did because the node lacked a real-time clock. Once the node could connect again, it transmitted all locally stored data and errors that were logged in the absence of connectivity. §.§ Results: Readings and Cell Towers For the one-year period and 118 nodes in our network, our dataset included 8,684,756 readings. We linked the readings to 417 unique cell tower locations, 65 with only 1 associated reading, 179 with 500 (0.0057%) or more readings, and 165 with 1000 (0.011%) or more readings. §.§ Results: “Dead Zones" Over the course of our deployment, we identified 11 locations (9.32%) at which the sensor nodes reported consistently low RSS values and ultimately failed to connect, generally within a few days of installation. These 11 locations include 10 from the main deployment beginning in July 2021 and one node location from an earlier pilot program in April 2021. 3 of the 11 locations were selected for deployment by local community groups, a significantly higher proportion than in the overall deployment. Initial mitigation strategies involved moving the nodes to the closest bus shelter, which was often directly across the street. However, we discovered that the nodes had to be moved even further—sometimes multiple blocks away—to establish a connection. We examined a number of factors to determine the potential cause of these “dead zones", including the distance between the node and cellular tower, the number of towers close to a node, evidence of inter-cell interference (ICI) <cit.>, and nearby physical urban structures, including the distance and height of the closest building to the node, and the number, tallest height, mean and median building height within 100, 250, and 500 meters of each node. We found no evidence to suggest that any of these features had an effect on a node's ability to connect when comparing all “dead zones" to all other node locations. When comparing “dead zone" locations to the new locations each of those nodes was moved to, we found a statistically significant difference in the height of the tallest building within 100 meters of the node after relocation versus before, as shown in Fig. <ref>. This indicates that land use and urban form close to the location of stationary sensors are likely factors impacting connectivity, in line with observations from prior work <cit.>. In addition, we investigated the role of line-of-sight interference as a primary factor contributing to “dead zones". We examined the relation between the sensor node, cellular tower, and tallest nearby building for the two nodes found to connect to the same primary cellular tower at their original (“dead zone") and new locations.
We found that one of these node configurations exhibited line-of-sight interference, as shown in Fig. <ref>, as the tallest building (11.9 meters) was clearly in the path between the cellular tower and sensing node. Due to the limited number of examples to examine, there is a need for further investigation in larger datasets; however, this evidence supports the key role of line-of-sight impediments in contributing to “dead zones". Finally, we examine the socioeconomic factors around the node locations without connectivity. We do not find a significant difference in the socioeconomic factors when comparing node locations that can and cannot connect, likely because there are a large number of nodes around the city. However, we do note that many of the dead zone locations are in socioeconomically disadvantaged and majority Black and Latine neighborhoods, as shown in Fig. <ref>a. §.§ Results: Signal Strength As shown in Fig. <ref>, the yearly median signal strength for each node ranged from -61 dBm to -113 dBm, with a network-wide median of -87 dBm. There was no significant difference in the median signal strength for community-selected versus randomly-selected nodes, and we did not identify a statistical relationship between surrounding physical features, such as building height or distance to buildings, and the median signal strength for the sensor node or corresponding cell tower location. As with “dead zones", we found that the node locations with the lowest median signal strength—those below -100 dBm—were nearly all sited in neighborhoods that are socioeconomically disadvantaged and have a higher percentage racial minority population. In fact, only one of the eight locations with a low median signal strength was sited in a majority white neighborhood, as shown in Fig. <ref>b. §.§ Results: Latency We found that over the entire year's worth of data, the minimum latency was 2 seconds, the median latency was 5 seconds, and the interquartile range fell between 4 and 6 seconds (our data allowed only for estimating seconds, and not milliseconds, for latency). When examining the median latency for each sensor node over the course of the study, we found a much tighter distribution than we saw for median signal strength. In fact, the entire interquartile range falls at the same value of 5 seconds. There are only three sensor locations with a median latency greater than that value, shown in Fig. <ref>c, and two of those locations overlap with those that have poor median signal strength, suggesting a correlation between signal strength and latency. We find that only 7.24% of readings have a latency of 10 or more seconds, 1.18% have a latency of 30 or more seconds, and less than 1% (0.88%) have a latency of one minute or longer. Although these are low percentages, we examined the significantly delayed readings to determine if they occur randomly or follow a pattern. We found that the delayed readings do not occur randomly, but rather appeared disproportionately on certain dates, at certain sensor locations, and with certain cellular towers, as seen in Fig. <ref>. Interestingly, the sensor locations with the most delayed readings have no overlap with the locations that have either the lowest median signal strength or the highest median latency. However, when looking at the map of the sensor locations in Fig. <ref>d, we see again that most of these locations are in neighborhoods with a majority Black or Latine population.
We could not identify any temporal or location-based events, such as sporting games, that have previously been associated with cellular network delays and may have caused these significant delays. Coupled with the lack of empirical evidence from the cellular service providers, we are led to conclude that the delays are likely due to carrier-specific issues such as cell tower maintenance. § POWER §.§ Motivation for a Solar-Powered Urban Sensor Network Nodes must be continuously running to collect data over time, yet many outdoor urban spaces are not equipped with accessible wired mains <cit.>. Solar power is the most ubiquitous form of renewable energy for sensor networks, and will remain prevalent in the coming years for the following reasons: 1) Solar panels are relatively inexpensive and easy to install. 2) Solar panels can power sensors that need to operate continuously in remote or hard-to-reach locations where it may be difficult or expensive to run electrical cables or replace batteries. 3) Using solar power eliminates the need for frequent battery replacements, which creates an added burden for cities looking to deploy sensor networks. Thus, we use solar energy to power our sensor network to achieve reliability through continuous power, scalability in allowing for power in locations that do not have outlets, ease of maintenance by limiting battery replacements, and low cost by requiring no new infrastructure. §.§ Materials: Battery, Solar Panel, and Power Usage Each sensing node was outfitted with a rechargeable 2000 mAh lithium polymer battery and a 10×13 cm Voltaic Systems P126 6W solar panel. The solar panel was attached horizontally, in a flat position, to the top of the node's respective bus shelter to maximize solar absorption, maintain security of the panel, and provide ease of installation. To optimize for low power consumption, the microcontroller operated in a duty-cycled mode, consuming as little as 40 µA between measurements. The device's four electrochemical gas sensors consume microwatts of power, while the particulate matter (PM) sensor draws up to 80 mA as it relies on an internal fan to circulate air. Thus, to optimize the overall power usage, we sampled the gases every 60 seconds and sampled the PM and transmitted data every 5 minutes. On average, the device drew 4 mA of current over a 24-hour period, allowing the battery to power the sensing node, including communications, for approximately 15 days at the aforementioned sampling rate. §.§ Methods: Power Saving Strategies In October 2021, we noticed that one of the devices was no longer charging. After sending the local maintenance team to investigate, we discovered that the sun was no longer reaching the solar panel due to the change in the sun's position and the node's location surrounded by skyscrapers. We anticipated that this issue would begin to show up in other nodes as well, so we determined three potential solutions to ensure the network still collected useful data throughout the winter months:
* Set the sampling interval to be longer than five minutes, which would deplete the battery less quickly by running the PM sensor and data transmission less often.
* Implement a power-saving mode to ensure devices only run when they have a certain amount of battery and sleep when they are below that value (a sketch of this rule follows the list).
* Schedule devices to only run at certain times of the day, i.e., for a few hours in the middle of the day when there is sunlight.
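The power-saving option amounts to a hysteresis rule on the battery gauge, as sketched below. The battery-gauge interface is hypothetical and the code is illustrative only; the 15% and 40% thresholds shown are the values the deployment ultimately adopted, as described in the following paragraphs.

```python
PSM_ENTER_FRACTION = 0.15  # enter power-saving mode at or below this battery level
PSM_EXIT_FRACTION = 0.40   # resume normal operation only once recharged above this level

def update_power_state(battery_fraction, in_psm):
    """Hysteresis rule: sleep below the low threshold, resume only above the high one."""
    if not in_psm and battery_fraction <= PSM_ENTER_FRACTION:
        return True    # enter power-saving mode (deep sleep, no sampling or transmission)
    if in_psm and battery_fraction >= PSM_EXIT_FRACTION:
        return False   # exit power-saving mode and resume the duty cycle
    return in_psm      # otherwise keep the current state
```

The gap between the two thresholds prevents a node from oscillating in and out of power-saving mode when the battery level hovers near a single cut-off.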
Naturally, each option comes with its own trade-offs that had to be considered. Sampling less often would provide less temporal coverage, which could cause cities to potentially miss timely notifications from sensors, make it more difficult to identify noisy or anomalous readings through techniques such as moving averages, and introduce calibration errors from datasets with different resolutions. A power-saving mode could result in large time spans with no data, creating difficulty in comparing data from different seasons and potentially resulting in a lack of data needed for calibration. Scheduling devices to only run at certain times would limit data collection to only specific hours of the day, and may not solve the issue if the number of hours is not chosen correctly. Based on the trade-offs and our need for data for sensor calibration, we implemented a power-saving mode to put devices into a deep sleep to avoid depleting the batteries in low- or no-light conditions. Power-saving mode was initiated when a battery's power level fell to 15% or less of its total capacity and turned off when the battery's power level had recharged to at least 40%. §.§ Results: Data Loss due to Power Saving Mode Between the autumn and spring equinox of the year-long study period, 44 devices (37.29%) went into power saving mode (PSM), with most devices entering PSM between January and March. Seven of these devices were at community-selected sites, representing about 16% of the devices in PSM, indicating the community-selected sites were not disproportionately affected. In total, devices in the network spent 119,450,915 seconds—over 33,180 hours, or 1382.5 days—in PSM, resulting in about 398,000 potential sensor readings that were not captured. Most devices entered PSM numerous times, with several entering more than five times during the study period. Thus, many locations would have had adequate sunlight to keep the devices charged throughout the winter months if a larger solar panel had been used or if the devices had better energy harvesting to extend the battery life with the limited charge they received. §.§ Results: Location of Solar Charging Issues As expected, the node locations in downtown Chicago entered PSM for a long duration of the winter due to the high number of very tall buildings in the neighborhood. However, several node locations in neighborhoods outside of downtown Chicago that lack a high density of tall buildings also experienced solar charging issues. In fact, the node location with the second highest amount of time spent in PSM was not in a location near tall buildings, and 8 of the 12 node locations that had the most power-saving hours were outside of the downtown area, as shown in Fig. <ref>f. The figure also shows that they mostly fall in neighborhoods with a majority Black or Latine population. As seen in Fig. <ref>, shadows from trees for large portions of the day could be a potential cause for charging issues in some areas. In addition, ice build-up on solar panels may cause charging issues, but this is difficult to diagnose without visiting every node location while it is in PSM. Thus, further analysis is required to determine the exact cause of charging issues in these locations that obviously lack tall buildings in the vicinity. The important takeaway is that the dynamic physical environment of solar IoT deployments needs to be considered by tools that are currently being developed to estimate solar energy availability using historic data or satellite/map images <cit.>.
§.§ Results: Predicting Solar Charging Issues We used the OSM Buildings data <cit.> and Shadow Accrual Maps tool <cit.> to determine how well we would be able to predict a sensor location having power-saving issues. With the OSM Buildings data, we examined the distance to the closest building, height of the closest building, and mean and median height of buildings within 100, 250, and 500 meters of each node location. For shadows, we used the tool to calculate the amount of time each node location was in shadow on the winter solstice. Using both a logistic regression model for the binary case of power saving or not, and a linear regression model for the amount of time spent in PSM, we found no statistical significance for either the amount of time spent in shadow or any data related to buildings around the node locations, as highlighted for one data point in Fig. <ref>. Upon further examination, we discovered that one of the issues around using crowdsourced and open-source resources is that they are not consistently updated. For example, one sensor node that was indicated to have shadow issues but did not enter PSM likely had a building present when the data were uploaded; as we discovered on Google Maps, that building is no longer there. Likewise, as seen in Fig. <ref>, a node location with no building nearby that entered PSM was likely affected by the presence of a tree near the bus shelter, something not captured by the tools we used, as they focus on buildings. This points to an additional shortcoming of the data available, which focus on buildings and do not account for foliage, hyperlocal snowfall, and other physical phenomena that may impede solar charging. § DISCUSSION §.§ The Potential of LTE-Connected, Solar-Powered Urban Sensor Networks The results show immense promise for LTE-connected urban sensor networks. Most node locations had adequate signal strength to achieve connectivity, and the vast majority of sensor readings were transmitted to the cloud server within five seconds. Furthermore, there were no noticeable issues around connectivity due to temporal features such as weather or traffic patterns. We also had success using LTE to detect errors and perform software updates, including a firmware patch to add the power-saving mode. These findings all point to the potential of LTE in creating reliable, scalable, easily maintainable, and real-time sensing in cities. Solar panels proved to be a reliable energy source for over half of the year-long study, and most devices that experienced charging issues only did so between January and March. Chicago is at a more northern latitude than most of the global population, so we expect that many cities, and especially those in the Global South, would experience fewer solar charging issues. Additional improvements in solar panel efficiency <cit.> and research on smart power management strategies for renewable energy in IoT establish solar charging as a viable powering option. The nodes that were collocated at EPA stations all experienced no charging or connectivity issues, suggesting that placing nodes on rooftops could be a viable solution to improve reliability. However, node placement is highly dependent on the application, and many cities may choose or need to place nodes closer to street level. Future research could include interpolation and machine learning techniques to correlate data from street-level to rooftop nodes to address the technical issues and still collect useful data.
Additionally, research on passive wireless reflectors and relays can find application in routing connectivity from cell towers, around built infrastructure, to end devices. §.§ Implications of Connectivity and Charging Issues Despite the success we had in using 4G LTE-M to transmit data, we discovered issues around “dead zones", delayed readings, and unequal signal strength. The cause of these issues could often not be easily identified, and data sources from AT&T and the FCC indicate widespread support of the LTE network across Chicago, as seen in Fig. <ref>. Thus, the discovery of these issues raises questions on the reliability of LTE networks, especially in cities that do not have as much cellular infrastructure as Chicago. However, we did not identify significant data loss from the connection-related issues, suggesting that LTE-connected sensor networks are likely appropriate for applications that do not rely on instant or near-instant data. For applications that cannot afford to have any delayed data, such as emergency support services, network designers will want to think about building robustness into the system to ensure real-time communication for all readings. Despite the ubiquity of solar panels as the power source for wireless sensor networks, we found that they are not a reliable power source for urban sensor networks in cities that have limited sunlight in the winter months. In addition, urban areas at latitudes closer to the equator will also experience solar charging issues if they have numerous tall buildings blocking the path of the sun. Thus, we need to continue research in alternative charging options, energy harvesting techniques, and battery-less sensors to ensure reliability and scalability in powering urban sensor networks. In our study, we found that cellular connection and solar charging issues are not all localized to areas with tall buildings and may be spread inequitably around a city. Thus, urban sensor network deployments have the potential to exacerbate existing societal inequalities by allowing for networks to be scaled more easily in some neighborhoods than others. In turn, this can increase mistrust between residents and governments <cit.> and drive residents to make assumptions about the distribution of resources and harms based on the physical presence of sensors <cit.>. Thus, to serve people in all communities, sensor network designers should consider working with local service providers, using repeaters, multiple sensors, and other technologies to improve reliability in underserved areas. Furthermore, networking researchers and designers need to focus on equality, and not just quality or area coverage, when building and deploying infrastructure. §.§ Challenges around Data Access Due to the lack of official up-to-date building information, we relied on open crowdsourced data to determine the location and height of buildings in the city. Similarly, because the location of cellular towers is not publicly available, we relied on data from OpenCellID. As with many open crowdsourced datasets, these data were not completely accurate or up-to-date <cit.>. This was especially clear when examining FCC carrier connectivity information, as the entire city of Chicago seemingly has coverage (Fig. <ref>), yet we found that was not the case, likely because the data are reported by carriers <cit.>. We also discovered data accuracy issues in shadow prediction using the Shadow Accrual Maps <cit.>.
Other crowdsourced data, such as nPerf, presented a different issue: incompleteness, as seen in Fig. <ref>. Particularly in Chicago, there is significantly more data available in the northern part of the city and along highways, likely attributable to the increased usage of crowdsourced platforms by white people and high-income earners <cit.>. Thus, relying on crowdsourced data makes it difficult to predict locations with solar charging or connectivity issues that may arise due to building height and other urban interferences, a task made more difficult by the social inequities that exist in many cities and are exacerbated in crowdsourced technologies. The difficulty in working with open crowdsourced data points to a need for new methods to obtain up-to-date urban data. For example, researchers can help develop ways to obtain building height or cell tower location from satellite imagery or Google Maps. We may also look to develop easier ways for cities to create their own databases that are kept up-to-date, or develop better community science incentives to keep crowdsourced data sources such as OSM Buildings, OpenCellID, and nPerf up-to-date and to reach new users who do not currently contribute to these datasets. §.§ Limitations of this Study We acknowledge that this work is limited, as it focuses on a single-city case study. Although we believe that Chicago is representative of many other large cities, we lack the empirical evidence needed to “assess the implications and potentially transformative consequences" of how similar smart city networks would emerge in different urban contexts <cit.>. An additional limitation is that we use weather data from US government agencies and there are only three weather stations in the Chicago area. Although we also had temperature and humidity readings at each node, these sensors were located inside the node enclosures, and thus did not always provide accurate external measurements. Thus, our weather-related analyses are not hyperlocalized to most of the sensors, and it is possible that there are hyperlocal weather correlations, such as urban heat islands, that affected sensor connectivity. § CONCLUSION In this work, we present the challenges and opportunities from a year-long, city-wide urban sensor network deployment. The network was created based on five specific criteria of success that we identified from past work. We provide an in-depth analysis of deployment data from the perspectives of cellular connectivity and solar energy harvesting, which are the two key features that help meet the success criteria. In addition, we highlight inherent challenges with open data sources available for root-cause analysis of failed nodes, and identify strengths and weaknesses to define future research directions that will support large-scale, real-time energy harvesting deployments in achieving reliable, equitable smart city networks.
http://arxiv.org/abs/2307.08838v1
20230714114703
Dynamic Object Tracking for Quadruped Manipulator with Spherical Image-Based Approach
[ "Tianlin Zhang", "Sikai Guo", "Xiaogang Xiong", "Wanlei Li", "Zezheng Qi", "Yunjiang Lou" ]
cs.RO
[ "cs.RO" ]
Exactly estimating and tracking the motion of surrounding dynamic objects is one of the important tasks for the autonomy of a quadruped manipulator. However, with only an onboard RGB camera, it is still challenging for a quadruped manipulator to track the motion of a dynamic object moving with unknown and changing velocities. To address this problem, this manuscript proposes a novel image-based visual servoing (IBVS) approach consisting of three elements: a spherical projection model, a robust super-twisting observer, and a model predictive controller (MPC). The spherical projection model decouples the visual error of the dynamic target into linear and angular components. Then, with the presence of the visual error, the robustness of the observer is exploited to estimate the unknown and changing velocities of the dynamic target without depth estimation. Finally, the estimated velocity is fed into the model predictive controller (MPC) to generate joint torques for the quadruped manipulator to track the motion of the dynamic target. The proposed approach is validated through hardware experiments, and the experimental results illustrate the approach's effectiveness in improving the autonomy of the quadruped manipulator. § INTRODUCTION The quadruped manipulator is a multi-functional platform comprising a mobile quadruped and a manipulator, as shown in Fig. <ref>. It has not only the agility of the quadruped but also the interactivity of the manipulator, and thus has great potential applications, such as disaster rescue <cit.> and anti-terrorist operations <cit.>. To exploit these advantages, enabling the quadruped manipulator to track the motion of surrounding dynamic objects is an important task for its autonomy.
In previous studies <cit.>, external humans or marker boards (e.g., ArUco) were needed to provide the motion expectations to the quadruped manipulator. They assumed that the motion state of the target to be tracked was known and static, which limits the scope of the applications. More notable contributions to the autonomy of the quadruped manipulator can be found in the works <cit.>, where the target state was estimated with onboard RGB-D cameras, and motion expectations of the robot were automatically generated through trajectory optimization. However, they only solved the problem of grasping static objects and had to use the depth of the target, which is often computationally expensive to obtain or inaccurate due to environmental changes in practice <cit.>. Dynamic target tracking, i.e., tracking an object with unknown and changing speeds, is still a difficult task for the quadruped manipulator. Visual servoing is one of the promising methods that can enhance the autonomy of the quadruped manipulator; it has been developed for a long time and has been successfully used in various robots, such as drones <cit.>, wheeled robots <cit.>, and quadrupeds <cit.>. However, directly applying conventional visual servoing to the quadruped manipulator raises at least two challenges. The first challenge is to deal with the motion coupling between locomotion and manipulation, which requires the visual servoing to take the full system state into account. The work in <cit.> considered the locomotion and manipulation as two separate subsystems. However, without considering the motion coupling, it is difficult to exploit the coordination of locomotion and manipulation for the quadruped manipulator, limiting the robot's motion agility. The second challenge is caused by the under-actuation of the quadruped platform. When the robot operates under a dynamic gait (e.g., trot), the roll and pitch angles of the quadruped platform are uncontrollable due to the under-actuation, which seriously affects the operation accuracy of the manipulator's end-effector. The above two challenges of visual servoing were also studied for aerial robots in the literature. For example, the work in <cit.> employed image-based visual servoing (IBVS) to track a static target but did not consider the coupling of the manipulator and the quadrotor. In contrast, Zhong et al. <cit.> considered the coupling between the manipulator and the aerial drone but only tracked a static target with the aerial manipulator. To track a dynamic object, the optical flow technique was adopted in <cit.> to estimate the velocity of the dynamic target. The problem is that, when the target depth is unknown, optical flow cannot estimate the target's velocity, and its reliance on the constant-brightness assumption makes it unreliable. To track a dynamic target with the quadruped manipulator without depth estimation, this manuscript proposes a novel image-based visual servoing (IBVS) approach that consists of three elements: a spherical projection model, a robust super-twisting observer (STO), and a model predictive controller (MPC). First, by simultaneously projecting the images of the dynamic target and the manipulator onto the spherical image plane, we obtain an equation for the visual error with a passivity-like property for the quadruped manipulator.
The passivity-like property of the visual error allows us to separately control the position and attitude of the under-actuated quadruped manipulator, which avoids the influence of the quadruped platform's attitude on error convergence <cit.>. Then, with the spherical projection model, the robustness of the multi-variable STO <cit.> is exploited to estimate the target's velocity without depth estimation in spite of the presence of visual error. Finally, to track the desired motion, by adopting the work in <cit.>, a model predictive controller (MPC) for the quadruped manipulator based on a single-rigid-body model is used to generate joint torques. This work offers the following contributions:
* We propose a new spherical image-based approach to enable the quadruped manipulator to exactly estimate the unknown velocity of a dynamic object, without the need for marker boards or depth estimation.
* The passivity-like property of the visual error is exploited for the quadruped manipulator to eliminate the influences of the under-actuated quadruped platform on the visual observation, which enables the manipulator to track the dynamic object with simultaneous locomotion.
* The proposed solution for dynamic object tracking with the quadruped manipulator has been verified through simulations and hardware experiments. To the best of the authors' knowledge, this is the first work to solve the problem of tracking a dynamic object with a quadruped manipulator.
§ PROBLEM FORMULATION Let us consider the visual tracking system that includes a quadruped manipulator with a monocular camera and a dynamic object with unknown changing velocity, as shown in Fig. <ref>, where { I:O_I - x_Iy_Iz_I} represents the right-hand inertial coordinate frame, z_I denotes the inverse direction of gravity, and { B:O_b - x_by_bz_b} represents the body frame of the quadruped platform with O_b located at the centroid of the quadruped platform. Let { S:O_s - x_sy_sz_s} denote the base frame of the manipulator, which is attached to the top of the quadruped platform, and { E:O_e - x_ey_ez_e} represent the end-effector frame of the manipulator. Then, { C:O_c - x_cy_cz_c} denotes the camera frame, where z_c is along the optical axis, and S^2 is the sphere normalized imaging plane. The target image features and the manipulator's end-effector will be projected on S^2 to describe the motion. Let p_B∈ℝ^3 represent the position of the body frame B w.r.t. the inertial coordinate frame I. The orientation of the body frame B w.r.t. the inertial coordinate frame I is parameterized using ZYX-Euler angles Φ_B. Let q_j∈ℝ^18 be the limb joint positions. Therefore, the generalized coordinate vector q and the generalized velocity vector v are written as q = [ [ p_B; Φ_B; q_j ]] ∈ SE(3) ×ℝ^18, v = [ [ v_B; ω_B; q̇_j ]] ∈ℝ^24. Let { T:O_T - x_Ty_Tz_T} denote the target frame, where the target frame's attitude is aligned with the inertial coordinate frame. Let (v_T,ω_T)^⊤ be the unknown linear and angular velocities of the target frame, respectively. To simplify the controller, the following assumptions are made related to the unknown target: The target is translational such that v_T ≠ 0 and v_T is unknown. Its orientation is approximately constant such that ω_T = 0, and the linear acceleration v̇_T is bounded.
Specifically, with the quadruped manipulator equipped with only an onboard RGB camera, the goal is first to exactly estimate the unknown and changing velocities of the target without depth estimation or any external sensors, and then to design a controller for the quadruped manipulator so that the origin of the manipulator's end-effector O_e can coincide with the target origin O_T. § MOTION TRACKING OF QUADRUPED MANIPULATOR §.§ Spherical Projection Model Let Q_oi, i = 1, ⋯ ,m be the feature points located on the target, expressed in the camera frame C, as shown in Fig. <ref>. To build an equation of visual error with a passivity-like property, these feature points are projected onto the sphere S^2. From <cit.><cit.>, the spherical projection of the visible image points can be numerically computed as: s_oi = q_oi/|q_oi| = Q_oi/|Q_oi| where s_oi∈ℝ^3 is the vector describing target feature points being projected on the sphere 𝒮^2, | · | is the Euclidean distance operator, and q_oi is the point expressed in the normalized focal length imaging plane. The relationship between s_oi, q_oi and Q_oi is shown in Fig. <ref>. Note that Q_oi has the same linear velocity as the target, and thus, based on the method in <cit.>, the kinematic equation of s_oi can be written as ṡ_oi = - [ Ω_c^c]_×s_oi - π_s_oi/r( Q_oi)(v_c^c - v_T^c) where π_s_oi = ( 𝕀_3 × 3 - s_ois_oi^⊤) ∈ℝ^3 × 3, v_c^c and Ω_c^c are the linear and angular velocities of the camera w.r.t. the camera frame C, respectively, [ · ]_× is the skew-matrix operator, r( Q_oi) = |Q_oi| is the target's depth, and v_T^c is the target's linear velocity expressed in the camera frame C. Note that r( Q_oi) and v_T^c are both unavailable. As shown in (<ref>), the rotational and translational motion of the camera is decoupled through the technique of spherical projection. Therefore, a controller that only controls the translational motion can be designed to avoid the under-actuation of the quadruped platform. For m target points, the normalized spherical centroid of s_oi and its derivative w.r.t. time are as follows <cit.>: h_o = 1/m∑_i = 1^m s_oi, L_o = 1/m∑_i = 1^m π_s_oi/| Q_oi|, ḣ_o = - [ Ω_c^c]_×h_o - L_o(v_c^c - v_T^c) where L_o ∈ℝ^3 × 3 is a positive definite matrix when there are at least two feature points, and h_o can represent the target center. In order to describe the motion of the manipulator on S^2, some virtual points Q_ti are captured around the center of the manipulator's end-effector frame E, expressed in the camera frame C, as shown in Fig. <ref>. Therefore, Q_ti = O_e + Δρ_i, where O_e is the origin position of the manipulator's end-effector frame E and Δρ_i is the offset between Q_ti and O_e, which can be specified manually. To build an equation of visual error with a passivity-like property, a virtual plane of the camera is created <cit.>, as shown in Fig. <ref>. Let { C':O_c' - x_c'y_c'z_c'} denote the virtual plane frame of the camera, which means that roll and pitch angles of the quadruped platform are assumed to be zero during the motion of the system. Let Q'_ti denote the redefined virtual target point Q_ti in the camera's virtual plane frame C'.
Therefore, Q'_ti can be calculated by Q'_ti = Q_ti + p_δ ^c = Q_ti + _c'^cR( p_Q'_ti^c' - p_Q_ti^c') = Q_ti + _c'^cR( ( p_O_c^c' + Q_ti) - ( p_O_c^c' + _c^c'RQ_ti)) = _c^c'R^ ⊤Q_ti where p_δ ^c is the offset vector caused by the quadruped rotation, expressed in the camera frame C, _c^c'R represents the rotation matrix of the camera frame C w.r.t the camera's virtual plane frame C', p_Q'_ti^c', p_Q_ti^c', and p_O_c^c' are the offset vector of Q_ti, Q'_ti and the origin of the camera frame C, expressed in the camera's virtual plane frame C', respectively. The relationship can be seen in Fig. <ref>. Taking the derivative of Q'_ti w.r.t time, we can get Q̇'_ti = _c^c'R^ ⊤Q̇_ti - [ Ω_c^c]_ ×Q'_ti. The point Q'_ti projected onto sphere plane S^2 is expressed as s'_ti = Q'_ti/| Q'_ti|. Therefore, the derivative of s'_ti w.r.t time is ṡ'_ti = π_s'_ti/r( Q'_ti)_c^c'R^ ⊤Q̇_ti - [ Ω_c^c]_ ×s'_ti where π_s'_ti = ( 𝕀_3 × 3 - s'_tis'_ti^ ⊤). From manipulator's kinematics, Q̇_ti can be calculated as Q̇_ti = _s^cR( [ 𝕀_3 × 3 -[ _e^sRΔρ_i]_ × ])( [ _e^sṫ; _e^sw ]) =J_tiJ_eSv where _e^sṫ and _e^sw are linear and angular velocity of the manipulator's end-effector frame E w.r.t the manipulator's base frame S, respectively, _e^sR represents the rotation matrix of the manipulator's end-effector frame E w.r.t the manipulator's base frame S, _s^cR represents the rotation matrix of the manipulator's base frame S w.r.t the camera frame C, J_ti = _s^cR( [ 𝕀_3 × 3 -[ _e^sRΔρ_i]_ × ]), and J_e is the jacobian matrix of the manipulator. The selection matrix S = [ [ 0_6 × 18 𝕀_6 × 6 ]] selects manipulator's joint to activate. Therefore, the kinematics of s'_ti can be rewritten as ṡ'_ti = π_s'_ti/r( Q'_ti)_c^c'R^ ⊤J_tiJ_eSv - [ Ω_c^c]_ ×s'_ti. Similar to the h_o, using the centroid technology for m redefined virtual points, we have h_t = 1/m∑_i = 1^m s'_ti, L_t = 1/m∑_i = 1^m π_s^'_ti/| Q'_ti | ḣ_t = - [ Ω_c^c]_ ×h_t + L_t_c^c'R^ ⊤J_tJ_eSv where L_t∈ℝ^3 × 3 and J_t = 1/m∑_i = 1^m J_ti. Notes that | Q'_ti | can be calculated using the kinematics of the manipulator, and thus L_t is a known parameter. With the target image and virtual point centroid position information, the visual error can be built as e = h_o - h_t. Differentiating the error e w.r.t time and combining (<ref>) and (<ref>), we have ė = - [ Ω_c^c]_ ×e - L_o(v_c^c - v_T^c) - L_t_c^c'R^ ⊤J_tJ_eSv. The equation satisfies the passive-like property <cit.>, which separates linear velocity from angular velocity, and can be used to avoid controlling the attitude of the quadruped platform. If L_o can be replaced by L_t and v_T^c can be estimated by a robust observer, then according to (<ref>), the control input v that is independent of the target depth can be designed. §.§ Multivariable Super-Twisting Velocity Observer To track the dynamic target, the linear velocity of the target needs to be observed. However, since the target depth is unknown, L_o is an unknown parameter, which brings difficulties to the observation. As mentioned above, when the manipulator successfully grabs the target, L_o will converge to L_t. Meanwhile, L_t can be obtained from the kinematics of the manipulator and is a known parameter. Therefore, Let us assume that the gain 0 < L_o < L_t is always satisfied with some known constant L_t. 
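Before turning to the velocity observer, the image-space quantities introduced above can be summarized in a short numerical sketch (Python/NumPy; function and variable names are ours and purely illustrative). It computes the spherical projections, the centroids h_o and h_t, the matrices L_o and L_t, and the visual error e = h_o - h_t. Note that in practice L_o cannot be evaluated because the target depth |Q_oi| is unknown, which is precisely why the observer designed next is needed.

import numpy as np

def spherical_centroid(points_cam):
    # points_cam: (m, 3) array of 3-D points expressed in the (virtual) camera frame.
    # Returns the spherical centroid h = (1/m) sum_i s_i and L = (1/m) sum_i pi_{s_i}/|Q_i|.
    s = points_cam / np.linalg.norm(points_cam, axis=1, keepdims=True)   # projection onto S^2
    h = s.mean(axis=0)
    L = np.zeros((3, 3))
    for s_i, Q_i in zip(s, points_cam):
        L += (np.eye(3) - np.outer(s_i, s_i)) / np.linalg.norm(Q_i)      # pi_{s_i} = I - s_i s_i^T
    return h, L / len(points_cam)

def visual_error(target_pts_cam, virtual_pts_virtual_plane):
    # e = h_o - h_t; h_t and L_t come from the virtual points Q'_ti, whose depths are
    # known from the manipulator kinematics, so L_t is computable online.
    h_o, _L_o = spherical_centroid(target_pts_cam)            # L_o needs the unknown target depth
    h_t, L_t = spherical_centroid(virtual_pts_virtual_plane)
    return h_o - h_t, L_t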
To exactly estimate the unknown velocity v_T^c, let us design a robust observer based on the robust super-twisting observer (STO)<cit.>: ĥ̇_o = - [ Ω_c^c]_ ×h_o - L_tv_c^c + k_1φ _1(e_o)e_o + L_ty ẏ = k_2φ _2(e_o)e_o, φ _1(e_o) = k_3| e_o|^ - p + k_4 φ _2(e_o) = ( k_3(1 - p)| e_o|^ - p + k_4)φ _1(e_o) where e_o = h_o - ĥ_o is the estimate error, 0 < p ≤ 0.5, and k_i > 0,i = 1,2,3,4 are the gains. With proper gain k_i and the assumption 0 < L_o < L_t, the variable y is an exact estimation of v_T^c within finite time. §.§ Model Predictive Control Based on the error dynamic (<ref>), the reference trajectory that guarantees error convergence can be generated as v_B^d = ^I_cR(K_be + v_T^c) + ^I_BR[ ^B_ct]_ ×ω_B q̇_arm^d = K_a(L_t_c^c'R^ ⊤J_tJ_e)^†e where e is the visual error represented in (<ref>), K_b, K_a are diagonal positive gain matrix, ^B_ct is the offset vector between the camera frame C and the body frame B, q̇_arm is the block vector of v that represents the joint velocity of the manipulator, and ( ·)^† is the pseudo-inverse operator. Because L_o and L_t are positive definite matrices, (<ref>) can make the visual error converge. To simplify the controller, the desired angular velocity of the quadruped platform keeps as zero. For the attitude control, we can design an independent control law by constructing a visual feature only related to the attitude. For example, a visual feature based on image moment is constructed in <cit.>. With the desired linear and angular velocity, the desired position and Euler angles can be obtained by integrating. Therefore, the reference trajectory of the quadruped platform is x^ref = [ [ Φ^d_B^ ⊤ p^d_B^ ⊤ ω^d_B^ ⊤ v^d_B^ ⊤ ]]^ ⊤. Following the work in <cit.>, here, a dynamics model of single rigid body is used to generate ground reaction forces (GRFs) of the quadruped manipulator. Since the mass of the legs and manipulator only accounts for 20% of the total mass in our robot, we ignore the motion of the legs and manipulator when calculating GRFs. The difference from <cit.> is that the mass of the system is the sum of the masses of the quadruped platform and manipulator, and the inertia of the system is the quadruped base's inertia adding the manipulator nominal inertia. Therefore, GRFs can be calculated from MPC. The detailed construction and solution of MPC can refer to <cit.>. The controller of legs uses a feedback law of PD control compensated with one feedforward term to compute joint torques: τ_leg,i = J_i^ ⊤[ K_p( _Bp_i,ref - _Bp_i ) + K_d(_Bv_i,ref - _Bv_i) ]+ τ_i,ff where, _ Bp_i,_ Bv_i∈ℝ^3 are the position and velocity of the i-th foot, generated by Raibert heuristic<cit.>, J_i is the foot Jacobian, K_p, K_d are diagonal positive matrix, and τ_i,ff is the feedforward torque. For stance legs, the feedforward torque can be calculated as τ_i,ff = J_i^ ⊤_B^IR^ ⊤f_i where _B^IR represents the rotation matrix of the body frame B w.r.t the inertial coordinate frame I, and f_i is the GRFs calculated by MPC. For swing legs, the feedforward torque can be calculated as τ_i,ff = J_i^ ⊤Λ_i( _ Ba_i,ref - J̇_iq̇_leg,i) + n_leg,i where Λ_i ∈ℝ^3 × 3 is the operational space inertia matrix, _Ba_i,ref∈ℝ^3 is the reference acceleration in the body frame, q̇_leg,i∈ℝ^3 is the vector of leg joint velocities, n_leg,i is the nonlinear effects (e.g., Coriolis, centrifugal and gravitational terms). In manipulator control, we follow the work <cit.> to consider the coupling of the manipulator to the quadruped platform. 
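Referring back to the super-twisting observer above, a forward-Euler discretization gives a minimal implementation sketch (Python/NumPy). The gains are those later reported for the simulations; how Ω_c^c, v_c^c, h_o, and L_t are obtained is assumed to be provided by the perception and kinematics modules, and the names are ours.

import numpy as np

def skew(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def sto_step(h_o, h_o_hat, y, Omega_c, v_c, L_t, dt,
             p=0.4, k1=10.0, k2=100.0, k3=0.05, k4=0.05):
    # One Euler step of the multivariable super-twisting observer:
    #   d(h_o_hat)/dt = -[Omega_c]_x h_o - L_t v_c + k1 phi1(e_o) e_o + L_t y
    #   dy/dt         =  k2 phi2(e_o) e_o
    # Under 0 < L_o < L_t and suitable gains, y converges to v_T^c in finite time.
    e_o = h_o - h_o_hat
    ne = np.linalg.norm(e_o) + 1e-12                # regularize |e_o|^{-p} near zero
    phi1 = k3 * ne**(-p) + k4
    phi2 = (k3 * (1.0 - p) * ne**(-p) + k4) * phi1
    h_o_hat_dot = -skew(Omega_c) @ h_o - L_t @ v_c + k1 * phi1 * e_o + L_t @ y
    y_dot = k2 * phi2 * e_o
    return h_o_hat + dt * h_o_hat_dot, y + dt * y_dot   # y is the estimate of v_T^c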
The difference from <cit.> is that we ignore the motion of the legs, and only consider the effect of quadruped base motion on the manipulator. From <cit.>, we can get M^flq̈_arm + n^fl = τ_arm where M^fl = M_arm - F^T( M_B)^ - 1F, 𝐧^fl = n_arm - F^T( M_B)^ - 1n_B, M_B is the rigid body inertia of the quadruped's base, F is the block matrix that encodes inertial coupling between the manipulator and the base, M_arm is the inertia of the manipulator, and n is the nonlinear effects (e.g., Coriolis, centrifugal and gravitational terms). Therefore, the control law of manipulator can be designed as: q̈_arm = q̈_arm^d + K_d_armė_arm + K_p_arme_arm where q̈^d_arm is obtained by differentiating q̇_arm^d, K_d_arm, K_p_arm are diagonal positive matrix, and e_arm, ė_arm are the tracking error between the actual and the desired joint positions/velocites, respectively. § EXPERIMENTS Various validations have been conducted both in simulation and experiments on the quadruped manipulator, as shown in Fig. <ref>. The quadruped manipulator consists of a torque-controllable quadrupedal robot, Aliengo<cit.>, a 6-DOF manipulator, Kinova gen2<cit.>, and the realsense D435i<cit.> RGB-D camera from Intel. The manipulator is lightweight (4.4 kg), and allows for torque control of all six actuators. It should be noted that only the RGB images from the RGB-D camera have been used in both simulations and experiments. The MPC controller (Sec. <ref>) runs on the user's computer (Intel Core i7-11700F@ 2.50GHz), and relies on a state estimator running at 400Hz, as shown in Fig. <ref>. The state estimator used in the controller is based on the work in <cit.> to get an estimate of the base pose, linear and angular velocity by fusing measurements from the motors' encodes and IMU. We use the open-source Pinocchio <cit.> to generate the model of the kinematics and dynamics of the robot. The MPC (Sec. <ref>) is solved by QPOASES <cit.>, and runs at a frequency of approximately 100Hz. All of the examples discussed in this paper are supported by the video submission[Available at <https://youtu.be/Tep_d-BOPwo>]. §.§ Simulation Results for Dynamic Target Tracking The primary purpose of this experiment is to demonstrate the effectiveness of the proposed approach. The proposed approach was validated in a simulator developed in the Gazebo environment <cit.>. The quadruped was under the trot gait. Feature points ware provided by a visual markers' plane, and the target was located 0.15m above the visual markers' plane (see the attached video). Opencv<cit.> was used to detect target feature points q_o i(<ref>). The target was set to unknown constant and changing velocity, respectively. The trajectory of the target is set as a straight line or an S-shaped curve to verify the effectiveness in different directions. The robot's task goal was to track the dynamic tracking point accurately. The parameters of STO (<ref>) are p=0.4, k_1=10, k_2=100, k_3=0.05, k_4=0.05. At the same time, an IBVS based on spherical projection (without STO)<cit.> was used as the comparison. §.§.§ Straight line with unknown velocity Firstly, the robot tracked the target with a straight line motion. The target's velocity was set to 0.3m/s and 0.5m/s for uniform motion, respectively. The tracking results are shown in Fig. <ref> and Fig. <ref>. In Fig. <ref>, v_x is the target's velocity expressed in the inertial coordinate frame I, and (v̂_x, v̂_y, v̂_z) is the result of the STO (<ref>). It can be observed from Fig. 
<ref> that our approach can accurately estimate the velocity of the target. Fig. <ref> shows the tracking error between the manipulator's end-effector and the target. It shows that, with our approach, the robot can accurately track the target despite its unknown velocity, while the method in <cit.> fails. The attached video shows the same result when the target velocity was set to 0.5m/s. The method in <cit.> failed to track because the unknown target velocity made the visual error diverge, eventually causing the target to disappear from the camera's field of view. These results illustrate the efficiency of the proposed approach for tracking the dynamic object. §.§.§ Straight line with changing velocity Then, the target was set to move along a straight line with a changing velocity. The acceleration was set to 0.015m/s^2 and 0.03m/s^2, respectively, and the maximum velocity was set to 0.3m/s. The results of tracking the target moving with 0.03m/s^2 are shown in Fig. <ref>. From Fig. <ref>, we can see that the robot can accurately estimate the changing velocity of the target. §.§.§ S-shaped trajectory Based on the straight-line motions, an S-shaped motion was added to further demonstrate the effectiveness of our approach in different directions. The target's velocity in the x direction was set to 0.1m/s, the acceleration in the y direction was set to 0.02 m/s^2, and the maximum velocity in the y direction was set to 0.1m/s. The results are shown in Fig. <ref>, where Fig. <ref> shows the 2-D trajectories of the target and the manipulator's end-effector with different tracking approaches. Fig. <ref> shows the tracking error between the manipulator's end-effector and the target. Fig. <ref> shows the estimated velocities. These results show that, by effectively estimating the unknown velocities of the dynamic target through the proposed velocity observer, the robot can track the S-shaped trajectory of the target with a smaller error. §.§ Real World Results for Grasping Dynamic Target To illustrate the efficiency of our approach in tracking various targets in the real world, we combined it with a 2D detection network (YOLO <cit.>) to grasp a household object (a teddy bear). The target's velocity was set to 0m/s and 0.1m/s, respectively (see the attached video). The robot's task is to grasp the dynamic target moving with these unknown velocities. In the task, the feature points q_oi (<ref>) were built from the corner points of the target bounding box obtained by YOLO. Then, with the formulas (<ref>), (<ref>), (<ref>) and the STO (<ref>), the reference trajectory can be generated by (<ref>). Finally, the robot is controlled by the MPC (Sec. <ref>). When the target is static, the average grasping error is about 0.03m and the grasping success rate is about 96%, while when the target is moving at 0.1m/s, the average grasping error is about 0.06m and the grasping success rate is 82%. The quantitative measurements were based on 50 experiments, where the grasping error is the average of the Euclidean distances between the end-effector and the center of the target when the quadruped manipulator successfully grasps. The results show that our approach has a high grasping success rate and low grasping error for different velocities of the dynamic target. The attached video shows that our approach produces coordinated motion, where the quadruped constantly adjusts its position to help the manipulator successfully grab the target. Some snapshots in Fig.
<ref> show the results at various moments when the dynamic object moves at 0.1m/s, and the detailed results are shown in Fig. <ref>. To avoid the manipulator running into a singularity when the target is too far away, the manipulator is only controlled when the target is in the workspace of the manipulator. Thus, the manipulator is controlled after 10s, as shown in Fig. <ref>. q̇ is the joint velocity of the manipulator. Fig. <ref> and Fig. <ref> show the angular velocity of the quadruped platform and the visual error e, respectively. We can see that although the angular velocity of the quadruped platform keeps changing, it does not affect the convergence of the visual error. The results illustrate the effectiveness of the passive-like visual error equation (<ref>) in our approach. Fig. <ref> shows the quadruped platform's linear velocity in the inertial coordinate frame I. The results show that the velocity eventually converges to approximately the target's velocity (0.1, 0.0, 0.0)m/s, which is estimated by the STO (<ref>). Fig. <ref> and Fig. <ref> show the evolution of the target and virtual centroid points (h_o, h_t) in the spherical image space. The evolution of the target centroid h_o is represented by a red dotted line, and the virtual centroid h_t is represented by a blue solid line. From the figure, the target centroid h_o and the virtual centroid h_t eventually coincide. Since h_o and h_t represent the center of the target and the manipulator's end-effector, respectively, when h_o coincides with h_t, the origin of the manipulator's end-effector O_e coincides with the target origin O_T in Cartesian space. The results demonstrate the effectiveness and robustness of our approach for dynamic target grasping and its value in improving the autonomy of the quadruped manipulator. § CONCLUSIONS This paper proposed a novel spherical image-based approach that enables the quadruped manipulator to track a dynamic object moving with an unknown velocity. In contrast to conventional methods, the new approach can exactly estimate the unknown velocity using only an onboard RGB camera, without requiring any marker board or depth estimation. Moreover, the passive-like property of the visual error is exploited to eliminate the influence of the under-actuated angular velocity of the quadruped platform, which enables the manipulator to track the dynamic object with simultaneous locomotion. The experiments demonstrate that our approach can robustly estimate the target's unknown constant or changing velocity and track dynamic targets. Combined with 2D detection methods, our method can be used in various application scenarios, which greatly enhances the autonomy of the quadruped manipulator. Future work may extend the application of the proposed IBVS to tracking targets with angular velocity.
http://arxiv.org/abs/2307.04948v1
20230711003220
Viscous tweezers: controlling particles with viscosity
[ "Tali Khain", "Michel Fruchart", "Vincenzo Vitelli" ]
cond-mat.soft
[ "cond-mat.soft", "physics.flu-dyn" ]
James Franck Institute, The University of Chicago, Chicago, IL 60637, USA James Franck Institute, The University of Chicago, Chicago, IL 60637, USA Gulliver, UMR CNRS 7083, ESPCI Paris PSL, 75005 Paris, France James Franck Institute, The University of Chicago, Chicago, IL 60637, USA Kadanoff Center for Theoretical Physics, The University of Chicago, Chicago, IL 60637, USA Control of particle motion is generally achieved by applying an external field that acts directly on each particle. Here, we propose a global way to manipulate the motion of a particle by dynamically changing the properties of the fluid in which it is immersed. We exemplify this principle by considering a small particle sinking in an anisotropic fluid whose viscosity depends on the shear axis. In the Stokes regime, the motion of an immersed object is fully determined by the viscosity of the fluid through the mobility matrix, which we explicitly compute for a pushpin-shaped particle. Rather than falling upright under the force of gravity, as in an isotropic fluid, the pushpin tilts to the side, sedimenting at an angle determined by the viscosity anisotropy axis. By changing this axis, we demonstate control over the pushpin orientation as it sinks, even in the presence of noise, using a closed feedback loop. This strategy to control particle motion, that we dub viscous tweezers, could be experimentally realized in a fluid comprised of elongated molecules by suitably changing their global orientation. Viscous tweezers: controlling particles with viscosity Vincenzo Vitelli October 2023 ====================================================== The control of small particles in a fluid is crucial in applications including sedimentation <cit.>, swimming <cit.>, active matter <cit.>, crystal growth <cit.>, or cell manipulation and drug delivery <cit.>. To achieve control on the state of a single particle, it is common to apply external fields that act directly on the particle by enacting a force or a torque <cit.>. Examples include magnetic  <cit.>, electric <cit.>, optical <cit.>, or acoustic <cit.> forces as well as surface Faraday waves <cit.>. In this Letter, we take an alternative route towards particle control. Instead of acting directly on the particle, we act on the fluid. We show that modulating fluid properties such as viscosity implements a way to indirectly control the motion or orientation of the immersed object. This method of object manipulation is independent of the nature of the particle and does not impose a predetermined flow in the fluid. The basic requirement, tunable anisotropic viscosities, is present in systems ranging from fluids under electric or magnetic fields <cit.> and electron fluids <cit.> to so-called viscosity metamaterials, complex fluids whose viscosity can be controlled by applying acoustic perturbations <cit.>. Stokes flow and mobility.—Let us consider a small rigid particle immersed in a viscous incompressible fluid. In this low Reynolds number regime, the fluid flow is well-described by the Stokes equation, ∂_t v_i = ∂_j σ_ij + f_i with σ_ij = -δ_ij P + η_ijkℓ∂_ℓ v_k along with the incompressibility condition ∂_i v_i = 0. Here, v_i is the fluid velocity, P the pressure, σ_ij the stress tensor, f_i an external force, and η_ijkℓ the viscosity tensor. 
The overdamped motion of a particle in a fluid is described by the linear equation [ V; Ω ] = 𝕄(η) [ F; τ ], where the 6 × 6 mobility matrix 𝕄 relates the force F and torque τ applied to the particle with its velocity V and angular velocity Ω <cit.>. The form of 𝕄 depends on both the geometry of the object and the viscosity tensor η of the fluid. The position and orientation of the particle can then be obtained by integrating the velocity and angular velocity. We focus here on the orientation of a sedimenting particle that sinks under the force due to gravity F = F ẑ. Note that here we apply no torque (τ = 0), which is the most common way of changing the orientation of the particle. Equation (<ref>) then reduces to Ω = T(η) F in which T is a sub-block of 𝕄, see Supplemental Material (SM). As the force and the object are given, our only handle on the orientation dynamics is the viscosity tensor η in Eq. (<ref>). Viscosity of an anisotropic fluid.—In familiar fluids such as water, this viscosity tensor reduces to one scalar coefficient, the shear viscosity μ. When the fluid is anisotropic (for example, a fluid consisting of elongated molecules that are aligned to an externally applied magnetic field, B, as in Fig. <ref>a), the shear viscosity of the fluid may not be the same in all directions, but depends on the shear axis. Assuming that the viscosity tensor is invariant under rotations about the anisotropy (alignment) axis, the most general equation of motion can contain three shear viscosities (see SM). The shear stress and strain rate deformations corresponding to these viscosities are visualized in Fig. <ref>b, for an anisotropy axis chosen along the z direction. In a generic fluid, the magnitude of the anisotropy could depend on both B and on the microscopic details of the system. Here, we separate out the orientation and magnitude: B̂ controls the direction of the anisotropy axis and ϵ sets the strength of the anisotropy. In this case, the Stokes equation is -1/μ∇ P + Δv + ϵ𝒟(B̂)v = 0, where 𝒟 is a matrix of second derivatives. As an example, consider a weakly anisotropic fluid with shear viscosities μ_1 = μ, μ_2 = μ(1 + ϵ), and μ_3 = μ(1 + 4/3ϵ) when the anisotropy axis is along the z direction (see Fig. <ref>b and SM). This particular form allows for analytical calculations when ϵ is small (SM), but our general strategy applies to any anisotropic viscosity. The operator 𝒟 then takes the form 𝒟(B̂ = ẑ) = [ ∂_z^2 0 - ∂_x ∂_z / 3; 0 ∂_z^2 -∂_y ∂_z / 3; 0 0 Δ + 2∂_z^2 /3 ] The Green function of the Stokes equation (Stokeslet) can be computed numerically for any value of ϵ using fast Fourier transforms. We compute it analytically in the perturbative regime to linear order in ϵ (SM). Motion of a pushpin in an anisotropic fluid.—To determine the form of 𝕄, we now need to specify the shape of our particle. In principle, this requires solving boundary value problems for this specific shape <cit.>. We use a shortcut by which the mobility matrix for a given shape is obtained by constructing the object out of Stokeslets (see SM and Refs. <cit.>). To validate this method, we first consider a sphere. In this case, we can analytically solve the boundary value problem of a fluid flowing past the sphere in the limit of weak anisotropy (small ϵ), calculate the force and torque that the fluid exerts on the object, and compare with the results of the Stokeslet method (SM). 
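As a concrete reference, the perturbative anisotropic Stokeslet that underlies this construction (its explicit first-order form is quoted in the Supplemental Material below) can be evaluated directly. The sketch assumes the anisotropy axis is along z; for a general B̂ one conjugates G_1 with the rotation matrix given in the Supplemental Material. Function names are ours.

import numpy as np

def stokeslet(r, eps=0.0, mu=1.0):
    # G(r) = G_0(r) + eps * G_1(r) to linear order in eps, anisotropy axis along z.
    x, y, z = r
    rn = np.linalg.norm(r)
    G0 = (rn**2 * np.eye(3) + np.outer(r, r)) / (8 * np.pi * mu * rn**3)
    G1 = np.array([[-(x**2 + y**2), 0.0, -x * z],
                   [0.0, -(x**2 + y**2), -y * z],
                   [-x * z, -y * z, -(x**2 + y**2 + 2 * z**2)]]) / (8 * np.pi * mu * rn**3)
    return G0 + eps * G1

# velocity field at r due to a point force F at the origin: v(r) = G(r) F
v = stokeslet(np.array([1.0, 0.5, 0.2]), eps=0.1) @ np.array([0.0, 0.0, -1.0])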
The main consequence is that a sphere settling under the force of gravity in an anisotropic fluid sinks slower than in an isotropic one. The familiar Stokes drag law is modified: the drag coefficient is increased in the x and y directions by a factor of (1 + ϵ/2) and in the z direction by (1 + ϵ). When the shape of the particle is not spherically symmetric, both its velocity and angular velocity can change as compared to the isotropic case. We consider the simplest shape which exhibits non-trivial orientation evolution: a cylindrically symmetric pushpin, shown in Fig. <ref>a. The orientation of the pushpin is described by two angles: θ, the angle the pushpin long axis makes with the lab z axis, and ϕ, the angle between the plane projection of the pushpin long axis and the lab x axis. Equivalently, the pushpin orientation is given by the radial unit vector n̂(θ,ϕ) = (sin(θ)cos(ϕ), sin(θ)sin(ϕ), cos(θ)). The mobility matrix 𝕄 = 𝕄 (ϵ, B̂, n̂), which determines how the pushpin moves, depends on the orientation n̂ of the pushpin, on the anisotropy axis B̂ = (cos(ϕ_B)sin(θ_B), sin(ϕ_B)sin(θ_B),cos(θ_B)) of the fluid (Eq. <ref> is written with B̂ = ẑ), and on the strength of the anisotropy ϵ. By constructing the pushpin out of Stokeslets, we can compute the mobility matrix for any anisotropy direction and pushpin orientation (see SM for more details). Examples of mobility matrices for a tilted pushpin in an isotropic and anisotropic fluid can be visualized schematically as -0.5 < g r a p h i c s > in which red/blue represent positive/negative entries whose magnitude is represented by lightness (see SM). Orientation dynamics of a sedimenting pushpin.—We investigate the dynamics of a pushpin sinking under the force of gravity. Applying a constant force in the -z direction determines the angular velocity Ω of the pushpin, as in Eq. <ref>. Then, the equation of motion for the orientation of the pushpin is given by ∂_t n̂ = N(n̂) ≡Ω×n̂ in which Ω is given by Eq. (<ref>). Since N·n̂ = 0, the vector field N describing the orientation dynamics of the pushpin is tangent to the sphere (there is no radial component), as shown in Fig. <ref>c. The arrows show the instantaneous motion of the tip of a pushpin embedded in the center of the sphere. In spherical coordinates, Eq. <ref> reads θ̇ = N_θ sin(θ)ϕ̇ = N_ϕ, which we numerically solve with an explicit Runge-Kutta method of order 5(4) as implemented in SciPy <cit.>. We now ask what is the eventual orientation of the pushpin. Fixed points of the orientation dynamics satisfy N (θ^*, ϕ^*) = 0. In the isotropic case (ϵ=0), we find that after a transient, the pushpin orients itself to fall upright, with θ=0 (Fig. <ref>c). We expect that the anisotropy in the direction B will tilt the pushpin at an angle depending on the anisotropy direction and strength (Fig. <ref>d). Such a setup would allow us to control the orientation of the pushpin by acting on the fluid (Fig. <ref>a). We confirm that this is indeed the case using numerical simulations of the orientation dynamics. The results of our numerical simulations are presented in Fig. <ref>, in which we zoom in on the region of the sphere around the north pole, which corresponds to the stable fixed point in an isotropic fluid (Fig. <ref>a-b). In the anisotropic case (ϵ≠ 0), the steady state orientation of the pushpin can change: in Fig. <ref>c, the fixed point moves off of the north pole. 
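For readers who wish to reproduce this integration, a minimal sketch using SciPy's explicit Runge-Kutta 5(4) solver is given below. It integrates ∂_t n̂ = Ω × n̂ directly for the unit vector n̂ (equivalent to the spherical-coordinate form above), with omega_of_n a user-supplied placeholder returning Ω = T(η)F for the current orientation, e.g. from the Stokeslet-built mobility matrix; the toy example reproduces the isotropic flow toward the north pole.

import numpy as np
from scipy.integrate import solve_ivp

def orientation_trajectory(omega_of_n, n0, t_span, t_eval=None):
    # Integrate dn/dt = Omega(n) x n with an explicit RK5(4) scheme.
    def rhs(t, n):
        n = n / np.linalg.norm(n)                  # keep n on the unit sphere
        return np.cross(omega_of_n(n), n)
    return solve_ivp(rhs, t_span, n0 / np.linalg.norm(n0), method="RK45", t_eval=t_eval)

# toy isotropic case: Omega = n x z-hat gives d(theta)/dt = -sin(theta), i.e. the
# pushpin settles upright at the north pole
theta0 = 0.3 * np.pi
sol = orientation_trajectory(lambda n: np.cross(n, np.array([0.0, 0.0, 1.0])),
                             n0=np.array([np.sin(theta0), 0.0, np.cos(theta0)]),
                             t_span=(0.0, 20.0))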
We numerically compute the dependence of the fixed point position (θ^*, ϕ^*) on the orientation of the anisotropy axis (θ_B, ϕ_B) in Fig. <ref>d. As long as the anisotropy axis B̂ is neither exactly parallel nor perpendicular to F, the stable fixed point shifts off of the north pole (θ^* ≠ 0). Note that θ^* = θ^* (θ_B) and ϕ^* = ϕ^* (ϕ_B), with the exception of the case θ^* = 0, in which case ϕ^* is not defined. In this perturbative regime, we find that the numerical results are summarized by θ^* = ϵ A sin(k θ_B) and ϕ^* = ϕ_B - π where A/π≃ 0.0073 and k ≃ 2 for 0 < θ_B < π/2. Increasing ϵ moves the fixed point further from the north pole. If π/2 < θ_B < π, the θ^* dependence remains the same as shown in Fig. <ref>d, and ϕ^* shifts by π. With the help of Eq. <ref>, it is possible to adiabatically change the axis of B over time to induce the orientation of the pushpin to follow some desired trajectory. Fig. <ref> provides the necessary protocol for θ_B(t) and ϕ_B(t) that drives the pushpin to rotate in such a way as to trace out the rose trajectory. The control loop here is open: the axis of B affects the orientation of the pushpin, but there is no feedback on B from the current orientation. We now introduce a simplified description of the orientation dynamics. In the isotropic case, the orientation vector field N in Eq. <ref>-<ref> is well-approximated by N_iso = (0, -sinθ, 0) in spherical coordinates (Fig. <ref>c). From this, we can construct a toy model of N in the case ϵ≠ 0. To obtain the flow to a fixed point (θ^*, ϕ^*) which is off of the north pole, we can simply rotate the isotropic vector field to find N_an = [ 0; cosθsinθ^*cos(ϕ^* - ϕ) - cosθ^*sinθ; sinθ^*sin(ϕ^* - ϕ) ] in spherical coordinates, as shown in Fig. <ref>d. We achieve open loop control in the same way as in the full system: the fixed point (θ^*, ϕ^*) (the control variable) is simply set to the desired target (θ_set, ϕ_set) (Fig. <ref>a). In the presence of slowly varying noise, the orientation (θ(t), ϕ(t)) evolves through Eq. <ref> to be near the target, but does not follow it exactly (Fig. <ref>b). To improve control over the pushpin orientation, we close the feedback loop with a proportional-integral-derivative (PID) controller <cit.> (Fig. <ref>c-e) by setting [ θ^*; ϕ^* ] (t) = K_p e(t) + K_i ∫_0^t e(τ) dτ + K_d de(t)/dt where the error e(t) = (θ_set(t) - θ^*(t), ϕ_set(t)-ϕ^*(t)), and K_p, K_i, and K_d are parameters of the PID controller. Practically, we differentiate Eq. <ref> with respect to time to obtain a set of ordinary differential equations, which we numerically solve in conjuction with Eq. <ref> with the forward Euler method. We find that the closed loop successfully controls the orientation (Fig. <ref>d, compare with Fig. <ref>b) by changing the fixed point (Fig. <ref>e) in response to the noise (Fig. <ref>f). Discussion.—Our work suggests a novel method of indirect control of objects through the modulation of the properties of the medium in which they are immersed. By changing the viscosity of an anisotropic fluid, we manipulate the orientation of a small particle that sediments under the force of gravity. Such control could be experimentally realized in anisotropic fluids by varying the alignment axis of the fluid molecules. We thank Tom Witten, Colin Scheibner, Bryan VanSaders, and Yael Avni for discussions. T.K. acknowledges support from the National Science Foundation Graduate Research Fellowship under Grant No. 1746045. M.F. 
acknowledges support from the Simons Foundation, the National Science Foundation under grant DMR-2118415, and a MRSEC-funded Kadanoff–Rice fellowship (DMR-2011854). V.V. acknowledges support from the Simons Foundation, the Complex Dynamics and Systems Program of the Army Research Office under grant W911NF-19-1-0268, the National Science Foundation under grant DMR-2118415 and the University of Chicago Materials Research Science and Engineering Center, which is funded by the National Science Foundation under award no. DMR-2011854. § ANISOTROPIC VISCOSITIES A passive anisotropic fluid with cylindrical symmetry can contain three independent shear viscosity coefficients. These can be obtained by explicitly writing down the transformation law for the viscosity tensor (see for instance Refs. <cit.> and references therein). At steady state, the Stokes equation yields 0 = -∇ P +μ_1 [ (∂_x^2 + ∂_y^2) v_x; (∂_x^2 + ∂_y^2) v_y; 0 ] + μ_2 [ ∂_z^2 v_x + ∂_x ∂_z v_z; ∂_z^2 v_y + ∂_y ∂_z v_z; (∂_x^2 + ∂_y^2)v_z - ∂_z^2 v_z ] + μ_3 [ -∂_x ∂_z v_z; -∂_y ∂_z v_z; 2 ∂_z^2 v_z ]. If μ_1 = μ_2 = μ_3 ≡μ, the viscous contribution reduces to the familiar μΔv. In the main text, we consider the case μ_1 = μ, μ_2 = μ(1 + ϵ), and μ_3 = μ(1 + 4/3ϵ), where ϵ is small. In this case, the above equation reduces to Eqs. <ref>-<ref>. In a generic fluid, additional viscosity coefficients can be present, see Refs. <cit.> and references therein for more details. The viscosities in Eq. <ref> can be expressed in terms of the Leslie viscosity coefficients α_i (Eq. 6.50 of <cit.>) as μ_1 = α_4/2 μ_2 = α_4/2 + α_5 + α_6/4 μ_3 = α_4/2 + α_1 + α_5 + α_6/3 in which we have considered a uniform nematic director n = ẑ. § GREEN'S FUNCTION (STOKESLET) We compute the Green's function (Stokeslet) corresponding to Eqs. <ref>-<ref> in the perturbative regime, for small anisotropy in the z direction (B̂ = ẑ). The general case (for arbitrarily large ϵ) can be computed numerically with fast Fourier transforms. The Stokeslet is the solution to the Stokes equation with an applied point force, F: Fδ^3(r) = -∇ P + μΔv + ϵμ[ ∂_z^2 v_x - ∂_x ∂_z v_z/3; ∂_z^2 v_y - ∂_y ∂_z v_z/3; Δ v_z + 2∂_z^2 v_z/3 ] where we take ϵ≪ 1. To solve for v, we write Eq. <ref> in Fourier space, solve for the pressure and velocity fields, and integrate using contour integration to find the real-space solutions, in the same way as in <cit.>. The Stokeslet velocity field is expressed through the Green's function, 𝔾, as v(r) = 𝔾(r) F. Expanding the Green's function to linear order in ϵ, 𝔾(r) = 𝔾_0(r) + ϵ𝔾_1(r), we recover the familiar solution for a normal fluid, 𝔾_0,ij(r) = 1/8πμ r^3(r^2δ_ij + r_i r_j) and derive the first order correction due to anisotropy, 𝔾_1(r): 𝔾_1(r) = 1/8πμ r^3[ -(x^2 + y^2) 0 -xz; 0 -(x^2 + y^2) -yz; -xz -yz -(x^2 + y^2 + 2z^2) ]. The associated pressure field is P(r) = P_0(r) + ϵ P_1(r), with P_0(r) = 1/4π r^3F·r P_1(r) = -x^2 + y^2 - 2z^2/12π r^5(F·r - 2 F·z). The Green's function in Eq. <ref> holds for an anisotropy axis B̂ = ẑ. Under a rotation to an arbitrary anisotropy axis given by B̂ = (cosϕ_Bsinθ_B, sinϕ_Bsinθ_B, cosθ_B), the Green's function in Eq. <ref> transforms as 𝔾_1(r) → R 𝔾_1(R^-1r)R^-1, where R is the rotation matrix R = [ cos(θ_B)cos(ϕ_B) - sin(ϕ_B) cos(ϕ_B)sin(θ_B); cos(θ_B)sin(ϕ_B) cos(ϕ_B) sin(ϕ_B)sin(θ_B); -sin(θ_B) 0 cos(θ_B) ]. § FLOW PAST A SPHERE We solve the anisotropic Stokes equation (Eqs. 
<ref>-<ref>) for the flow past a sphere to linear order in ϵ by writing v(r) = v_0(r) + ϵv_1(r), P(r) = P_0(r) + ϵ P_1(r), as in <cit.>. We take the velocity of the fluid at infinity to be U, and the boundary condition on the sphere surface to be no-slip, v(r = a) = 0, where a is the sphere radius. Since the viscosity is anisotropic, we have two cases to consider: one in which U is parallel to the anisotropy axis B, and one in which U and B are perpendicular. Let us take B̂ = ẑ. We first consider the parallel case, U = U ẑ. In this situation, the velocity field around the sphere is not modified at first order, v_1(r) = 0, but the pressure is: P_1(r) = -μ a U/2r^7 z(4x^4 + 4y^4 + 5y^2 z^2 + z^4 + 8x^2y^2 + 5x^2 z^2 + a^2(-3x^2 - 3y^2 + 2z^2)). We repeat in the perpendicular case, for U = U x̂. Here, both the velocity and pressure fields are modified, v_1,x(r) = 3aU/8r^5(y-z)(y+z)(r^2 - a^2) v_1,y(r) = -3aU/8r^5xy(r^2 - a^2) v_1,z(r) = 3aU/8r^5xz(r^2 - a^2) P_1(r) = μ U a/4r^7 x (-5(x^2 + y^2)^2 - 4(x^2 + y^2)z^2 + z^4 + 2a^2 (x^2 + y^2 -4z^2)) The velocity field can be more compactly written in terms of the Green's function and its Laplacian. To linear order, the velocity is v(r) = -6πμ U a (1 + ϵ/2)𝔾(r) ·x̂ -πμ U a^3 (1 + ϵ/2) Δ𝔾(r)·x̂. The first order term can be written explicitly as v_1(r) = -6πμ U a(𝔾_0(r)/2·x̂ + 𝔾_1(r) ·x̂) -πμ U a^3Δ(𝔾_0(r)/2·x̂ + 𝔾_1(r) ·x̂). § FORCES ON A SPHERE To solve for the forces on the sphere due to the fluid flow, we compute the stress from the above velocity fields and integrate it over the surface of the sphere, F_i = ∮σ_ij n_j dS, where n̂ = r̂ is the unit vector normal to the sphere surface. Here, in addition to the familiar pressure and shear viscosity contributions, the stress contains a third term due to the anisotropic viscosity, σ_ij = -Pδ_ij + μ(∂_i v_j + ∂_j v_i) + ϵσ_an, where σ_an = μ[ 4/9 (∂_x v_x + ∂_y v_y - 2∂_z v_z) 0 ∂_z v_x + ∂_x v_z; 0 4/9(∂_x v_x + ∂_y v_y - 2∂_z v_z) ∂_z v_y + ∂_y v_z; ∂_z v_x + ∂_x v_z ∂_z v_y + ∂_y v_z -8/9 (∂_x v_x + ∂_y v_y - 2∂_z v_z) ] for an anisotropy axis B̂ = ẑ. Computing the forces yields the following subset of the propulsion matrix: [ F_x; F_y; F_z ] = 6πμ a [ 1+ϵ/2 0 0; 0 1+ϵ/2 0; 0 0 1 + ϵ ][ V_x; V_y; V_z ]. Due to the anisotropy of the viscosity, the Stokes drag law is modified. The drag coefficients in the x and y directions are increased by a factor of (1 + ϵ/2) and in the z direction (along the anisotropy axis) by (1 + ϵ). The fluid does not exert torques on the sphere. The A block of the mobility matrix 𝕄 (see Eq. <ref>) is simply the inverse of the matrix above: [ V_x; V_y; V_z ] = 1/6πμ a[ 1-ϵ/2 0 0; 0 1-ϵ/2 0; 0 0 1 - ϵ ]_A [ F_x; F_y; F_z ]. Eq. <ref> holds for an anisotropy axis B̂ = ẑ. To obtain Eq. <ref> for an arbitrary anisotropy axis B̂ = (cosϕ_Bsinθ_B, sinϕ_Bsinθ_B, cosθ_B), we transform A as follows: A → R A R^-1, where R is the rotation matrix in Eq. <ref>. Note that we can transform 𝕄 in this way only for the sphere due to its rotational invariance. For the pushpin, which has its own anisotropy axis n, 𝕄 only transforms as in Eq. <ref> if B and n rotate together. In the general case, the mobility matrix must be recomputed for different anisotropy axes, as described below. § STOKESLET APPROXIMATION OF THE MOBILITY MATRIX To isolate the coefficients that relate different degrees of freedom, the mobility matrix can be conveniently arranged in four blocks 𝕄 = [ [ A B; T S ] ]. 
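To make the block structure concrete, the A block of the sphere mobility derived above can be assembled for an arbitrary anisotropy axis by conjugating with the rotation matrix R of the Supplemental Material; the sketch below does this to first order in ϵ (names are ours).

import numpy as np

def sphere_mobility_A(eps, theta_B, phi_B, mu=1.0, a=1.0):
    # V = A F for a sphere, to first order in eps.
    # For B along z: A = diag(1 - eps/2, 1 - eps/2, 1 - eps) / (6 pi mu a);
    # for a general axis: A -> R A R^{-1}, with R^{-1} = R^T.
    A_z = np.diag([1 - eps / 2, 1 - eps / 2, 1 - eps]) / (6 * np.pi * mu * a)
    ct, st = np.cos(theta_B), np.sin(theta_B)
    cp, sp = np.cos(phi_B), np.sin(phi_B)
    R = np.array([[ct * cp, -sp, cp * st],
                  [ct * sp,  cp, sp * st],
                  [-st,     0.0, ct]])
    return R @ A_z @ R.T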
For shapes that are less symmetric than the sphere, it is difficult to obtain an analytical form of the mobility matrix. To derive the mobility matrix of the pushpin-shaped object, we construct the pushpin out of small spheres of radius a (denoted by markers in Fig. <ref>) with the method reviewed in <cit.>. With this method, we apply a force to the pushpin at some reference point, which is then distributed amongst the small spheres. Reference <cit.> provides an algorithm to determine how to distribute these forces that depends on two main ingredients. The first ingredient is the velocity field generated by a small sphere moving in a fluid due to an applied force. The distance between the spheres that compose the pushpin is taken to be much larger than the radius of the spheres, which allows us to treat the spheres as Stokeslets, and approximate the velocity field by the Green's function in Eq. <ref>. The second ingredient is the force on a sphere moving with some velocity in a fluid (i.e. the A block of the mobility matrix), which we derived in Eq. <ref>. Combining these two ingredients, we impose the constraint of a rigid body (we insist that the small spheres cannot move relative to one another) which yields the mobility matrix of the pushpin. Moreover, with the help of Eqs. <ref> and <ref>, we can compute the mobility matrix of the pushpin for any orientation of the anisotropy axis B̂. For the computations in this work, the pushpin is composed of fourteen spheres of radius a = 0.01. The four which lie along the axis are spaced with unit distance, and the ten that are along the base lie on the vertices of a regular decagon. The line segments that connect the markers in Fig. <ref> are not real and are meant to guide the eye. Below, we provide the numerical values of T, the bottom left block of the mobility matrix shown pictorially in Eq. <ref>. For these matrices, the pushpin orientation is ϕ = 0, θ = 0.3π. For the anisotropic case, we take ϵ = 0.1, B̂ = 1/√(2)(0, 1, 1). T_iso ≃[ 0 0.00367034 0; -0.00367034 0 0.00505179; -0 -0.00505179 -0 ] T_an ≃[ 0.00007423 0.00345387 -0.00010948; -0.00350251 -0.00020758 0.00481074; -0.00008957 -0.00473651 0.00013335 ] 37 fxundefined [1] ifx#1 fnum [1] #1firstoftwo secondoftwo fx [1] #1firstoftwo secondoftwo noop [0]secondoftwo ref[1]@startlink#1@href href[1]#1@endlink anitize@url [0]` 12`$12`&12`#12`1̂2`_12`%12 startlink[1] endlink[0] rl [1]href #1 @bib@innerbibempty [Ramaswamy(2001)]Ramaswamy2001 author author S. Ramaswamy, https://doi.org/10.1080/00018730110050617 journal journal Advances in Physics volume 50, pages 297 (year 2001)NoStop [Guazzelli et al.(2009)Guazzelli, Morris, and Pic]Guazzelli2009 author author E. Guazzelli, author J. F. Morris, and author S. Pic, https://doi.org/10.1017/cbo9780511894671 title A Physical Introduction to Suspension Dynamics (publisher Cambridge University Press, year 2009)NoStop [Lauga and Powers(2009)]Lauga2009 author author E. Lauga and author T. R. Powers, https://doi.org/10.1088/0034-4885/72/9/096601 journal journal Reports on Progress in Physics volume 72, pages 096601 (year 2009)NoStop [Bär et al.(2020)Bär, Großmann, Heidenreich, and Peruani]Bär2020 author author M. Bär, author R. Großmann, author S. Heidenreich, and author F. 
Peruani, https://doi.org/10.1146/annurev-conmatphys-031119-050611 journal journal Annual Review of Condensed Matter Physics volume 11, pages 441–466 (year 2020)NoStop [Gompper et al.(2020)Gompper, Winkler, Speck, Solon, Nardini, Peruani, Löwen, Golestanian, Kaupp, Alvarez, Kiørboe, Lauga, Poon, DeSimone, Muiños-Landin, Fischer, Söker, Cichos, Kapral, Gaspard, Ripoll, Sagues, Doostmohammadi, Yeomans, Aranson, Bechinger, Stark, Hemelrijk, Nedelec, Sarkar, Aryaksama, Lacroix, Duclos, Yashunsky, Silberzan, Arroyo, and Kale]Gompper2020 author author G. Gompper, author R. G. Winkler, author T. Speck, author A. Solon, author C. Nardini, author F. Peruani, author H. Löwen, author R. Golestanian, author U. B. Kaupp, author L. Alvarez, author T. Kiørboe, author E. Lauga, author W. C. K. Poon, author A. DeSimone, author S. Muiños-Landin, author A. Fischer, author N. A. Söker, author F. Cichos, author R. Kapral, author P. Gaspard, author M. Ripoll, author F. Sagues, author A. Doostmohammadi, author J. M. Yeomans, author I. S. Aranson, author C. Bechinger, author H. Stark, author C. K. Hemelrijk, author F. J. Nedelec, author T. Sarkar, author T. Aryaksama, author M. Lacroix, author G. Duclos, author V. Yashunsky, author P. Silberzan, author M. Arroyo, and author S. Kale, https://doi.org/10.1088/1361-648x/ab6348 journal journal Journal of Physics: Condensed Matter volume 32, pages 193001 (year 2020)NoStop [Bechinger et al.(2016)Bechinger, Di Leonardo, Löwen, Reichhardt, Volpe, and Volpe]Bechinger2016 author author C. Bechinger, author R. Di Leonardo, author H. Löwen, author C. Reichhardt, author G. Volpe, and author G. Volpe, https://doi.org/10.1103/revmodphys.88.045006 journal journal Reviews of Modern Physics volume 88, pages 045006 (year 2016)NoStop [Boles et al.(2016)Boles, Engel, and Talapin]Boles2016 author author M. A. Boles, author M. Engel, and author D. V. Talapin, https://doi.org/10.1021/acs.chemrev.6b00196 journal journal Chemical Reviews volume 116, pages 11220–11289 (year 2016)NoStop [Nelson et al.(2010)Nelson, Kaliakatsos, and Abbott]Nelson2010 author author B. J. Nelson, author I. K. Kaliakatsos, and author J. J. Abbott, https://doi.org/10.1146/annurev-bioeng-010510-103409 journal journal Annual Review of Biomedical Engineering volume 12, pages 55–85 (year 2010)NoStop [Walker et al.(2022)Walker, Ishimoto, Gaffney, and Moreau]Walker2022 author author B. Walker, author K. Ishimoto, author E. Gaffney, and author C. Moreau, journal journal Journal of Fluid Mechanics volume 942, https://doi.org/10.1017/jfm.2022.253 10.1017/jfm.2022.253 (year 2022)NoStop [Lim et al.(2011)Lim, Lanni, Evarts, Lanni, Tilton, and Majetich]lim2011magnetophoresis author author J. Lim, author C. Lanni, author E. R. Evarts, author F. Lanni, author R. D. Tilton, and author S. A. Majetich, @noop journal journal Acs Nano volume 5, pages 217 (year 2011)NoStop [Venu et al.(2013)Venu, Lim, Hu, Jeong, Ramulu, and Kim]venu2013chip author author R. Venu, author B. Lim, author X. Hu, author I. Jeong, author T. Ramulu, and author C. Kim, @noop journal journal Microfluidics and nanofluidics volume 14, pages 277 (year 2013)NoStop [Alnaimat et al.(2018)Alnaimat, Dagher, Mathew, Hilal-Alnqbi, and Khashan]alnaimat2018microfluidics author author F. Alnaimat, author S. Dagher, author B. Mathew, author A. Hilal-Alnqbi, and author S. Khashan, @noop journal journal The Chemical Record volume 18, pages 1596 (year 2018)NoStop [Hunt and Westervelt(2006)]hunt2006dielectrophoresis author author T. Hunt and author R. 
Westervelt, @noop journal journal Biomedical microdevices volume 8, pages 227 (year 2006)NoStop [Pethig(2010)]pethig2010dielectrophoresis author author R. Pethig, @noop journal journal Biomicrofluidics volume 4, pages 022811 (year 2010)NoStop [Fan et al.(2011)Fan, Zhu, Cammarata, and Chien]fan2011electric author author D. Fan, author F. Zhu, author R. Cammarata, and author C. Chien, @noop journal journal Nano Today volume 6, pages 339 (year 2011)NoStop [Svoboda and Block(1994)]svoboda1994biological author author K. Svoboda and author S. M. Block, @noop journal journal Annual review of biophysics and biomolecular structure volume 23, pages 247 (year 1994)NoStop [Roichman et al.(2007)Roichman, Wong, and Grier]roichman2007colloidal author author Y. Roichman, author V. Wong, and author D. G. Grier, @noop journal journal Physical Review E volume 75, pages 011407 (year 2007)NoStop [Moffitt et al.(2008)Moffitt, Chemla, Smith, and Bustamante]moffitt2008recent author author J. R. Moffitt, author Y. R. Chemla, author S. B. Smith, and author C. Bustamante, @noop journal journal Annu. Rev. Biochem. volume 77, pages 205 (year 2008)NoStop [Zhong et al.(2013)Zhong, Wei, Zhou, Wang, and Li]zhong2013trapping author author M.-C. Zhong, author X.-B. Wei, author J.-H. Zhou, author Z.-Q. Wang, and author Y.-M. Li, @noop journal journal Nature communications volume 4, pages 1768 (year 2013)NoStop [Courtney et al.(2014)Courtney, Demore, Wu, Grinenko, Wilcox, Cochran, and Drinkwater]courtney2014independent author author C. R. Courtney, author C. E. Demore, author H. Wu, author A. Grinenko, author P. D. Wilcox, author S. Cochran, and author B. W. Drinkwater, @noop journal journal Applied Physics Letters volume 104, pages 154103 (year 2014)NoStop [Collins et al.(2015)Collins, Morahan, Garcia-Bustos, Doerig, Plebanski, and Neild]collins2015two author author D. J. Collins, author B. Morahan, author J. Garcia-Bustos, author C. Doerig, author M. Plebanski, and author A. Neild, @noop journal journal Nature communications volume 6, pages 8686 (year 2015)NoStop [Ozcelik et al.(2018)Ozcelik, Rufo, Guo, Gu, Li, Lata, and Huang]ozcelik2018acoustic author author A. Ozcelik, author J. Rufo, author F. Guo, author Y. Gu, author P. Li, author J. Lata, and author T. J. Huang, @noop journal journal Nature methods volume 15, pages 1021 (year 2018)NoStop [Hardman et al.(2022)Hardman, George Thuruthel, and Iida]hardman2022manipulation author author D. Hardman, author T. George Thuruthel, and author F. Iida, @noop journal journal Scientific Reports volume 12, pages 1 (year 2022)NoStop [Beenakker and McCourt(1970)]Beenakker1970 author author J. J. M. Beenakker and author F. R. McCourt, https://doi.org/10.1146/annurev.pc.21.100170.000403 journal journal Annual Review of Physical Chemistry volume 21, pages 47–72 (year 1970)NoStop [Varnavides et al.(2020)Varnavides, Jermyn, Anikeeva, Felser, and Narang]Varnavides2020 author author G. Varnavides, author A. S. Jermyn, author P. Anikeeva, author C. Felser, and author P. Narang, journal journal Nature Communications volume 11, https://doi.org/10.1038/s41467-020-18553-y 10.1038/s41467-020-18553-y (year 2020)NoStop [Gusev et al.(2020)Gusev, Jaroshevich, Levin, Kvon, and Bakarov]Gusev2020 author author G. M. Gusev, author A. S. Jaroshevich, author A. D. Levin, author Z. D. Kvon, and author A. K. Bakarov, journal journal Scientific Reports volume 10, https://doi.org/10.1038/s41598-020-64807-6 10.1038/s41598-020-64807-6 (year 2020)NoStop [Cook and Lucas(2021)]Cook2021 author author C. Q. 
Cook and author A. Lucas, https://doi.org/10.1103/physrevlett.127.176603 journal journal Physical Review Letters volume 127, pages 176603 (year 2021)NoStop [Sehgal et al.(2019)Sehgal, Ramaswamy, Cohen, and Kirby]sehgal2019using author author P. Sehgal, author M. Ramaswamy, author I. Cohen, and author B. J. Kirby, @noop journal journal Physical review letters volume 123, pages 128001 (year 2019)NoStop [Sehgal et al.(2022)Sehgal, Ramaswamy, Ong, Ness, Cohen, and Kirby]sehgal2022viscosity author author P. Sehgal, author M. Ramaswamy, author E. Y. Ong, author C. Ness, author I. Cohen, and author B. J. Kirby, @noop journal journal arXiv preprint arXiv:2206.01141 (year 2022)NoStop [Gibaud et al.(2020)Gibaud, Dagès, Lidon, Jung, Ahouré, Sztucki, Poulesquen, Hengl, Pignon, and Manneville]Gibaud2020 author author T. Gibaud, author N. Dagès, author P. Lidon, author G. Jung, author L. C. Ahouré, author M. Sztucki, author A. Poulesquen, author N. Hengl, author F. Pignon, and author S. Manneville, https://doi.org/10.1103/physrevx.10.011028 journal journal Physical Review X volume 10, pages 011028 (year 2020)NoStop [Kim and Karrila(1991)]KimKarrila author author S. Kim and author S. J. Karrila, https://doi.org/10.1016/c2013-0-04644-0 title Microhydrodynamics (publisher Butterworth-Heinemann, year 1991)NoStop [Witten and Diamant(2020)]witten2020review author author T. A. Witten and author H. Diamant, @noop journal journal Reports on Progress in Physics volume 83, pages 116601 (year 2020)NoStop [Mowitz and Witten(2017)]mowitz2017predicting author author A. J. Mowitz and author T. Witten, @noop journal journal Physical Review E volume 96, pages 062613 (year 2017)NoStop [Virtanen et al.(2020)Virtanen, Gommers, Oliphant, Haberland, Reddy, Cournapeau, Burovski, Peterson, Weckesser, Bright, van der Walt, Brett, Wilson, Millman, Mayorov, Nelson, Jones, Kern, Larson, Carey, Polat, Feng, Moore, VanderPlas, Laxalde, Perktold, Cimrman, Henriksen, Quintero, Harris, Archibald, Ribeiro, Pedregosa, van Mulbregt, and SciPy 1.0 Contributors]scipy author author P. Virtanen, author R. Gommers, author T. E. Oliphant, author M. Haberland, author T. Reddy, author D. Cournapeau, author E. Burovski, author P. Peterson, author W. Weckesser, author J. Bright, author S. J. van der Walt, author M. Brett, author J. Wilson, author K. J. Millman, author N. Mayorov, author A. R. J. Nelson, author E. Jones, author R. Kern, author E. Larson, author C. J. Carey, author İ. Polat, author Y. Feng, author E. W. Moore, author J. VanderPlas, author D. Laxalde, author J. Perktold, author R. Cimrman, author I. Henriksen, author E. A. Quintero, author C. R. Harris, author A. M. Archibald, author A. H. Ribeiro, author F. Pedregosa, author P. van Mulbregt, and author SciPy 1.0 Contributors, https://doi.org/10.1038/s41592-019-0686-2 journal journal Nature Methods volume 17, pages 261 (year 2020)NoStop [Bechhoefer(2021)]bechhoefer2021control author author J. Bechhoefer, @noop title Control Theory for Physicists (publisher Cambridge University Press, year 2021)NoStop [Khain et al.(2022)Khain, Scheibner, Fruchart, and Vitelli]khain2022stokes author author T. Khain, author C. Scheibner, author M. Fruchart, and author V. Vitelli, @noop journal journal Journal of Fluid Mechanics volume 934 (year 2022)NoStop [Kleman and Lavrentovich(2003)]kleman2003soft author author M. Kleman and author O. D. Lavrentovich, @noop title Soft matter physics: an introduction (publisher Springer, year 2003)NoStop
http://arxiv.org/abs/2307.04784v1
20230710180000
Positivity-causality competition: a road to ultimate EFT consistency constraints
[ "Mariana Carrillo González", "Claudia de Rham", "Sumer Jaitly", "Victor Pozsgay", "Anna Tokareva" ]
hep-th
[ "hep-th" ]
http://arxiv.org/abs/2307.05183v1
20230711113737
Quantum-enhanced Electrometer based on Microwave-dressed Rydberg Atoms
[ "Shuhe Wu", "Dong Zhang", "Zhengchun Li", "Minwei Shi", "Peiyu Yang", "Jinxian Guo", "Wei Du", "Guzhi Bao", "Weiping Zhang" ]
quant-ph
[ "quant-ph" ]
[email protected] [email protected] [email protected] ^1 School of Physics and Astronomy, and Tsung-Dao Lee institute, Shanghai Jiao Tong University, Shanghai 200240, China. ^2 Shanghai Branch, Hefei National Laboratory, Shanghai 201315, China. ^3 Collaborative Innovation Center of Extreme Optics, Shanxi University, Taiyuan, Shanxi 030006, China. ^4 Shanghai Research Center for Quantum Sciences, Shanghai 201315, China. Rydberg atoms have been shown remarkable performance in sensing microwave field. The sensitivity of such an electrometer based on optical readout of atomic ensemble has been demonstrated to approach the photon-shot-noise limit. However, the sensitivity can not be promoted infinitely by increasing the power of probe light due to the increased collision rates and power broadening. Compared with classical light, the use of quantum light may lead to a better sensitivity with lower number of photons. In this paper, we exploit entanglement in a microwave-dressed Rydberg electrometer to suppress the fluctuation of noise. The results show a sensitivity enhancement beating the shot noise limit in both cold and hot atom schemes. Through optimizing the transmission of optical readout, our quantum advantage can be maintained with different absorptive index of atomic vapor, which makes it possible to apply quantum light source in the absorptive electrometer. Quantum-enhanced Electrometer based on Microwave-dressed Rydberg Atoms Weiping Zhang ^1,2,3,4 August 12, 2023 ====================================================================== § INTRODUCTION Ultrasensitive detection of electric field plays a significant role in widespread applications, including communications <cit.>, remote sensing <cit.>, and medical diagnosis <cit.>. Rydberg atom-based electrometers (RAEs) <cit.>, in which the spectroscopic characterizations of atoms are engineered by the optical and electrical fields in the process of electromagnetically induced transparency (EIT) <cit.> and Autler-Townes (AT) splitting <cit.>, bring a direct International System of Units (SI) traceable and self-calibrated measurement of electrical amplitude with broadly dynamic range. These instruments can be very sensitive to electrical fields due to the large transition electric dipole moment <cit.>, which lead to a strong atomic response to electric fields. The strength of signal can be calculated from observing the distance between the two separate peaks caused by the electrical field <cit.>. When the electrical field is strong enough, the splitting can be distinguished at the area called AT regime. However, precisely measuring the splitting will be hard when the electrical field is too small to depart from the AT regime <cit.>. Under these circumstances, the strength of the electrical field can be monitored by observing the variation of probe transmission instead of the splitting. Recently, photon-shot-noise (PSN) in the signal readout has been regarded as a main limiting factor to the performance of RAEs <cit.>. The PSN of probe light and the slope of readout signal will jointly limit the sensitivity, which can be defined as δ E_MW≡√(⟨δ^2Î⟩)/|∂⟨Î⟩/∂ ϵ||∂ϵ/ ∂ E_MW|. Here Î is the intensity of the detected field, ϵ and E_MW represent the absorptive index and amplitude of microwave field respectively. It is clear that the PSN scales with the square root of the optical power and the slope of readout signal is proportional to the optical power, so classically optimized sensors can typically be improved by increasing the laser power. 
However, the laser-induced collision rates and power broadening <cit.> will lead to a decrease of signal slope which also limits the laser power. In this scenario, squeezed light hold particular appeal for applications due to its ability to further enhance the sensitivity by reducing PSN with limited laser power, which has been shown in a number of applications <cit.>. In this paper, we study the absorptive measurement using the beam splitter (BS) to mimic the loss in the differential detection and theoretically analyze the characteristics of different schemes, e.g. with the coherent and squeezed input states. Then we replace the BS with microwave-dressed Rydberg atoms and study the entanglement-assisted microwave electrometer. Since quantum light is extremely fragile to loss, therefore it's necessary to find an operating point that simultaneously maintains a relatively high transmittance of probe light and slope of the observable. We demonstrate the variation of the electrical field E_MW affects the transmittance factor ϵ of probe light and the slope of the observable synchronously, therefore the minimal observable is realized by operating at the optimal slope and noise through engineering the dressed microwave (MW) field and light field. § SENSITIVITY OF ABSORPTIVE MEASUREMENT §.§ Sensitivity with classical field Before introducing the entanglement-assisted RAEs, we start by briefly calculating the sensitivity of absorptive measurement when coherent light and squeezed light are employed respectively. When coherent state is employed, the absorptive measurement can be achieved by a topology of differential detection as shown in Fig.<ref> (a). A coherent state â_0 is splitted into â_in and b̂_in by the variable beam splitter (VBS) with the transmissivity of T and reflectivity of R. â_in acts as the probe light propagate through a fictitious BS mimicking the absorptive signal of optical field with the transmission e^-ϵ and â_v is the vacuum field induced by the absorptive measurement. While the beam b̂_in serves as a reference. The input-output relations of the coherent state scheme are given by â_out =√(T e^-2ϵ) â_0+√(R e^-2ϵ)b̂_0+√(1-e^-2ϵ) â_v, b̂_out=b̂_in=√(T) b̂_0-√(R)â_0, where T and R are the transmissivity and reflectivity of the VBS with the condition T+R=1. The observable here is defined as Î=â^†_outâ_out-b̂^†_outb̂_out. Then we obtain the slope and noise of the observable ∂⟨Î⟩/∂ϵ=-2e^-2ϵI_si, ⟨δ^2Î⟩=[e^-2ϵ+R/T]I_si, where, I_si=Tα^2 denotes the sensing intensity of absorptive measurement. According to Eq. (<ref>), the sensitivity can be expressed as δ_cϵ =√(1/2+R/2e^-2ϵT)1/√(e^-2ϵI_si), The optimum sensitivity can be achieved when T=1 and R=0, which represents a direct detection scheme with all of the coherent state is transmitted to sense the signal. Then we get the minimum measurable δ_cϵ=1/√(2)√(e^-2ϵI_si). For practical application, the technical noise from laser also limits the sensitivity<cit.>. A differential measurement, which is commonly used in precision measurement, can be realized with e^-2ϵT=R where the excess noise involved in the laser can be cancelled. From Eq. (<ref>), we find the sensitivity is restricted by the absorption induced loss and slope, the fundamental limitation is still photon shot noises (PSN) originated from the statistical distribution of photons. §.§ Sensitivity with quantum field Now we focus on analyzing the performance of squeezed light in absorptive measurement. 
As is well-known, squeezed light is difficultly generated with large photon flux and extremely sensitive to the loss in absorptive measurement. We propose a topology similar to homodyne detection, the sensing intensity can be boosted by introducing a local oscillator (LO). The optimum operation condition will be studied to realize the maximized sensitivity with precision beating the shot-noise limit (SNL). As shown in Fig. <ref> (b), we employ a squeezed vacuum state at the unused port of VBS to replace the vacuum. The input-output relation of the squeezer can be expressed as b̂_q0=cosh r b̂_0+sinhr b̂^†_0e^iθ, where b̂_0 is vacuum state, cosh r and sinh r are the gain factors satisfying cosh^2 r-sinh^2 r=1, r is the squeezed factor and θ is phase of squeezer. The squeezed light is combined with another much stronger LO field at the VBS to boost the sensing power since it is exceedingly hard to be prepared with large photon number. For the squeezed light scheme, the input-output relationship become â_qout =√(e^-2ϵT) â_0+√(e^-2ϵR) b̂_q0+√(1-e^-2ϵ) â_v, b̂_qout =b̂_in=√(T) b̂_q0-√( R) â_0. The mode of LO can be classically represented by |α| due to the much stronger power compared to squeezed vacuum state, then we have the differential intensity between the two detector Î_q = â^†_qoutâ_qout-b̂^†_qoutb̂_qout = α[ (1+e^-2ϵ)√(TR)X̂_b_q0+√(Te^-2ϵ(1-e^-2ϵ))X̂_a_v] +α^2(e^-2ϵT-R). Here X̂_o=ô+ô^† with o∈ [ b_q0, a_v] are the amplitude quadrature of the optical modes. Note that intensity term α^2(e^-2ϵT-R) can be cancelled when satisfying the condition e^-2ϵT=R, then we give the slope and noise ∂⟨Î_q⟩/∂ϵ=-2e^-2ϵI_si, ⟨δ^2Î_q⟩ =[1+e^-2ϵ/e^2r+1-e^-2ϵ]e^-2ϵI_si, where I_si=Tα^2=α^2/(e^-2ϵ+1) represent the sensing intensity. According to Eq. (<ref>), we give the quantum-enhanced sensitivity of absorptive measurement δ_qϵ=√(1+e^-2ϵ/e^2r+1-e^-2ϵ)1/√(2e^-2ϵI_si), Compared to the classical light with differential detection, we find the sensitivity is improved by G_q=δ_cϵ/δ_qϵ=√(2)/√(1+e^-2ϵ/e^2r+1-e^-2ϵ). This improvement is same as the noise reduction originated from the squeezed light injection. The quantum-enhancement is maximal when the absorption is very small (ϵ→0), indicate an improvement factor of G_s=e^r. Next, we further analyze the influence of G_s on the quantum enhancement G_q with the Δ_c=Δ_p=0, which is given in Fig.<ref>. The black line (i) shows the quantum enhancement G_q without absorption, which is proportional to G_s. G_q with 10% (ii) and 50% (iii) absorption are shown in red (ii) and blue line (iii), respectively. The absorption parameters set here correspond to the optimal sensitivity of squeezed light injection in the case of cold and hot atoms (see detail in the next section). It is noted that with the increase of G_s, the quantum enhancement G_q in both cases tends to be saturated due to the existence of absorption. Based on this, we chose r=2.5 to measure the sensitivity. § PRINCIPLE OF QUANTUM-NOISE LIMITED ELECTROMETER In this theory, the RAEs are achieved by Cesium (Cs) atomic vapor with a four-level ladder structure as shown in Fig. <ref> (a). The frequency of transition from |1⟩ to |2⟩ and |2⟩ to |3⟩ are labelled as ω_21 and ω_32 with resonance frequency of 780 nm and 480 nm, respectively. A weak probe light field and a strong coupling light field interact to each transition with frequency near atomic resonance, the quantum interference from the two excitation pathways produce a dark state which create a transparent window for the probe light field, named EIT. 
A MW field with frequency ω_MW resonance to the nearby Rydberg transition |3⟩ to |4⟩ will reduce the transmitted intensity of probe light, which is the physical quantity to estimate the strength of electrical field in our strategy. The Hamiltonian of the system after the rotating wave approximation can be written as: [ Ĥ ] =ħ[ [ 0 ξ^*â^† 0 0; ξâ Δ_p Ω^*_c/2 0; 0 Ω_c/2 Δ_c+Δ_p Ω^*_MW/2; 0 0 Ω_MW/2 Δ_MW+Δ_c+Δ_p; ]], where the coupling light and MW field are treated as a classical field while the weak probe light is quantized. The probe light are described by slowly varying quantum–mechanical operators. Ω_c=μ_32E_c/ħ and Ω_MW=μ_43E_MW/ħ are Rabi frequency of coupling light and MW field, respectively. ω_c, ω_p and ω_MW are their frequencies. ξ is the atom-probe coupling constants: ξ=μ_21ε/ħ, where μ_ij (i,j=1,2,3,4) is the transition dipole moment from state |i⟩ to state |j⟩ and ε=√(ħω_p/2ϵ_0V) is the electric field of a single photon. V is the quantized volume , â is the annihilation operators for probe field and ϵ_0 is the permittivity in vacuum. Δ_c=ω_32-ω_c, Δ_MW=ω_43-ω_MW, and Δ_p=ω_21-ω_p are the single photon detuning of coupling light, MW field and probe light, respectively. The properties of the medium are described by collective, slowly-varying operators σ_μν(z,t)=1/N_z∑^N_z_j=1|μ_j⟩⟨ν_j|e^-iΔ_pt+ik_pz averaged over small layers denoted by their position z containing number of atoms N_z. k_p is the projection of the wavevector of probe light on the z axis. To account for decay and dephasing, the system is described using the Heisenberg-Langevin equations: ∂/∂ tσ̂_μν=i/ħ[Ĥ,σ̂_μν]+D̂_μν+F̂_μν Here, D̂_μν is the terms produced by spontaneous emission and dephasing: [ D_μν ] = [ [ γ_2σ_22+ γ_3σ_33+γ_4σ_44 - γ_2/2σ_12 - γ_3/2σ_13 - γ_4/2σ_14; - γ_2/2σ_12 -γ_2σ_22 -γ_2+γ_3/2σ_23 -γ_2+γ_4/2σ_24; - γ_3/2σ_31 - (γ_2+γ_3)/2σ_32 -γ_3σ_33 - (γ_3+γ_4)/2σ_34; - γ_4/2σ_41 - (γ_2+γ_4)/2σ_42 - (γ_3+γ_4)/2σ_43 -γ_4σ_44; ]], Here γ_i is the spontaneous decay from the states |i⟩ to |i-1⟩. F̂_μν is the Langevin atomic forces. The differential equations describing the propagation and temporal evolution of the quantum field operator is: (∂/∂ t+c∂/∂ z)â(z,t) =iξ𝒩σ̂_12(z,t), where 𝒩 is the atomic density. The Fourier transforms of the quantum operator satisfy the following equation: 1/k_p∂/∂ zâ(ω)=χâ(w)+F̂_a, where k_p is the wave vector of the probe, χ is the susceptibility of the medium dressed by the coupling light and MW field. F̂_â=∑_m=2,3,4iξ𝒩B_1mF̂_1m. The coefficients column [B_1m] is 1/S×[ Ω _MWΩ _MW^*+ 4Γ_13Γ_14; - 2 i Ω _cΓ_14; - Ω _cΩ _MW ]. Here S=Γ_12(4Γ_13Γ_14+Ω_MWΩ_MW^*)+Γ_14Ω_cΩ_c^* with Γ_12=iΔ_p+γ_2/2, Γ_13=i(Δ_p+Δ_c)+γ_3/2,Γ_14=iΔ+γ_4/2 and Δ=Δ_c+Δ _MW+Δ _p. The formula of susceptibility in Rydberg atoms can be expressed as χ=i𝒩|μ_21|^2/ϵ_0ħΓ_13Γ_14+Ω_MWΩ^*_MW/4/Γ_12[Γ_13Γ_14+Ω_MWΩ^*_MW/4]+Ω_cΩ^*_c/4Γ_14. The formally integration of quantum operator is â_out=e^-(ϵ+iϕ)â_in+√(1-e^-2(ϵ+iϕ))â_υ, where ϵ=-Im(χ)k_pl is the absorption index, ϕ=-Re(χ)k_pl is the dispersion index, l is the length of the atomic medium. â_υ(t)=∫ dτF̂_â(τ)e^iτ/√(1-e^-2(ϵ+iϕ)t). With γ_2>>γ_3 and γ_2>>γ_4 <cit.>, it is easy to verify ⟨â_υ(t)⟩=0, ⟨â_υ(t)â_υ(t')⟩=0, ⟨â_υ^†(t)â_υ^†(t')⟩=0, ⟨â_υ^†(t)â_υ(t')⟩=0 and ⟨â_υ(t)â_υ^†(t')⟩=δ(t-t') with D_1 and D_2 (see detail in Appendix). The input-output relation is consistent with Eq. (2) which denotes a induce of vacuum field â_ν in the loss measurement. From the above, we note that the transmission of probe light is a function of MW field E_MW. 
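A minimal numerical sketch of this dependence is given below (Python/NumPy). All rates are written in units of γ_2, and the Rabi frequencies, decay rates, and the overall scale C are illustrative placeholders rather than parameters of this work; only the functional form of the dressed susceptibility is taken from the text, and the sign of the absorption index follows the field convention of Eq. (<ref>).

import numpy as np

def probe_transmission(Omega_MW, Omega_c=0.5, Delta_p=0.0, Delta_c=0.0, Delta_MW=0.0,
                       gamma2=1.0, gamma3=1e-3, gamma4=1e-3, C=1.0):
    # Dressed susceptibility of the four-level ladder system; C lumps together the
    # atomic density, dipole moment, k_p and the medium length l (placeholder value).
    G12 = 1j*Delta_p + gamma2/2
    G13 = 1j*(Delta_p + Delta_c) + gamma3/2
    G14 = 1j*(Delta_p + Delta_c + Delta_MW) + gamma4/2
    num = G13*G14 + abs(Omega_MW)**2/4
    den = G12*num + abs(Omega_c)**2/4 * G14
    eps = C * np.imag(num/den)       # absorption index, up to the text's sign convention
    return np.exp(-2*eps)

# The MW field closes the EIT window: the transmitted probe intensity drops as Omega_MW
# grows, which is the readout used to infer E_MW.
for O_mw in (0.0, 0.01, 0.02, 0.05):
    print(f"Omega_MW/gamma2 = {O_mw:5.2f}  ->  probe transmission ~ {probe_transmission(O_mw):.3f}")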
Obviously, the intensity fluctuation and slope simultaneously affect the sensitivity for measuring microwave. At the position with a large slope, we can sensitively measure the MW field by observing the intensity of transmitted probe light, as shown in the Fig.<ref> (b) and (c). § SENSITIVITY OPTIMIZATION In this chapter, we will analyze the optimal operating point of RAEs in classical light and squeezed light schemes respectively, and give their best sensitivity. As we have described above, the maximum quantum enhancement can be reached when ϵ→0. However, the slope ∂⟨Î⟩ /∂ E_MW at this point approach zero, therefore leading to a poor sensitivity. In order to sensitively measure the weak MW field, it is necessary to find an operating point with a large slope in the classical scheme, since the noise and slope jointly determine the sensitivity as shown in Fig. <ref>(c). We employ a dressed MW field E_MW^0 whose frequency resonant with Rydberg transitions. The dressed MW field cause the splitting of transmitted peak, therefore engineering the transmission and slope of the probe. We focus on the ability of the system in sensing a very weak MW field δ E_MW with a dressed MW field E_MW^0, here we define a MW field E_MW=E_MW^0+δ E_MW. The sensitivity in measuring δ E_MW field depends on Δ_c and E_MW^0. Since the E_MW^0 determines the distance between the two peak Δ f and Δ_c leads to an asymmetry of the splitting peaks. In Fig.<ref> (a) and (b), we compare the performance in measuring MW field of cold RAEs by coherent light and squeezed light injected schemes, respectively. The sensitivity is plotted as a function of the detuning of coupling light Δ_c and strength of dressed MW field E_MW^0, the optimal sensitivity is labelled with a white circle where Δ_c=0, Δ_p=0, and a small dressed MW field is applied. In order to reveal the difference between quantum and classical strategies more clearly, their sensitivity evolve with the dressed MW field when Δ_c=0 is shown in Fig.<ref> (c). Employing the squeezing light can improve the sensitivity compared with the classical light with different E_MW, as shown by the red line (ii) and the blue line (i). In the case of squeezed light, the optimum sensitivity is 2.1 × 10^-11V/m with a dressed E_MW^0=1 × 10^-4V, which has G_q=3 compared with classical light according to Eq. (<ref>). While the optimal sensitivity of classical light case is 4.1 × 10^-11V/m at E_MW^0= 2 × 10^-4V. The noise of the squeezed light is extremely sensitive to the loss, which leads to the inconsistency of the optimal sensitivity points compared with the classical light. A suitable photon number α= 10^7 <cit.> is chosen here to optimize the sensitivity of RAES by balancing the collision rates and power broadening from increasing the laser power. Since the thermal motion of atoms are inevitable, Doppler broadening should be considered when the system runs at room temperature. Next, We discuss the influence from Doppler effect and give the optimal sensitivity in the case of hot atomic vapor. The atoms in the vapor satisfy Maxwell-Boltzmann distribution of velocities, f(v)=√(M/2π k_BT) e^-Mv^2/2k_BT where M is the mass of atom, k_B is the Boltzmann constant, v is the velocities and T is temperature. The atoms move with different velocities leading to a revision of the detuning, Δ^' _c=Δ _c-κ_cf(v), Δ^' _p=Δ _p+κ_pf(v) Here, κ_c and κ_p are the wave number of probe and coupling light, respectively. 
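A sketch of the corresponding velocity average is given below (Python/NumPy). It assumes counter-propagating probe and coupling beams and writes the Doppler shifts in the usual wavenumber-times-velocity form; the velocity width, wave numbers, and single-atom response parameters are illustrative placeholders in units of γ_2, not values used in this work.

import numpy as np

def im_chi(Delta_p, Delta_c, Omega_c=0.5, Omega_MW=0.02, gamma2=1.0, gamma3=1e-3, gamma4=1e-3):
    # Imaginary part of the dressed susceptibility for one velocity class (Delta_MW = 0).
    G12 = 1j*Delta_p + gamma2/2
    G13 = 1j*(Delta_p + Delta_c) + gamma3/2
    G14 = 1j*(Delta_p + Delta_c) + gamma4/2
    num = G13*G14 + Omega_MW**2/4
    return np.imag(num / (G12*num + Omega_c**2/4*G14))

def doppler_averaged_im_chi(sigma_v=2.0, k_p=1.0, k_c=0.6, n_v=2001):
    v = np.linspace(-5*sigma_v, 5*sigma_v, n_v)        # velocity grid
    w = np.exp(-v**2 / (2*sigma_v**2))                  # Maxwell-Boltzmann weights
    w /= w.sum()
    vals = np.array([im_chi(Delta_p=k_p*u, Delta_c=-k_c*u) for u in v])
    return float((w * vals).sum())

print(f"cold-atom Im(chi) on resonance: {im_chi(0.0, 0.0):.3f}")
print(f"Doppler-averaged Im(chi):       {doppler_averaged_im_chi():.3f}")
# The thermal average smears the narrow EIT feature and lowers the slope of the
# transmission readout, consistent with the poorer hot-atom sensitivity discussed next.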
Fig.<ref> (d-f) give the relationship between the detuning Δ_c, amplitude E_MW of MW field and sensitivity of classical light (d) and squeezed light (e) injection in the case of hot atoms. For hot atoms, the trend of the sensitivity is consistent with that of cold atoms. Although the number of hot atoms increases by three orders of magnitude relative to the number of cold atoms, due to the distribution of velocities broaden the transmission spectrum leads to the smaller slope, and larger absorption, which result in a smaller signal and a decrease in quantum enhancement, respectively. The overall variation leads to a worse sensitivity compared with cold atoms. We fixed the detuning Δ_c=0, and scanning the E_MW^0, as shown in Fig.<ref> (f). The optimal sensitivity of the RAEs with the classical and squeezed light injection is 4× 10^-8V/m and 2.4× 10^-8V/m, respectively, at about E_MW^0=6.5 × 10^-2V. § CONCLUSION In summary, we have investigated a absorptive measurement scheme that beat the limitation of PSN by employing squeezed light. The quantum enhancement increases linearly with gain when there is no absorption, and tends to saturate with gain when absorption exists. Loss is the main limitation for the current strategy to fully utilize the injected quantum resource. Furthermore, we have studied the entanglement-assisted microwave electrometer in the Rydberg atomic by employing squeezed light with the absorptive measurement scheme. The noise squeezing makes it possible to break the bottleneck that the limited power of the sensing field restricts the sensitivity due to the laser-induced collision rates and power broadening in the atomic system. Our theoretical analysis shows the quantum advantage in our strategy is possibly maintained in absorptive sensor at large range of measuring MW field by optimally choosing the operating parameters of VBS, optical field, and atomic vapor. Squeezed-light-assisted RAEs outperform the classical scenario when the atomic vapor is operated at both cold and hot. We notice that RAEs also induce the change of phase of probe field. The MW field sensing can also be achieved by quantum enhanced phase measurement. Our research paves the way for the promising outlook of quantum light source in the field of absorptive measurement for future. § FUNDINGS We acknowledge financial support from the Innovation Program for Quantum Science and Technology (2021ZD0303200), the National Natural Science Foundation of China (grant nos. 12234014, 12204304, 11904227, and 11654005), the Shanghai Municipal Science and Technology Major Project (grant no. 2019SHZDZX01), the Fellowship of China Postdoctoral Science Foundation (grant nos. 2020TQ0193, 2021M702146, 2021M702150, 2021M702147, and 2022T150413), the Sailing Program of the Science and Technology Commission of Shanghai Municipality (19YF1421800), the Fundamental Research Funds for the Central Universities, and the National Key Research and Development Program of China (grant no. 2016YFA0302001). W.Z. acknowledges additional support from the Shanghai Talent Program. § DISCLOSURES The authors declare that there are no conflicts of interest related to this article. § APPENDIX §.§ Steady state In zeroth-order perturbation expansion, in which â go to zero, the Heisenberg-Langevin equations for σ̂_11, σ̂_22, σ̂_33, σ̂_44, σ̂_23, σ̂_32, σ̂_24, σ̂_42, σ̂_34, σ̂_43 atomic operators are decoupled. The mean values of these operators are required for the next order solution. 
We assume the coupling light and MW field to propagate without depletion, as we verified numerically. Then the subset of equations for the mean value variables ⟨σ̂_11⟩, ⟨σ̂_22⟩,⟨σ̂_33⟩,⟨σ̂_44⟩, ⟨σ̂_23⟩, ⟨σ̂_32⟩, ⟨σ̂_24⟩, ⟨σ̂_42⟩, ⟨σ̂_34⟩, ⟨σ̂_43⟩ to be solved at the steady state is written in matricial form as following: ([I]_9×9∂/∂ t-[M_0])[Σ_0]=[S_0], where [I]_9×9 is the 9 × 9 identity matrix. [ M_0 ] = [ [ -γ_4 γ_2-γ_4 γ_3-γ_4 0 0 0 0 0 0; 0 -γ_2 0 -iΩ_c/2 iΩ_c/2 0 0 0 0; 0 0 -γ_3 iΩ_c/2 -iΩ_c/2 0 0 Ω_MW/2 -Ω_MW/2; 0 -iΩ_c/2 iΩ_c/2 i2Δ_c-γ_3-γ_2/2 0 -iΩ_MW/2 0 0 0; 0 iΩ_c/2 -iΩ_c/2 0 -i2Δ_c-γ_3-γ_2/2 0 iΩ_MW/2 0 0; 0 0 0 -iΩ_MW/2 0 -β_1/2 0 iΩ_c/2 0; 0 0 0 0 iΩ_MW/2 0 -β_2/2 0 -iΩ_c/2; -iΩ_MW/2 -iΩ_MW/2 -iΩ_MW 0 0 iΩ_c/2 0 -γ_3-γ_4-i2Δ_MW/2 0; iΩ_MW/2 iΩ_MW/2 iΩ_MW 0 0 0 -iΩ_c/2 0 -γ_3-γ_4+i2Δ_MW/2; ]], [ [Σ̂_0] ] = [ [ σ̂_11; σ̂_22; σ̂_33; σ̂_23; σ̂_32; σ̂_24; σ̂_42; σ̂_34; σ̂_43 ]], [ [S_0] ] = [ [ γ_4; 0; 0; 0; 0; 0; 0; iΩ_MW/2; -iΩ_MW/2 ]]. where β_1= γ_2+γ_4+i2Δ_c+i2Δ_MW and β_2= γ_2+γ_4-i2Δ_c-i2Δ_MW. The steady state solution of Eq. (<ref>) is [⟨Σ_0⟩]=[M_0]^-1[S_0], §.§ Atomic Heisenberg-Langevin equations The first order solution for the three coherences σ_12, σ_13, σ_14 is determined by the following matricial equation ([I]_3×3∂/∂ t-[M_1])[Σ̂_1]=[S_1]â+[F̂_1], with [ M_1 ] = [ [ -γ_2/2-iΔ_p -iΩ_c/2 0; -iΩ_c/2 -γ_3/2-i(Δ_p+Δ_c) -iΩ_MW/2; 0 -iΩ_MW/2 -γ_4/2-iΔ; ]], [ [Σ̂_1] ] = [ [ σ̂_12; σ̂_13; σ̂_14 ]], [ [S_1] ] =ξ[ [ i⟨σ̂_22-σ̂_11⟩; i⟨σ̂_23⟩; i⟨σ̂_24⟩ ]], [ [F̂_1] ] = [ [ F̂_12; F̂_13; F̂_14 ]]. Here, [I]_3×3 is the 3 × 3 identity matrix. The annihilation operators being denoted â, and Δ=Δ_p+Δ_c+Δ_MW. The Langevin atomic forces [F̂] are characterized by their diffusion coefficients matrix [D_1]+[D_2], defined as [D_1]2δ(t-t')δ(z-z')=⟨[F̂_1(z,t)][F̂_1(z,t')^†]⟩ [D_2]2δ(t-t')δ(z-z')=⟨[F̂_1(z,t')^†][F̂_1(z,t)]⟩ Langevin diffusion coefficients for operators can be calculated using the generalized Einstein relation<cit.>. The [D_1] and [D_2] diffusion matrices are given by [ D_1 ] = [ [ γ_2 0 0; 0 γ_3 0; 0 0 γ_4; ]], [ D_2 ] = [ [ 0 0 0; 0 0 0; 0 0 0; ]]. By linearizing Eq. (<ref>) we derive for the mean values [⟨Σ̂_1⟩]=-[M_1]^-1[S_1] [⟨â⟩] and for the Fourier-transformed quantum fluctuations [δΣ̂_1]=-([M_1]+iω[I]_3×3)^-1[S_1][δâ]-([M_1]+iω[I]_3×3)^-1[F̂_1] Here, ω is the analysis frequency. 1 communication1U. L. Rohde, J. C. Whitaker and H. Zahnd, Communications receivers: principles and design 4th edn (McGraw-Hill Education) (2017). communication2D. H. Meyer, K. C. Cox, F. K. Fatemi and P. D. Kunz, “Digital communication with Rydberg atoms and amplitude-modulated MW fields." Appl. Phys. Lett. 112 (21), 211108 (2018). remotesensing1A. K. Robinson, N. Prajapati, D Senic and M. T. Simons, “Determining the angle-of-arrival of a radio-frequency source with a Rydberg atom-based sensor." Appl. Phys. Lett. 118 (11), 114001 (2021). remotesensing2Y. Kim, J. S. Kimball, K. C. McDonald and J. Glassy, “Developing a global data record of daily landscape freeze/thaw status using satellite passive microwave remote sensing." IEEE. T. Geosci. Remote. 49 (3), 949-960 (2010). medicaldiagnosis1A. N. Reznik and N. V. Yurasova, “Electrodynamics of microwave near-field probing: Application to medical diagnostics." J. Appl. Phys. 98 (11), 114701 (2005). medicaldiagnosis2N. K. Nikolova, “Microwave imaging for breast cancer." IEEE. Microw. Mag. 12 (7), 78-94 (2011). medicaldiagnosis3M. Guardiola, S. Buitrago, G. Fernández-Esparrach, J. M. O'Callaghan, J. Romeu, M. Cuatrecasas, H. Córdova, M. Á. G. 
Ballester and O. Camara, “Dielectric properties of colon polyps, cancer, and normal mucosa: Ex vivo measurements from 0.5 to 20 GHz." Med. Phys. 45 (8), 3768-3782 (2018). RAEs1J. A. Sedlacek, A. Schwettmann, H. Kübler, R. Löw, T. Pfau and J. P. Shaffer, “Microwave electrometry with Rydberg atoms in a vapour cell using bright atomic resonances." Nat. Phys. 8, 819-824 (2012). RAEs2J. A. Sedlacek, A. Schwettmann, H. Kübler and J. P. Shaffer “Atom-based vector microwave electrometry using rubidium Rydberg atoms in a vapor cell." Phys. Rev. Lett. 111 (6), 063001 (2013). EIT1M. Fleischhauer, A. Imamoglu and J. P. Marangos “Electromagnetically induced transparency: Optics in coherent media." Rev. Mod. Phys. 77, 633 (2005). A-T2 S. H. Autler, C. H. Townes. “Stark effect in rapidly varying fields." Physical Review. 100, 703 (1955). dipoleT. Gallagher, Rydberg Atoms (Cambridge University, 2005). 1nVM. Jing, Y. Hu, J. Ma, H. Zhang, L. Zhang, L. Xiao and S. Jia “Atomic superheterodyne receiver based on microwave-dressed Rydberg spectroscopy." Nat. Phys. 16 (9), 911-915 (2020). PSN1S. Kumar, H. Fan, H. Kübler, J. Sheng and J. P. Shaffer, “Atom-based sensing of weak radio frequency electric fields using homodyne readout." Sci. Rep. 7 (1), 1-10 (2017). PSN2S. Kumar, H. Fan, H. Kübler, A. J. Jahangiri and J. P. Shaffer, “Rydberg-atom based radio-frequency electrometry using frequency modulation spectroscopy in room temperature vapor cells." Opt. Express. 25 (8), 8625-8637 (2017). Squeezedlight1M. A. Taylor, W. P. Bowen. “Quantum metrology and its application in biology." Phys. Rep. 615, 1-59 (2016). Squeezedlight2R. Schnabel. “Squeezed states of light and their applications in laser interferometers." Phys. Rep. 684, 1-51 (2017). Squeezedlight3F. Wolfgramm, A. Cere, F. A. Beduini, A. Predojević, M. Koschorreck, and M. W. Mitchell, “Squeezed-light optical magnetometry."Phys. Rev. Lett. 105 (5), 053601 (2010). Squeezedlight4P. M. Anisimov, G. M. Raterman, A. Chiruvelli, W. N. Plick, S. D. Huver, H. Lee, and J. P. Dowling, “Quantum metrology with two-mode squeezed vacuum: parity detection beats the Heisenberg limit." Phys. Rev. Lett. 104 (10), 103602 (2010). Squeezedlight5W. Du, J. Kong, G. Bao, P. Yang, J. Jia, S. Ming, C-H Yuan, J. F. Chen, Z. Y. Ou, M. W. Mitchell, and W. Zhang, “SU (2)-in-SU (1, 1) nested interferometer for high sensitivity, loss-tolerant quantum metrology." Phys. Rev. Lett. 128 (3), 033601 (2022). photonnumberJ. P. Shaffer, H. Kübler, “A read-out enhancement for microwave electric field sensing with Rydberg atoms." Proc. SPIE. 10674, 106740 (2018). LangevinL. Davidovich, “Sub-Poissonian processes in quantum optics". Rev. Mod. Phys. 68, 127 (1996).
http://arxiv.org/abs/2307.04440v1
20230710094116
Time-Frequency-Space Transmit Design and Signal Processing with Dynamic Subarray for Terahertz Integrated Sensing and Communication
[ "Yongzhi Wu", "Chong Han" ]
cs.IT
[ "cs.IT", "eess.SP", "math.IT" ]
Time-Frequency-Space Transmit Design and Signal Processing with Dynamic Subarray for Terahertz Integrated Sensing and Communication Yongzhi Wu, Graduate Student Member, IEEE, and Chong Han, Member, IEEE This paper will be presented in part at IEEE SPAWC, September 2023 <cit.>. Yongzhi Wu is with the Terahertz Wireless Communications (TWC) Laboratory, Shanghai Jiao Tong University, Shanghai, China (Email: [email protected]). Chong Han is with the Terahertz Wireless Communications (TWC) Laboratory, Department of Electronic Engineering and Cooperative Medianet Innovation Center (CMIC), Shanghai Jiao Tong University, Shanghai, China (Email: [email protected]). ================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================ empty Terahertz (THz) integrated sensing and communication (ISAC) enables simultaneous data transmission with Terabit-per-second (Tbps) rate and millimeter-level accurate sensing. To realize such a blueprint, ultra-massive antenna arrays with directional beamforming are used to compensate for severe path loss in the THz band. In this paper, the time-frequency-space transmit design is investigated for THz ISAC to generate time-varying scanning sensing beams and stable communication beams. Specifically, with the dynamic array-of-subarray (DAoSA) hybrid beamforming architecture and multi-carrier modulation, two ISAC hybrid precoding algorithms are proposed, namely, a vectorization (VEC) based algorithm that outperforms existing ISAC hybrid precoding methods and a low-complexity sensing codebook assisted (SCA) approach. Meanwhile, coupled with the transmit design, parameter estimation algorithms are proposed to realize high-accuracy sensing, including a wideband DAoSA MUSIC (W-DAoSA-MUSIC) method for angle estimation and a sum-DFT-GSS (S-DFT-GSS) approach for range and velocity estimation. Numerical results indicate that the proposed algorithms can realize centi-degree-level angle estimation accuracy and millimeter-level range estimation accuracy, which are one or two orders of magnitudes better than the methods in the millimeter-wave band. In addition, to overcome the cyclic prefix limitation and Doppler effects in the THz band, an inter-symbol interference- and inter-carrier interference-tackled sensing algorithm is developed to refine sensing capabilities for THz ISAC. Terahertz integrated sensing and communications, ultra-massive MIMO, Orthogonal frequency division multiplexing, hybrid beamforming § INTRODUCTION §.§ Background and Motivations To address the rapidly growing demand for wireless data rates and the emergence of new application scenarios, the communication community is seeking new spectrum opportunities as well as new functionalities for sixth-generation (6G) and beyond wireless networks <cit.>. Following the former trend of moving up to higher frequencies, the Terahertz (THz) band is viewed as one of the key technologies to enable enormous potential in 6G wireless systems <cit.>. 
Another promising exploration is to use integrated sensing and communication (ISAC) technology, which can endow wireless networks with sensing capabilities to realize the mapping of the physical world to the digital world <cit.>. Leveraging the ultra-broad bandwidth and the ultra-massive antenna arrays in the THz band, the integration of these two technologies, i.e., Terahertz integrated sensing and communication (THz ISAC) <cit.>, can achieve ultra-accurate sensing and Terabit-per-second data rates simultaneously. Despite the promising vision of THz ISAC, critical challenges arise when designing THz ISAC transmit signal. First, there exists severe path loss in the THz band, which includes free path loss, reflection, and scattering losses. These losses strictly limit the maximum sensing and communication distance, and degrade sensing accuracy and data rate. Second, with the power constraints, to compensate for such severe path loss, ultra-massive multiple-input multiple-output (UM-MIMO) antenna arrays with beamforming are used to generate highly directional beams <cit.>. Thus, energy-efficient and low-complexity beamforming algorithms need to be developed. Third, the generation of directional beams restricts the angular coverage of sensing. In general, communication prefers stable beams toward users to enable tractable data detection, while sensing requires sweeping beams to scan possible targets in the surrounding environment <cit.>. To realize omnidirectional sensing with directional beams, effective and efficient narrowbeam management schemes, including transmit design in the time-frequency domain and beamforming design in the spatial domain are demanded to realize simultaneous sensing and communication for THz ISAC systems. Meanwhile, the receive processing encounters significant challenges, especially for sensing parameter estimation algorithms in THz UM-MIMO systems, which are affected by the beamforming architectures and peculiarities of THz channels. First, the sensing algorithm for range and velocity estimation needs to be redesigned, since an additional dimension (namely, spatial domain) is introduced in the received signal model when using the ultra-large dimensional antenna arrays in the THz band. Second, with high channel sparsity due to strong power loss of non-line-of-sight (NLoS) paths, the delay spread of the THz communication channel is reduced <cit.>. In this case, to utilize broad bandwidth with a fixed subcarrier number, we can increase subcarrier spacing, which is inversely proportional to the symbol duration. Thus, the symbol duration and cyclic prefix (CP) length are reduced in classical multi-carrier communication systems, such as orthogonal frequency-division multiplexing (OFDM). Nevertheless, the round-trip delay of sensing targets should be smaller than the CP duration with classical OFDM sensing algorithms <cit.>. For communication waveforms with reduced CP, there might exist inter-symbol interference (ISI) effects on the received sensing signal, which cause existing sensing methods inapplicable. Third, as the Doppler shifts are proportional to the carrier frequency, the Doppler effects become even stricter in the THz band. If maintaining current waveform numerology of 5G wireless systems, Doppler effects in the presence of high-mobility targets may cause inter-carrier interference (ICI) effects and severely degrade sensing capabilities. 
Thus, to tackle these challenges, signal processing design in terms of sensing algorithms is vital to realize high-accuracy sensing, while data recovery has been well investigated <cit.>. §.§ Related Works §.§.§ Waveform Design By jointly designing the ISAC transmit signal, sensing and communication can share the hardware and signal processing modules. From the perspective of the time-frequency domain, various ISAC waveforms have been investigated in the literature. As adopted in 4G and 5G standards, CP-OFDM is a promising candidate for ISAC although being a communication-centric design <cit.>. Since an OFDM waveform suffers from a high PAPR issue, especially in uplink transmission, some single-carrier waveforms, such as discrete Fourier transform (DFT) spread OFDM (DFT-s-OFDM), are investigated for THz ISAC systems, due to their low PAPR compared to OFDM <cit.>. Recently, orthogonal time frequency space (OTFS) has been studied in ISAC applications <cit.>, thanks to its advantages under doubly-selective channels in high-mobility scenarios. Furthermore, a DFT spread OTFS (DFT-s-OTFS) waveform is proposed in <cit.> to reduce the PAPR of OTFS for THz ISAC. However, the high complexity of data detection for MIMO-OTFS constitutes a serious problem. Despite the PAPR issue, OFDM is still a potential waveform in the THz band, since it has good compatibility with UM-MIMO and enables flexible time-frequency domain resource allocation among multiple users <cit.>. Thus, wideband UM-MIMO systems with multi-carrier modulations are investigated for THz communications in many recent works, including beamforming design <cit.>, channel estimation <cit.>, multiple access <cit.>, carrier aggregation <cit.>. Nevertheless, there is a lack of research on THz ISAC in this regard, especially focusing on the transmit design and sensing algorithms in the time-frequency-space domain. §.§.§ Beamforming Design Pertaining to MIMO-OFDM systems, with conventional fully-digital and analog beamforming architectures, multi-target estimation can be realized by utilizing opportunistic sensing <cit.> and multibeam optimization <cit.>. Nevertheless, the fully-digital structure exhibits high hardware complexity and power consumption for THz ISAC systems with large-dimensional antenna arrays, while the analog beamforming architecture can only support one data stream with limited spatial multiplexing gain <cit.>. As a combined approach, hybrid beamforming can realize comparable data rates with the fully-digital structure and exhibits less hardware complexity. Based on the full-connected (FC) hybrid beamforming architecture, authors in <cit.> propose a consensus-ADMM approach to design the analog and digital beamformers by jointly optimizing the spectral efficiency (SE) and spatial spectrum matching error of sensing. With the array-of-subarray (AoSA) structure, which further reduces the number of phase shifters and power consumption at the cost of sacrificing data rate, the ISAC hybrid beamformers can be designed by optimizing the Cramér-Rao bound <cit.> or minimizing the weighted Euclidean distance between the hybrid precoding matrix and the fully digital beamforming matrix <cit.>. To balance SE and power consumption, a dynamic array-of-subarray (DAoSA) hybrid precoding architecture is proposed in <cit.>, while the ISAC hybrid precoding design with dynamic subarray has not been investigated yet. 
In addition, most of the aforementioned works design beamformers with some prior knowledge of target angles <cit.>, which is acceptable in target tracking scenarios but not available in general target estimation, i.e., target discovery mode. Thus, beam scanning-based sensing to discover targets with narrow beams in the THz band is still a significant issue to be addressed. §.§ Contributions and Paper Structure The contributions of this work are summarized as follows: * We present a time-frequency-space transmit design framework for THz ISAC systems by considering a dynamic subarray hybrid beamforming architecture and multi-carrier waveform. In this framework, we develop a vectorization (VEC) based and a sensing codebook-assisted (SCA) ISAC hybrid precoding algorithms for the DAoSA structure. Our proposed ISAC hybrid precoding algorithms can realize the entire angular directions of sensing and data transmission by generating scanning sensing beams at different time slots and stable communication beams toward the user. Meanwhile, the proposed VEC algorithm outperforms existing ISAC hybrid precoding methods, and the SCA approach reduces the computational complexity. * Based on the time-frequency-space domain transmit signal design, we propose parameter estimation algorithms at the sensing receiver, including a wideband DAoSA MUSIC (W-DAoSA-MUSIC) algorithm for angle estimation, and a sum-DFT and golden section search (S-DFT-GSS) method for range and velocity estimation. Simulation results indicate that the sensing accuracy with the proposed sensing algorithms can achieve centi-degree-level for angle estimation, millimeter-level for range estimation, and decimeter-per-second-level for velocity estimation. * We further propose an ISI- and ICI-tackled sensing algorithm to overcome the CP limitation on the maximum sensing distance and estimation error caused by high-mobility targets. While the ICI is studied in <cit.>, the ISI effects have not been considered in the literature. Compared to the ISI-unaware estimation, the ISI-tackled sensing algorithm can accurately estimate the target with a round-trip delay larger than the CP duration. In contrast with ICI-unaware estimation, the ICI-tackled algorithm can overcome the masking problem of weak targets caused by the side lobes of the strong target in the presence of ICI effects. The structure of the remainder of this paper is organized as follows. The system framework with the time-frequency-space transmit design for THz ISAC is presented in Sec. <ref>. The ISAC hybrid precoding algorithms are elaborated in Sec. <ref>. The sensing estimation algorithm design with the DAoSA architecture and multi-carrier modulation is proposed in Sec. <ref>. The ISI- and ICI- tackled sensing algorithm for THz ISAC is developed in Sec. <ref>. Sec. <ref> illustrates extensive simulation results. Finally, the paper is concluded in Sec. <ref>. Notations: ℂ denotes the set of complex numbers; 𝐀(i, j) is the entry on the ith row and jth column of 𝐀; 𝔼{·} defines the expectation operation; The superscripts (·)^T and (·)^H stand for the transpose and Hermitian transpose operations; The notations ⊗ and ⊙ refer to the Kronecker product and Hadamard Product, respectively; det(·) and ·_F denote the determinant and Frobenius norm of a matrix; (·)^† indicates the Moore-Penrose pseudo inverse; vec(·) represents the vectorization operation. § SYSTEM FRAMEWORK As shown in Fig. 
<ref>, we propose a THz ISAC system framework based on a wideband UM-MIMO architecture, in which the ISAC transceiver simultaneously senses potential targets in the surrounding spatial environment and sends information symbols to one communication receiver (without loss of generality) via the designed transmit signal in the time-frequency-space domain. Specifically, in the time-frequency domain, the data signal is modulated with orthogonal frequency-division multiplexing (OFDM) and spread across M subcarriers. In the spatial domain, the data streams at each subcarrier are precoded through a digital precoder 𝐅_BB∈ℂ^N_RF^t× N_s and an analog precoder 𝐅_RF∈ℂ^N_t × N_RF^t, where N_s denotes the number of data streams and N_RF^t refers to the number of transmit RF chains, with N_s ⩽ N_RF^t ≪ N_t. As for the transceiver structure, the ISAC transceiver is equipped with an N_t-element transmit uniform planar array (UPA) to transmit the ISAC waveform and an N_r-element receive UPA to perform sensing echo processing. The communication receiver has an N_r-element UPA to accomplish signal reception and data detection. The transmit antenna arrays adopt a DAoSA hybrid beamforming structure <cit.>. With the DAoSA structure, the transmit antennas are divided into N_RF^t subarrays and each RF chain connects to each subarray with K_t = N_t / N_RF^t elements through a switch. Similarly, the received signal is combined through the analog combiner and the digital combiner with N_RF^r RF chains, and each receiver subarray contains K_r = N_r / N_RF^r elements. §.§ Time-Frequency-Space Transmit Design At the transmitter side, the ISAC system maps the transmitted bit streams to a large amount of data frames. A data frame is divided into Q time slots, each of which contains M × N data symbols, where M and N stand for the numbers of subcarriers and symbols during a time slot. In the multi-carrier hybrid beamforming architecture, at the qth time slot, the data symbols 𝐬_q[m, n] ∈ℂ^N_s× 1, q = 1, 2, ⋯, Q, m = 0, 1, ⋯, M - 1, n = 0, 1, ⋯, N - 1, which are generated from N_s data streams with 𝔼{𝐬_q[m, n] 𝐬^H_q[m, n]} = 1/N_s𝐈_N_s, are first precoded by a digital beamformer 𝐅_BB, q[m] and mapped to the mth subcarrier in the frequency domain, 𝐱_q[m, n] = 𝐅_BB, q[m] 𝐬_q[m, n]. Then, we perform the inverse discrete Fourier transform (IDFT) to transform the frequency-domain data blocks to the time-domain signal and add one cyclic prefix (CP) for each symbol before conducting up-conversion and analog beamforming 𝐅_RF, q∈ℂ^N_t× N_RF^t. At the qth time slot, the proposed THz ISAC system with the time-frequency-space three-dimensional transmit design generates scanning beams toward the qth sensing direction and stable beams toward the communication user. Note that all subcarriers share the same analog precoder while the digital precoder is performed for each subcarrier. For the nth symbol during the qth time slot, the transmit time-domain signal can be expressed as, 𝐱̃_q, n (t) = ∑_m=0^M-1𝐅_RF, q𝐅_BB, q[m] 𝐬_q[m, n] e^j2π mΔ f t, where t denotes the time instant and Δ f refers to the subcarrier spacing. Then, the symbol duration T equals to 1/Δ f and the total symbol duration is expressed as T_o = T + T_cp with the CP duration of T_cp = M_cp/M T, where M_cp is the CP size. Thus, the duration of a time slot is T_s = N T_o and the frame duration can be expressed as T_f = Q T_s. 
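A compact numerical sketch of this per-symbol transmit processing is given below (Python/NumPy). The array sizes, data symbols, and precoders are random placeholders rather than the numerology of Table <ref>, and since 𝐅_RF, q is common to all subcarriers it can equivalently be applied after the IDFT in the sketch.

import numpy as np

M, M_cp, N_s, N_rf, N_t = 64, 16, 2, 4, 256

s    = (np.random.randn(M, N_s) + 1j*np.random.randn(M, N_s)) / np.sqrt(2*N_s)   # s_q[m, n]
F_bb = [np.linalg.qr(np.random.randn(N_rf, N_s) + 1j*np.random.randn(N_rf, N_s))[0]
        for _ in range(M)]                                       # per-subcarrier digital precoders
F_rf = np.exp(1j*2*np.pi*np.random.rand(N_t, N_rf)) / np.sqrt(N_t)   # frequency-flat analog precoder

# Frequency-domain precoded signal per RF chain, unitary IDFT over subcarriers, CP insertion,
# then the analog precoder maps the RF-chain samples to the antenna domain.
X_rf   = np.stack([F_bb[m] @ s[m] for m in range(M)])            # (M, N_rf)
x_time = np.fft.ifft(X_rf, axis=0) * np.sqrt(M)                  # (M, N_rf)
x_cp   = np.concatenate([x_time[-M_cp:], x_time], axis=0)        # cyclic prefix of length M_cp
x_ant  = x_cp @ F_rf.T                                           # (M + M_cp, N_t)

print("samples per OFDM symbol (with CP):", x_ant.shape[0])      # M + M_cp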
To generate stable beams towards the communication user and scanning beams for searching sensing targets, the transmit beamformers are fixed during a time slot and vary at different time slots. In this work, we consider a DAoSA hybrid beamforming architecture <cit.>, in which the connections between RF chains and subarrays can be intelligently adjusted through a network of switches. The analog precoding matrix 𝐅_RF, q can be written as, 𝐅_RF, q = 𝐅_P, q⊙𝐏_S, where 𝐅_P, q∈ℂ^N_t× N_RF^t denotes the phase shifter network matrix and 𝐏_S∈{0, 1}^N_t × N_RF^t describes the binary switch network matrix, which can be expressed as 𝐏_S=[[ 𝐩_1,1 𝐩_1,2 … 𝐩_1, N_RF^t; 𝐩_2,1 𝐩_2,2 … 𝐩_2, N_RF^t; ⋮ ⋮ ⋱ ⋮; 𝐩_N_RF^t, 1 𝐩_N_RF^t, 2 … 𝐩_N_RF^t, N_RF^t ]], where 𝐩_i, j stands for the status of the switch between the ith subarray and the jth RF chain. If this switch is closed, 𝐩_i, j = 1_K_t is an all-one vector. Conversely, 𝐩_i, j = 0_K_t is a zero vector. The phase shifter network matrix 𝐅_P, q satisfies a constant modulus constraint, i.e., the modulus of its elements is 1. Then, the analog precoding matrix 𝐅_RF, q is given by 𝐅_RF, q=[[ 𝐟_1,1 𝐟_1,2 … 𝐟_1, N_RF^t; 𝐟_2,1 𝐟_2,2 … 𝐟_2, N_RF^t; ⋮ ⋮ ⋱ ⋮; 𝐟_N_RF^t, 1 𝐟_N_RF^t, 2 … 𝐟_N_RF^t, N_RF^t ]], where 𝐟_i, j∈ℂ^K_t × 1 represents the joint precoding vector of the switch and the phase shifters between the ith subarray and the jth RF chain. When this switch is closed, 𝐟_i, j should satisfy the unit modulus constraint. When the switch is open, 𝐟_i, j is a zero vector. We denote the feasible set of the analog precoder 𝐅_RF, q as ℱ. Moreover, the normalized transmit power constraint is expressed as, 𝐅_RF, q𝐅_BB, q[m]_F^2 = N_s. §.§ Communication Model With multi-carrier transmission, the communication received signal of the mth subcarrier and the nth symbol at qth time slot after the decoding process is expressed as 𝐫_q[m, n] = √(ρ)𝐂_BB^H[m] 𝐂_RF^H 𝐇_c[m] 𝐅_RF, q𝐅_BB, q[m] 𝐬_q[m, n] + 𝐂_BB^H[m] 𝐂_RF^H 𝐧_q[m, n], where ρ describes the average received power, 𝐂_BB[m]∈ℂ^N_RF^r× N_s is the digital combining matrix, 𝐂_RF∈ℂ^N_r × N_RF^r is the analog combining matrix, and 𝐧_q[m, n] refers to the additive white Gaussian noise with independent and identically distribution 𝒞𝒩(0, σ_n^2). In the THz band, the channel is sparse and dominated by the line-of-sight (LoS) path and several reflected rays. Thus, as a benchmark, the multi-path channel model based on ray-tracing methods of the channel matrix 𝐇_c[m] at the mth subcarrier can be given by <cit.>, 𝐇_c[m] = γα_L[m] 𝐚_r(θ_L^r, ϕ_L^r) 𝐚_t^H(θ_L^t, ϕ_L^t) + γ∑_l=1^L_Nα_N, l[m] 𝐚_r(θ_N, l^r, ϕ_N, l^r) 𝐚_t^H(θ_N, l^t, ϕ_N, l^t), where γ = √(N_t N_r/L_N + 1) and L_N represents the number of non-line-of-sight (NLoS) paths. Moreover, α_L[m] and α_N, l[m] denote the channel gain of the LoS path and lth NLoS path at mth subcarrier, respectively. In addition, θ^r(θ^t) and ϕ^r(ϕ^t) refer to the azimuth and elevation angles of arrival/departure (AoAs/AoDs). In the case of a UPA in the yz-plane with W and L elements on the y and z axes respectively, the array response vector can be expressed by, 𝐚(θ, ϕ) = 𝐚_z(ϕ) ⊗𝐚_y(θ, ϕ), where 𝐚_y(θ, ϕ) = 1/√(W) [1, ⋯, e^jπ (W - 1) sin(θ) sin(ϕ)]^T, 𝐚_z(ϕ) = 1/√(L) [1, ⋯, e^jπ (L - 1) cos(ϕ)]^T, and θ stands for the azimuth angle, and ϕ refers to the elevation angle. For THz communications, we need to design hybrid precoders to maximize spectral efficiency. 
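The UPA response vector and the DAoSA masking 𝐅_RF = 𝐅_P ⊙𝐏_S defined above can be sketched in a few lines (Python/NumPy); the switch pattern, phase values, and array sizes below are arbitrary illustrations.

import numpy as np

def upa_response(theta, phi, W, L):
    # a(theta, phi) = a_z(phi) ⊗ a_y(theta, phi) for a W x L UPA in the yz-plane
    a_y = np.exp(1j*np.pi*np.arange(W)*np.sin(theta)*np.sin(phi)) / np.sqrt(W)
    a_z = np.exp(1j*np.pi*np.arange(L)*np.cos(phi)) / np.sqrt(L)
    return np.kron(a_z, a_y)

N_rf, K_t = 4, 64                                   # RF chains and antennas per subarray
N_t = N_rf * K_t
switches = np.eye(N_rf, dtype=bool)                 # AoSA-like pattern: one subarray per RF chain
switches[0, 1] = True                               # close one extra switch (dynamic connection)

P_S  = np.kron(switches, np.ones((K_t, 1)))         # binary switch matrix, N_t x N_rf
F_P  = np.exp(1j*2*np.pi*np.random.rand(N_t, N_rf)) # phase-shifter network (unit modulus)
F_RF = F_P * P_S                                    # Hadamard product: open switches -> zero blocks

a = upa_response(np.deg2rad(30), np.deg2rad(90), W=16, L=16)
print("steering vector norm:", np.round(np.linalg.norm(a), 3))   # unit norm by construction
print("closed switches:", int(switches.sum()), "of", N_rf*N_rf)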
The achievable spectral efficiency can be expressed as <cit.> R_q = 1/M∑_m=0^M-1log(𝐈_N_s + ρ/N_s𝐑_n^-1𝐂_BB^H[m] 𝐂_RF^H 𝐇_c[m] ×𝐅_RF, q𝐅_BB, q[m] 𝐅_BB, q^H[m] 𝐅_RF, q^H 𝐇_c^H[m] 𝐂_RF𝐂_BB[m]), where 𝐑_n = σ_n^2 𝐂_BB^H[m] 𝐂_RF^H 𝐂_RF𝐂_BB[m] is a noise covariance matrix. The optimization problem of maximizing R_q at the transmitter side is equivalent to minimizing the Euclidean distance between the optimal fully digital precoder 𝐅_c[m] and the hybrid precoder as 1/M∑_m=0^M-1𝐅_c[m] - 𝐅_RF, q𝐅_BB, q[m]_F^2. Generally, the channel state information (CSI) can be known at both transmitter and receiver by utilizing channel estimation <cit.> and is assumed to be time-invariant during a frame duration. Then, from the singular value decomposition (SVD) of the channel 𝐇_c[m], the unconstrained optimal precoder 𝐅_c[m] and decoder 𝐂_c[m] are comprised of the first N_s columns of the right and the left singular value matrices. §.§ Sensing Model In the THz band, directional beams are used to compensate for severe path loss and improve received sensing signal power, which limits the angular range of sensing targets. To realize entire-space sensing, we design a codebook-based beam-scanning scheme for THz sensing. For the azimuth angle, the whole sensing angular domain is divided into Q scanning directions, ω = [ω_1, ω_2, ⋯, ω_Q]^T, each of which corresponds to a time slot. We can set Q = W and design the sensing beamforming vector as the qth column from a discrete Fourier transform (DFT) codebook, by which the transmitter can generate W orthogonal beamforming vectors and steer signals towards W independent sensing directions. Thus, the sensing codebook can be written as, 𝐀 = 𝐚_z(ϕ) ⊗ [𝐚_y,1(ω_1, ϕ), ⋯, 𝐚_y, W(ω_Q, ϕ)] where 𝐚_y, q(ω_q, ϕ) = 1/√(W) [1, ⋯, e^jπ (W - 1)sin(ω_q)sin(ϕ)]^T, and sin(ω_q) = -1 + 1/W + (q -1) 2/W for q = 1, 2, ⋯, W. In this case, the sensing angular window Ω_q at the qth time slot contains angles from arcsin(-1+(q-1)2/W) to arcsin(-1+q2/W). At the sensing receiver, the frequency domain received signal of the mth subcarrier and the nth symbol at qth time slot is denoted as 𝐲_q[m, n]∈ℂ^N^r_RF× 1, which is given by 𝐲_q[m, n] = 𝐖_RF, q^H 𝐇_s[m, n] 𝐅_RF, q𝐅_BB, q[m] 𝐬_q[m, n] + 𝐖_RF, q^H 𝐞_q[m, n] where 𝐖_RF, q∈ℂ^N_r× N_RF^r denotes the combing matrix at the sensing receiver and 𝐞_q[m, n] represents the AWGN vector. At the ISAC transceiver side, the sensing receiver is collocated with the transmitter. Based on the OFDM radar sensing channel <cit.> and MIMO channel models <cit.>, the sensing channel matrix 𝐇_s[m, n] is expressed as, 𝐇_s[m, n] = √(N_t N_r/P)∑_p=1^P h_p e^-j2π m Δ f τ_p e^j2π ((q - 1) T_s + n T_o) ν_p ×𝐚_r(θ_p, ϕ_p) 𝐚_t^T(θ_p, ϕ_p), where P stands for the number of sensing targets, each of which corresponds to one back-reflected path with complex channel coefficient h_p. For the pth target, the delay τ_p and the Doppler shift ν_p are calculated by τ_p = 2 r_p/c_0 (τ_p ⩽ T_cp) and ν_p = 2 f_c v_p/c_0 (ν_p ≪Δ f), where r_p and v_p refer to the range and relative velocity of the p targets, respectively. c_0 denotes the speed of light and f_c describes the carrier frequency. Moreover, θ_p and ϕ_p represent the azimuth and elevation angle-of-arrival of the pth target. Beamforming design for sensing aims at achieving the highest beamforming gain towards the sensing direction. 
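The sensing codebook construction described above can be sketched as follows (Python/NumPy; W, L, and the fixed elevation are placeholders). The check at the end confirms that the W codewords are mutually orthogonal, so that one codeword per time slot scans an independent azimuth direction.

import numpy as np

def sensing_codebook(W, L, phi0=np.pi/2):
    # Column q is a_z(phi0) ⊗ a_y,q(omega_q, phi0) with sin(omega_q) = -1 + 1/W + (q-1)*2/W
    a_z = np.exp(1j*np.pi*np.arange(L)*np.cos(phi0)) / np.sqrt(L)
    cols = []
    for q in range(1, W + 1):
        s = -1 + 1/W + (q - 1)*2/W
        a_y = np.exp(1j*np.pi*np.arange(W)*s*np.sin(phi0)) / np.sqrt(W)
        cols.append(np.kron(a_z, a_y))
    return np.stack(cols, axis=1)                    # (W*L, W)

A = sensing_codebook(W=16, L=16)
G = A.conj().T @ A
print("max |off-diagonal correlation|:",
      np.round(np.abs(G - np.diag(np.diag(G))).max(), 6))
# ~0: the W beams form an orthogonal DFT codebook covering the whole azimuth range.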
Thus, at the qth time slot, the optimal sensing precoder 𝐅_s, q∈ℂ^N_t × N_s can be generated from the qth column of the sensing codebook, namely, 𝐅_s, q = 1/√(N_t)𝐀(:, q) 1_N_s^T with a normalized factor of 1/√(N_t). Then, we need to minimize the Euclidean distance, 1/M∑_m=0^M-1𝐅_s, q - 𝐅_RF, q𝐅_BB, q[m]_F^2. At the sensing receiver side, 𝐖_RF, q is fixed during a time slot and the receive sensing beams point to N_RF^r random directions within Ω_q at the qth time slot. §.§ Problem Formulation At the THz ISAC transmitter, we need to design the analog and digital beamformers to simultaneously realize a communication link with ultra-fast data rates and provide a desired beampattern for high-accuracy sensing of surrounding targets. Different from the conventional hybrid precoding design problem for communication, the optimal ISAC hybrid precoders should be sufficiently “close" to the time-invariant and frequency-dependent optimal communication precoder and the time-varying and frequency-independent optimal sensing precoder at the same time. Based on the above models and analysis, we can formulate the following multi-objective optimization problem, min_𝐅_RF, q, 𝐅_BB, q[m] 1/M∑_m=0^M-1𝐅_c[m] - 𝐅_RF, q𝐅_BB, q[m]_F^2, 1/M∑_m=0^M-1𝐅_s, q - 𝐅_RF, q𝐅_BB, q[m]_F^2 s.t. 𝐅_RF, q∈ℱ, 𝐅_RF, q𝐅_BB, q[m]_F^2 = N_s, m = 0, 1, ⋯, M - 1, for q = 1, 2, ⋯, Q. Since this problem has multiple objective functions and the constraints are non-convex, it is rather difficult to obtain the global optimal solution. In the next section, we propose two algorithms for the THz ISAC hybrid precoding optimization problem to yield near-optimal solutions. § HYBRID PRECODING DESIGN FOR THZ ISAC For the multi-objective ISAC hybrid precoding problem, we can introduce a weighting factor η (0 ≤η≤ 1), which provides the tradeoff between sensing and communication. Then, the hybrid precoding problem (<ref>) can be formulated as, min_𝐅_RF, q, 𝐅_BB, q[m] 1/M∑_m=0^M-1(η𝐅_c[m] - 𝐅_RF, q𝐅_BB, q[m]_F^2 + (1 - η) 𝐅_s, q - 𝐅_RF, q𝐅_BB, q[m]_F^2 ) s.t. 𝐅_RF, q∈ℱ, 𝐅_RF, q𝐅_BB, q[m]_F^2 = N_s, m = 0, 1, ⋯, M - 1. where η = 0 or η = 1 stands for either sensing-only or communication-only hybrid beamforming design problem. Without loss of generality, we can consider solving the hybrid precoding problem at different time slots separately. Then, a common approach is to use alternating minimization techniques <cit.>, i.e., alternately solving for 𝐅_RF, q and 𝐅_BB, q[m]. Hereby, with the irregular structure of the DAoSA analog precoder, we propose an ISAC hybrid precoding algorithm by modifying the vectorization-based (VEC) algorithm that was used for THz communications in <cit.>. §.§ VEC-based ISAC Hybrid Precoding Algorithm §.§.§ Digital Precoding Design When fixing the analog precoder, we can impose an orthogonal constraint that 𝐅_BB, q[m] is unitary to mitigate the interference among data streams. Then, the problem (<ref>) can be transferred to, min_𝐅_BB, q[m] 1/M∑_m=0^M-1𝐆_q[m] - 𝐁_q 𝐅_BB, q[m]_F^2 s.t. 𝐅_RF, q∈ℱ, 𝐅_BB, q^H[m]𝐅_BB, q[m] = 𝐈_N_s, m = 0, 1, ⋯, M - 1. where 𝐆_q[m] = [√(η)𝐅_c^T[m], √(1 - η)𝐅_s, q^T ]^T, 𝐁_q = [√(η)𝐅_RF, q^T, √(1 - η)𝐅_RF, q^T ]^T. Similar to the solution of the so-called Orthogonal Procrustes problem (OPP) <cit.>, the solution to (<ref>) is given by, 𝐅_BB, q[m] = 𝐕_1 𝐔^H, where 𝐆_q^H[m] 𝐁_q = 𝐔Σ𝐕^H is the SVD of 𝐆_q^H[m] 𝐁_q, and 𝐕_1 is the first N_s columns of 𝐕. 
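A short sketch of this digital-precoder update is given below (Python/NumPy); random matrices stand in for the optimal communication precoder, the sensing precoder, and the current analog precoder.

import numpy as np

def digital_precoder_update(F_c_m, F_s_q, F_RF_q, eta):
    # Stack the weighted targets and analog precoders, then take the Procrustes solution
    # F_BB = V_1 U^H from the SVD of G^H B, which gives unitary (orthonormal-column) F_BB.
    G = np.concatenate([np.sqrt(eta) * F_c_m, np.sqrt(1 - eta) * F_s_q], axis=0)
    B = np.concatenate([np.sqrt(eta) * F_RF_q, np.sqrt(1 - eta) * F_RF_q], axis=0)
    U, _, Vh = np.linalg.svd(G.conj().T @ B)         # G^H B = U Sigma V^H
    V1 = Vh.conj().T[:, :G.shape[1]]                 # first N_s columns of V
    return V1 @ U.conj().T                           # N_rf x N_s

N_t, N_rf, N_s, eta = 256, 4, 2, 0.5
rnd = lambda *shape: (np.random.randn(*shape) + 1j*np.random.randn(*shape)) / np.sqrt(2)
F_BB = digital_precoder_update(rnd(N_t, N_s), rnd(N_t, N_s), rnd(N_t, N_rf), eta)
print("F_BB^H F_BB = I:", np.allclose(F_BB.conj().T @ F_BB, np.eye(N_s), atol=1e-10))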
§.§.§ Analog Precoding Design When fixing the digital precoder, we carry the vectorization process and the analog precoding design problem can be formulated as, min_𝐅_RF, q 1/M∑_m=0^M-1(ηvec(𝐅_c[m]) - vec(𝐅_RF, q𝐅_BB, q[m])_2^2 + (1 - η) vec(𝐅_s, q) - vec(𝐅_RF, q𝐅_BB, q[m])_2^2 ). After removing the zero elements in vec(𝐅_RF, q), we need to solve its non-zero part 𝐟_eff∈ℂ^N_c K_t × 1, where N_c denotes the number of closed switches. This is a phase rotation problem, whose solution is given by 𝐟_eff = exp(j {∑_m=0^M-1𝐃^H vec(η𝐅_c[m] 𝐅_BB, q^H[m] + (1 - η) 𝐅_s, q𝐅_BB, q^H[m]) }), where 𝐃 equals to 𝐈_N_t N_RF^t with d_1th, ⋯, d_N_t N_RF^t - N_c K_tth columns punctured, which correspond to the indices of zero elements in vec(𝐅_RF, q). Based on 𝐟_eff, the effective analog precoder 𝐅_RF, q can be recovered. With (<ref>) and (<ref>), we can alternatively calculate 𝐅_BB, q[m] and 𝐅_RF, q until convergence. After that, we finally update the digital precoders as 𝐅_BB, q[m] = √(N_s)/𝐅_RF, q𝐅_RF, q^†𝐆_q[m] _F𝐅_RF, q^†𝐆_q[m]. While the VEC algorithm provides a satisfactory solution, it requires a number of iterations in each time slot. Nevertheless, the optimal communication precoder 𝐅_c[m] remains the same at different time slots during a frame duration, while only the optimal sensing precoder 𝐅_s, q changes. Motivated by this, we can calculate the initial solutions of analog and digital precoders from 𝐅_c[m] and then update the analog precoders only once at each time slot based on the sensing codebook. Thus, we further propose the following low-complexity sensing codebook-assisted (SCA) ISAC hybrid precoding algorithm. §.§ Low-Complexity SCA Algorithm Instead of using the weighted objective function in (<ref>), we can define a weighted ISAC precoder as, 𝐅_q[m] = β (√(η)𝐅_RF, q + √(1 - η)𝐅_BB, q[m]) with a normalized factor of β = √(N_s) / √(η)𝐅_RF, q + √(1 - η)𝐅_BB, q[m]_F. Before designing the ISAC analog and digital precoders, we can first obtain the solution of analog precoder for the communication-only hybrid precoding design problem, 𝐅_RF = min_𝐅_RF, 𝐅_BB[m] 1/M∑_m=0^M-1𝐅_c[m] - 𝐅_RF𝐅_BB[m]_F^2 s.t. 𝐅_RF∈ℱ, 𝐅_RF𝐅_BB[m]_F^2 = N_s, m = 0, 1, ⋯, M - 1, which can be directly solved by the VEC algorithm. Based on the initial analog precoder 𝐅_RF, we can update the analog precoder 𝐅_RF, q at the qth time slot with the desired sensing beamforming vector 𝐀(:, q). Specifically, we calculate the error between the analog precoding vectors of the phase shifters with closed switches and corresponding columns of 𝐀(:, q) as, E_i , j = 𝐀((i-1)K_t+1:iK_t, q) - 𝐅_RF((i-1)K_t+1:iK_t, j)_2, for all (i, j) satifying 𝐩_i,j = 1_K_t. Then, we find the first K_s minimum values of E_i, j with the indices {(i_1, j_1), ⋯, (i_K_s, j_K_s)}, where K_s = ⌈ N_c (1-η)⌉ denotes the number of subarray beamforming vectors that need to be updated. Next, we can set the designed analog precoder 𝐅_RF, q = 𝐅_RF and update it as, 𝐅_RF, q((i_k-1)K_t+1:i_k K_t, j_k) = 𝐀((i_k-1)K_t+1:i_k K_t, q) for k = 1, ⋯, K_s. The digital precoders are calculated as 𝐅_BB, q[m] = √(N_s)/𝐅_RF, q𝐅_RF, q^†𝐅_q[m] _F𝐅_RF, q^†𝐅_q[m]. § SENSING ESTIMATION ALGORITHM DESIGN WITH DAOSA HYBRID BEAMFORMING In this section, we propose the sensing parameter estimation algorithms at the sensing receiver. The task of the sensing receiver is to estimate the angle, range, and velocity of targets, given the transmit signal and the received sensing signal. 
As the whole sensing angular window is divided into Q scanning directions, at the qth time slot, we only sense the targets whose azimuth angles of arrival are within -Ω_q, given the knowledge of the received signal 𝐲_q and the transmit signal 𝐬_q. For angle estimation, multiple signal classification (MUSIC) is a subspace-based method with super-resolution accuracy. Hereby, we adopt a DAoSA-MUSIC algorithm in <cit.> to estimate the target angle and propose the wideband DAoSA-MUSIC algorithm by extending to the wideband transmission. We need to reconstruct the observation matrix by performing stacking operations on the received signals at different subcarriers. After estimating each angle parameter, we develop a range and velocity parameter estimation algorithm over two stages, i.e., sum-DFT and golden section search (S-DFT-GSS). §.§ W-DAoSA-MUSIC for Angle Estimation At the pth time slot, we construct the observation vector of the sensing receiver 𝐲_q[m, n] ∈ℂ^N_RF^r × 1 as, 𝐲_q[m, n] = 𝐖_RF, q^H 𝐀_r 𝐒_q[m, n] + 𝐄_q[m, n], where 𝐒_q[m, n] = Λ_q[m, n] 𝐀_t^T 𝐅_RF, q𝐅_BB, q[m] 𝐬_q[m, n], 𝐀_r = [𝐚_r(θ_1, ϕ_1), ⋯, 𝐚_r(θ_P, ϕ_P)], 𝐀_t = [𝐚_t(θ_1, ϕ_1), ⋯, 𝐚_t(θ_P, ϕ_P)], Λ_q[m, n] = √(N_t N_r/P)diag{h_1^(q)[m, n], ⋯, h_P^(q)[m, n]}, 𝐄_q[m, n] = 𝐖_RF, q^H 𝐞_q[m, n], and h_p^(q)[m,n] = h_p e^-j2π m Δ f τ_p e^j2π ((q - 1) T_s + n T_o) ν_p. Then, we can stack all 𝐲_q[m, n] into one matrix as, 𝐘_θ, q = [[ 𝐲_q, 0 … 𝐲_q, N-1 ]] with 𝐲_q, n = [𝐲_q[0, n],⋯, 𝐲_q[M-1, n]]. The precoders and the receive steering matrix 𝐀_r remain the same at different symbols during a time slot. Then (<ref>) can be written as, 𝐘_θ, q = 𝐖_RF, q^H 𝐀_r 𝐒_θ, q + 𝐄_q, where 𝐒_θ, q = [𝐒_q[0, 0], ⋯, 𝐒_q[M-1, N-1]] is regarded as the P × M N-dimensional equivalent signal source matrix, and 𝐄_q ∈ℂ^N_RF^r × M N refers to the noise matrix. Based on (<ref>), we can perform the W-DAoSA-MUSIC algorithm to estimate the azimuth AoAs of targets. Given the reconstructed observation matrix 𝐘_θ, q, the covariance matrix can be calculated as, 𝐑_θ, q = 1/M N𝐘_θ, q𝐘_θ, q^H. Then we can conduct the eigenvalue decomposition (EVD) as, 𝐑_θ, q = 𝐔_s Σ_s 𝐔_s^H + 𝐔_n Σ_n 𝐔_n^H, where Σ_s ∈ℂ^P_q × P_q consists of P_q leading eigenvalues, Σ_n ∈ℂ^(N_RF^r - P_q) × (N_RF^r - P_q) contains the remaining eigenvalues and P_q denotes the number of targets whose azimuth AoAs are within -Ω_q. With the signal subspace 𝐔_s ∈ℂ^N_RF^r × P_q and the noise subspace 𝐔_n ∈ℂ^N_RF^r × (N_RF^r - P_q), the pseudo spectrum of W-DAoSA-MUSIC can be formulated as, 𝐏_music(θ, ϕ) = 𝐚^H(θ, ϕ) 𝐖_RF, q𝐖_RF, q^H 𝐚(θ, ϕ)/𝐚^H(θ, ϕ) 𝐖_RF, q𝐔_n 𝐔_n^H 𝐖_RF, q^H 𝐚(θ, ϕ). Finally, the AoA estimation (θ̂_p, ϕ̂_p) can be obtained by searching the peaks of the MUSIC spectrum within the angles of -Ω_q, expressed as (θ̂_p, ϕ̂_p) = max_θ, ϕ𝐏_music(θ, ϕ). §.§ S-DFT-GSS for Range and Velocity Estimation For range and velocity estimation, the received signal model can be expressed as, 𝐲_q[m, n] = ∑_p=1^P h_p^(q) e^j2π n T_o ν_p e^-j2π m Δ f τ_p𝐱_p, q[m, n] + 𝐞_q[m, n], where 𝐱_p, q[m, n] = 𝐖_RF, q^H 𝐇_θ(θ_p, ϕ_p) 𝐅_RF, q𝐅_BB, q[m] 𝐬_q[m, n], 𝐇_θ(θ, ϕ) = 𝐚_r(θ, ϕ) 𝐚_t^T(θ, ϕ) 𝐞_q[m, n] = 𝐖_RF, q^H 𝐞_q[m, n] and h_p^(q) = √(N_t N_r/P) h_p e^j2π (q - 1) T_s ν_p. 
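Before moving on to the delay-Doppler estimator, the angle-estimation step above can be summarized in a short sketch (Python/NumPy). The stacked observations, the receive combiner, and the steering function are placeholders; only the covariance, EVD, and pseudo-spectrum evaluation of the W-DAoSA-MUSIC method are shown.

import numpy as np

def wdaosa_music_spectrum(Y_q, W_rf_q, steering, angles, num_targets):
    # Y_q: (N_rf_r x MN) stacked observations; W_rf_q: (N_r x N_rf_r) analog combiner.
    R = Y_q @ Y_q.conj().T / Y_q.shape[1]            # sample covariance over all M*N snapshots
    eigval, eigvec = np.linalg.eigh(R)               # ascending eigenvalues
    U_n = eigvec[:, : R.shape[0] - num_targets]      # noise subspace
    P = []
    for th in angles:
        a = steering(th)                             # N_r-element array response a(theta, phi_0)
        num = a.conj() @ W_rf_q @ W_rf_q.conj().T @ a
        den = a.conj() @ W_rf_q @ U_n @ U_n.conj().T @ W_rf_q.conj().T @ a
        P.append(np.real(num) / max(np.real(den), 1e-12))
    return np.asarray(P)

# Example use (placeholders): the AoA estimates are the peaks of the spectrum within the
# current scanning window -Omega_q, searched on a fine angular grid, e.g.
#   grid = np.deg2rad(np.linspace(-60, 60, 1201))
#   P = wdaosa_music_spectrum(Y_q, W_rf_q, steering, grid, P_q)
#   theta_hat = grid[np.argmax(P)]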
For each estimated AoA parameter (θ̂_p, ϕ̂_p), we can construct a maximum likelihood (ML) estimator by minimizing the log-likelihood function, given by (τ̂_p, ν̂_p) = min_τ, ν, h∑_u=1^N_RF^r𝐘_u, q - hΨ(τ, ν) ⊙𝐗̂_u, q_F^2, where 𝐘_u, q = [[ 𝐲_q(u)[0, 0] … 𝐲_q(u)[0, N - 1]; ⋮ ⋱ ⋮; 𝐲_q(u)[M-1, 0] … 𝐲_q(u)[M-1, N-1] ]], Ψ(τ, ν) = Ψ_τΨ_ν^T, 𝐗̂_u, q = [[ 𝐱̂_q(u)[0, 0] … 𝐱̂_q(u)[0, N - 1]; ⋮ ⋱ ⋮; 𝐱̂_q(u)[M-1, 0] … 𝐱̂_q(u)[M-1, N-1] ]], with Ψ_τ = [e^-j2π 0 Δ f τ, e^-j 2π 1 Δ f τ, ⋯, e^-j 2π (M - 1) Δ f τ]^T, Ψ_ν = [e^j2π 0 T_o ν, e^j2π 1 T_o ν, ⋯, e^j2π (N - 1) T_o ν]^T, 𝐱̂_q[m, n] = 𝐖_RF, q^H 𝐇_θ(θ̂_p, ϕ̂_p) 𝐅_RF, q𝐅_BB, q[m] 𝐬_q[m, n], for u = 1, 2, ⋯, N_RF^r. Next, this minimization problem can be transformed to the maximization problem, (τ̂_p, ν̂_p) = max_τ, ν𝐏_ML(τ, ν), where 𝐏_ML(τ, ν) = |∑_u=1^N_RF^rTr((Ψ(τ, ν) ⊙𝐗̂_u, q)^H 𝐘_u, q)|^2/∑_u=1^N_RF^rΨ(τ, ν) ⊙𝐗̂_u, q_F^2 = |∑_u=1^N_RF^rTr((Ψ(τ, ν) ⊙𝐗̂_u, q)^H 𝐘_u, q)|^2 The solution in (<ref>) is obtained by searching (τ, ν) at which 𝐏_ML(τ, ν) achieves a maximum value in the region [0, 1/Δ f)× [-1/2T_o, 1/2 T_o). To reduce the computational complexity, we can design a two-phase estimation method. Specifically, in the first phase, we operate the on-grid search within a discretized set of delay and Doppler axes with step sizes1/MΔ f and 1/N T_o, which can be implemented with the 2D DFT algorithm. In the second phase, based on the coarse estimation result, we conduct the off-grid estimation by introducing a 2D golden section search (GSS) method. We describe the proposed S-DFT-GSS estimation method in the following. §.§.§ Phase I To compute the ML estimator in (<ref>), we first perform an on-grid search on the discretized grid Γ = {(m_0/M Δ f, n_0/N T_o), m_0 = 0, ⋯, M - 1, n_0 = -N/2, ⋯, N/2-1}, as (m̂_0, n̂_0) = max_(τ, ν)∈Γ𝐏_ML(m_0/M Δ f, n_0/NT). Hereby, we need to calculate the M× N-dimensional ML estimator profiles on Γ, which can be computed from the sum of N_RF^r 2D DFT outputs, given by 𝐏_ML(m_0/M Δ f, n_0/NT) = |𝐠_d(m_0 + 1, [n_0]_N + 1)|^2 where 𝐠_d = ∑_u=1^N_RF^r𝐅_M^H (𝐗̂_u, q^* ⊙𝐘_u, q) 𝐅_N, and 𝐅_M∈ℂ^M× M and 𝐅_N ∈ℂ^N × N refer to the normalized DFT matrices. Then we determine that the delay parameter lies between m̂_0 - 1/M Δ f and m̂_0 + 1/M Δ f and the Doppler parameter is between n̂_0 - 1/N T_o and n̂_0 + 1/N T_o. Thus, the search region Γ_g for off-grid estimation in the second phase becomes, {(τ, ν), m̂_0 - 1/M Δ f≤τ≤m̂_0 + 1/M Δ f, n̂_0 - 1/N T_o≤ν≤n̂_0 + 1/N T_o}. §.§.§ Phase II In this phase, we perform an off-grid search over the continuous-valued region Γ_g, as (τ̂_p, ν̂_p) = max_(τ, ν)∈Γ_g𝐏_ML(τ, ν). Hereby, we can utilize the 2D golden section search technique, each step of which reduces the interval of uncertainty by the golden ratio. Finally, the estimated velocity and range are given by r̂_p = τ̂_p c_0/2 and v̂_p = ν̂_p c_0/2 f_c, respectively. § ISI- AND ICI-TACKLED SENSING ALGORITHM In the previous section, the proposed estimation algorithm is based on the assumption that the round-trip delay of targets is not longer than the CP duration and the Doppler shifts are much smaller than the subcarrier spacing, i.e., the sensing channel is both ISI- and ICI-free. Nevertheless, when it comes to the THz band, this assumption might become invalid in some cases. First, as the carrier frequency increases, the Doppler shift in the THz band grows much larger than the microwave band, which may cause inter-carrier interference and degrade sensing accuracy, especially in high-mobility scenarios. 
Second, since the communication delay spread decreases in the THz band, a larger subcarrier spacing can be used, which shortens the symbol and CP durations. However, this limits the maximum sensing distance when the ISI- and ICI-unaware sensing algorithm in Sec. <ref> is still used, even if the link budget is sufficient. In this section, we first derive the received signal model with ISI and ICI caused by the sensing channel and then develop an ISI- and ICI-tackled sensing algorithm to overcome the resulting estimation problem. Since ISI and ICI arise in the time-frequency domain, we focus on the time-frequency-domain signal model and design in this section and simplify the spatial-domain notation.

§.§ Received Signal Model with ICI and ISI

During a time slot, we denote the data signal at the mth subcarrier and the nth symbol as X_m, n. Then, the transmit baseband signal with the CP part is expressed as,
s(t) = ∑_m=0^M-1∑_n=0^N-1 X_m, n rect(t - n T_o) e^j 2π m Δ f (t - T_cp - n T_o),
where rect(t) refers to a rectangular pulse that is limited to [0, T_o]. At the sensing receiver, the baseband time-domain continuous signal r(t) is given by,
r(t) = ∑_p=1^Pα_p e^j2πν_p t s(t - τ_p) + w(t),
where α_p stands for the channel coefficient of the pth target, w(t) denotes the AWGN, and the delay and Doppler parameters are defined as in Sec. <ref>, with the assumptions relaxed from τ_p ⩽ T_cp to τ_p ⩽ T_s and from ν_p ≪Δ f to ν_p < Δ f. By sampling the received signal and removing the CP part, we obtain the baseband time-domain discrete signal,
r_m, n = r(t)|_t = nT_o + T_cp + m/M T = ∑_p=1^Pα_p e^j2 πν_p (n T_o + T_cp + m/M T) s(nT_o + T_cp + m/M T - τ_p) + w_m, n.
Hereby, the key step is to derive the sampled signal s_τ_p, m, n = s(nT_o + T_cp + m/M T - τ_p), given by
s_τ_p, m, n = ∑_m'=0^M-1∑_n'=0^N-1 X_m', n' rect((n - n')T_o + T_cp + m/M T - τ_p) × e^j 2π m' Δ f ((n - n') T_o + m/M T - τ_p ).
When k_p T_o ⩽τ_p < k_p T_o + T_cp with k_p = ⌊τ_p/T_o⌋ (⌊·⌋ stands for the floor function), we can obtain
s_τ_p, m, n = ∑_m'=0^M-1 X_m', n-k_p e^j2πm' m/M e^-j2π m' Δ f τ_p e^j2π m' k_p M_cp/M.
When k_p T_o + T_cp⩽τ_p < (k_p + 1) T_o, for m ⩾ (τ_p/T) M - M_cp - k_p(M+M_cp), s_τ_p, m, n is the same as that in (<ref>). For m < (τ_p/T) M - M_cp - k_p (M + M_cp), we obtain
s_τ_p, m, n = ∑_m'=0^M-1 X_m', n-k_p-1 e^j2πm' m/M e^-j2π m' Δ f τ_p e^j2π m' k_p T_cp/T e^j2π m' M_cp/M.
Based on the above derivations, we can obtain the time-domain input-output relation, i.e., the vector form of the time-domain received signal r_m, n at the qth time slot, 𝐫_q ∈ℂ^MN× 1, is expressed as,
𝐫_q = ∑_p=1^Pα_p Δ^(ν_p)𝐃_NΠ_2MN^(l_p + k_p M) vec( Π_M^-l_p (𝐃_l_pΠ_M^-M_cp + 𝐃̂_l_p) 𝐅_M^H 𝐛_τ_p [𝐗_q-1, 𝐗_q ] ) + 𝐰_q,
where 𝐗_q ∈ℂ^M× N denotes the time-frequency domain transmit signal at the qth time slot, l_p = max{0, ⌈ (τ_p/T) M - M_cp - k_p(M + M_cp) ⌉} (⌈·⌉ denotes the ceiling function), Δ^(ν_p) = diag(vec(𝐕_ν_p)) with 𝐕_ν_p(m, n) = e^j2πν_p (n T_o + T_cp + m/M T), the matrix Π_M∈ℂ^M× M refers to the forward cyclic-shift (permutation) matrix, 𝐃_N is the identity matrix 𝐈_2MN with the first MN rows punctured, 𝐃_l_p is the identity matrix 𝐈_M with the last M - l_p rows set to zero, 𝐃̂_l_p is the identity matrix 𝐈_M with the first l_p rows set to zero, 𝐛_τ_p = diag{b_τ_p^0, ⋯, b_τ_p^M-1} with b_τ_p = e^j2π(k_pT_cp/T - τ_p/T), and 𝐰_q is the noise vector.
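As a sanity check on this derivation, the snippet below generates the ISI/ICI-affected samples r_{m,n} by directly sampling r(t) as defined above, rather than through the closed-form expressions; echoes whose delay exceeds the CP naturally pull in the previous slot's waveform. Function names, the slot layout (N symbols back to back), and the brute-force evaluation are illustrative assumptions, not the simulation setup of this paper.

```python
import numpy as np

def received_samples(X_prev, X_curr, alphas, delays, dopplers, df, M_cp):
    """Brute-force generation of r_{m,n} for one slot by sampling r(t) directly.

    X_prev, X_curr : (M, N) frequency-domain symbols of the previous and current slot
    alphas, delays, dopplers : per-target alpha_p, tau_p, nu_p
    df             : subcarrier spacing Delta_f (so T = 1/df)
    M_cp           : number of CP samples (so T_cp = M_cp * T / M, T_o = T + T_cp)
    """
    M, N = X_curr.shape
    T = 1.0 / df
    T_cp = M_cp * T / M
    T_o = T + T_cp

    def s(t, X):
        # transmit signal of one slot:
        # sum_m sum_n X_{m,n} rect(t - n T_o) e^{j 2 pi m df (t - T_cp - n T_o)}
        n = np.floor(t / T_o).astype(int)
        val = np.zeros(t.shape, dtype=complex)
        ok = (n >= 0) & (n < N)
        m_idx = np.arange(M)[:, None]
        phase = np.exp(2j * np.pi * m_idx * df * (t[ok] - T_cp - n[ok] * T_o))
        val[ok] = np.sum(X[:, n[ok]] * phase, axis=0)
        return val

    # sampling instants after CP removal: t = n T_o + T_cp + (m/M) T
    m, n = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")
    t = n * T_o + T_cp + (m / M) * T
    r = np.zeros((M, N), dtype=complex)
    for a, tau, nu in zip(alphas, delays, dopplers):
        echo = s(t - tau, X_curr)                 # part of the echo inside the current slot
        echo += s(t - tau + N * T_o, X_prev)      # part spilling over from the previous slot
        r += a * np.exp(2j * np.pi * nu * t) * echo
    return r
```

For small M and N, this brute-force signal can be compared against the structured matrix form derived next.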
After performing the DFT on the matrix form of 𝐫_q, 𝐑_q = vec^-1(𝐫_q) ∈ℂ^M× N, we obtain the frequency-domain received signal 𝐲_q ∈ℂ^MN× 1 at the qth time slot, given by
𝐲_q = vec(𝐅_M 𝐑_q) = ∑_p=1^Pα_p 𝐇_p(τ_p, ν_p) [𝐱_q-1^T, 𝐱_q^T]^T + 𝐰_q,
where the matrix 𝐇_p(τ_p, ν_p) ∈ℂ^MN× 2MN is given by,
𝐇_p(τ_p, ν_p) = (𝐈_N ⊗𝐅_M) Δ^(ν_p)𝐃_NΠ_2MN^(l_p + k_p M)( 𝐈_2N⊗( Π_M^-l_p (𝐃_l_pΠ_M^-M_cp + 𝐃̂_l_p) 𝐅_M^H 𝐛_τ_p) ),
and 𝐱_q-1 = vec(𝐗_q-1), 𝐱_q = vec(𝐗_q). If the ISI and ICI effects are ignored, the input-output relation in the time-frequency domain is approximated by the following matrix form,
𝐘_q ≈∑_p=1^P α_p 𝐗_q ⊙Ψ(τ_p, ν_p) + 𝐖_q.
The ISI- and ICI-unaware estimation is based on this approximated input-output relation, which is inaccurate and causes estimation errors in the presence of ISI and ICI effects.

§.§ ISI- and ICI-tackled Estimator

Based on the received sensing signal model with ISI and ICI in (<ref>), we can obtain the ISI- and ICI-tackled estimator, given by
(τ̂, ν̂) = arg max_τ, ν |(𝐇_p(τ, ν) [𝐱_q-1^T, 𝐱_q^T]^T )^H 𝐲_q|^2 / ‖𝐇_p(τ, ν) [𝐱_q-1^T, 𝐱_q^T]^T‖_2^2.
The complexity of the proposed ISI- and ICI-tackled estimation algorithm depends on the computation of 𝐇_p(τ, ν) [𝐱_q-1^T, 𝐱_q^T]^T. This can be implemented with computationally efficient operations, including FFT algorithms, cyclic shifts, vectorization, and the Hadamard product. Thus, the overall computational complexity of this estimator is 𝒪(MN log (MN)).

§ NUMERICAL RESULTS

In this section, we evaluate the sensing and communication performance of the proposed precoding algorithms and sensing parameter estimation methods. The key simulation parameters are listed in Table <ref>; they follow the physical layer numerology for beyond-52.6 GHz communications in <cit.> and the THz link budget analysis in <cit.>. We consider a THz multipath channel with one LoS path and L_N = 4 NLoS paths. In the simulations, 2D beamforming is adopted, i.e., all elevation angles are set to ϕ_0 = 90^∘.

§.§ Performance of Hybrid Precoding Algorithms for THz ISAC

First, we evaluate the performance of the proposed VEC and SCA hybrid precoding algorithms for THz ISAC in terms of spectral efficiency and transmit beamforming gain toward the sensing direction. Specifically, we consider three hybrid precoding architectures, i.e., the FC, AoSA, and DAoSA structures. For comparison, the PE-AltMin approach <cit.> and the TAltMin algorithm <cit.> are used for the FC and the AoSA structures, respectively. The proposed VEC and SCA algorithms are performed for the DAoSA architecture, which is equivalent to the FC structure when N_c = (N_RF^t)^2 and to the AoSA structure when N_c = N_RF^t. Since we focus on evaluating the hybrid precoding design, the FC combining architecture is used at the communication receiver side. Moreover, the performance of fully digital precoding is evaluated as an upper bound. The subcarrier spacing is set to 1.92 MHz and the number of subcarriers equals 64. The signal-to-noise ratio (SNR) of the communication link is -20 dB.

As shown in Fig. <ref>, the performance tradeoff between spectral efficiency and transmit sensing beamforming gain for the different hybrid precoding algorithms is plotted by sweeping the weighting factor over [0, 1]. As expected, the spectral efficiency decreases as the transmit sensing beamforming gain improves, since more energy is concentrated toward the sensing direction. In the FC structure, the proposed VEC algorithm performs slightly better than the PE-AltMin approach and achieves performance close to fully digital precoding.
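For intuition, the following sketch assembles 𝐇_p(τ, ν)[𝐱_q-1^T, 𝐱_q^T]^T with explicit dense matrices for toy dimensions, directly following the definitions of Π, 𝐃_N, 𝐃_l_p, 𝐃̂_l_p, 𝐛_τ_p, and Δ^(ν_p) above, and then evaluates the ISI- and ICI-tackled objective on a grid. It is an illustration only: the cyclic-shift direction, the column-major vectorization, and the dense-matrix construction (instead of the FFT-based implementation mentioned above) are our assumptions, and the code is not meant to be efficient.

```python
import numpy as np
from scipy.linalg import dft

def apply_H(X_prev, X_curr, tau, nu, df, M_cp):
    """Return H_p(tau, nu) [x_{q-1}^T, x_q^T]^T for toy-sized M, N (dense matrices)."""
    M, N = X_curr.shape
    T, T_cp = 1.0 / df, M_cp / (M * df)
    T_o = T + T_cp
    F_M = dft(M, scale="sqrtn")                                    # normalized DFT matrix

    k = int(np.floor(tau / T_o))                                   # k_p
    l = max(0, int(np.ceil(tau / T * M - M_cp - k * (M + M_cp))))  # l_p

    Pi_M = np.roll(np.eye(M), 1, axis=1)                           # forward cyclic shift (assumed direction)
    Pi_2MN = np.roll(np.eye(2 * M * N), 1, axis=1)
    D_l = np.diag((np.arange(M) < l).astype(float))                # last M - l rows zeroed
    D_l_hat = np.diag((np.arange(M) >= l).astype(float))           # first l rows zeroed
    b = np.exp(2j * np.pi * (k * T_cp / T - tau / T))
    B = np.diag(b ** np.arange(M))                                 # b_tau_p

    G = (np.linalg.matrix_power(Pi_M.T, l)                         # Pi_M^{-l_p}
         @ (D_l @ np.linalg.matrix_power(Pi_M.T, M_cp) + D_l_hat)
         @ F_M.conj().T @ B)
    v = (G @ np.hstack([X_prev, X_curr])).flatten(order="F")       # vec(...)
    v = np.linalg.matrix_power(Pi_2MN, l + k * M) @ v              # Pi_2MN^{l_p + k_p M}
    v = v[M * N:]                                                  # D_N: puncture the first MN rows
    m, n = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")
    doppler = np.exp(2j * np.pi * nu * (n * T_o + T_cp + m / M * T)).flatten(order="F")
    v = doppler * v                                                # Delta^(nu_p)
    return (F_M @ v.reshape(M, N, order="F")).flatten(order="F")   # (I_N kron F_M) v

def isi_ici_tackled_search(y_q, X_prev, X_curr, tau_grid, nu_grid, df, M_cp):
    """Evaluate |h^H y|^2 / ||h||^2 with h = H_p(tau, nu)[x_{q-1}; x_q] on a coarse grid."""
    P = np.zeros((len(tau_grid), len(nu_grid)))
    for i, tau in enumerate(tau_grid):
        for j, nu in enumerate(nu_grid):
            h = apply_H(X_prev, X_curr, tau, nu, df, M_cp)
            P[i, j] = np.abs(np.vdot(h, y_q)) ** 2 / np.linalg.norm(h) ** 2
    i, j = np.unravel_index(np.argmax(P), P.shape)
    return tau_grid[i], nu_grid[j], P
```

In a practical implementation, the per-candidate matrix products would be replaced by the FFT, cyclic-shift, and Hadamard-product operations noted above to reach the stated 𝒪(MN log(MN)) complexity.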
In the AoSA architecture, the VEC algorithm realizes higher spectral efficiency than the TAltMin method when η > 0.5, i.e., when communication dominates the precoding design. Moreover, while the proposed VEC algorithm outperforms the SCA method for all dynamic hybrid beamforming structures, the SCA algorithm is more computationally efficient.

Next, we investigate the spectral efficiency versus SNR with different numbers of closed switches. In Fig. <ref>, compared to the communication-only precoding design (η = 1), the spectral efficiency of the ISAC precoding design (η = 0.6) is reduced by approximately 2.5 bits/s/Hz at an SNR of -30 dB. When N_c = 16, the DAoSA structure becomes the FC structure, and the proposed VEC ISAC hybrid precoding algorithm achieves near-optimal performance over the whole SNR range. With fewer closed switches, fewer phase shifters are used, which causes some performance loss while improving energy efficiency.

§.§ Transmit Beampattern

We illustrate the transmit beampatterns of the designed hybrid precoders in Fig. <ref> and Fig. <ref> for different weighting factors of the ISAC precoding design and for beam scanning over sequential time slots. As shown in Fig. <ref>, η = 0 corresponds to the sensing-only precoder 𝐅_s, q. In this case, both the proposed VEC and SCA algorithms can realize the desired beampattern, which is generated from the DFT sensing codebook, in the FC (N_c = 16) and AoSA (N_c = 4) architectures. When η becomes 0.5, the beamforming gain toward the sensing direction is slightly reduced, while several communication sub-beams are formed and point to the angles of the communication paths. In the case of η = 1, the communication-only precoding design does not generate a beam toward the sensing direction and concentrates all beams toward the communication receiver. In addition, the transmit beampattern of the FC structure is more similar to that of fully digital precoding than that of the AoSA structure.

In Fig. <ref>, it is shown that during a frame duration, the designed THz ISAC transmit signal generates sweeping beams over different time slots to scan possible targets in the surrounding environment, as well as stable beams toward the communication user to enable ultra-fast data transmission. We observe that the transmit beamforming gain toward the sensing direction reaches approximately 20 dBi as the beam angle varies, while the communication beams remain similar across time slots.

Complexity Analysis: We denote N_iter as the number of iterations of the alternating minimization in the VEC algorithm for each time slot. The overall computational complexity of the VEC-based ISAC hybrid precoding algorithm is given by 𝒪(Q N_iter N_t^2). Since the SCA ISAC hybrid precoding algorithm does not require the alternating minimization for each time slot, it reduces the computational complexity to 𝒪(N_iter N_t^2) compared with the VEC algorithm.

§.§ Sensing Accuracy

We further investigate the effectiveness of the proposed sensing algorithms with the DAoSA hybrid beamforming architecture. In Fig. <ref>, a number of sensing targets are randomly distributed between -90^∘ and 90^∘. We conduct beam scanning using the proposed hybrid precoding algorithms in Sec. <ref> and then plot the normalized range profile of the back-reflected sensing signal obtained with the proposed estimation algorithms in Sec. <ref>. At the qth time slot, we estimate the parameters of the target within the sensing angular window Ω_q.
With the time-frequency-space transmit design, we realize entire-space multi-target sensing, even though narrow directional beams are used in the THz band. Moreover, we evaluate the sensing accuracy of the angle, range, and velocity estimation with the proposed sensing algorithms. In Fig. <ref>, the target parameters are set to an azimuth angle of 70^∘, a distance of 15 m, and a velocity of 20 m/s. The waveform parameters are M = 64 and Δ f = 3.84 MHz. The number of closed switches is 4 at both the transmitter and the sensing receiver. As the sensing SNR increases, the sensing accuracy improves. Specifically, we observe that the angle, range, and velocity estimation can achieve centi-degree-level, millimeter-level, and decimeter-per-second-level accuracy, respectively. In addition, by decreasing the weighting factor η from 0.6 to 0.4, the sensing accuracy is improved, since more power is allocated to the sensing beam.

Complexity Analysis: The computational complexity of the EVD in (<ref>) is 𝒪((N_RF^r)^3). Since N_RF^r is much smaller than N_r, the overall computational complexity of W-DAoSA-MUSIC mainly depends on the matrix-vector multiplication in (<ref>), namely, 𝒪(N_RF^r N_r). The computational complexity of the S-DFT-GSS algorithm is 𝒪(N_RF^r M N log (MN)) in the first phase and 𝒪(N_gss N_RF^r M N) in the second phase, where N_gss denotes the number of golden-section-search iterations.

§.§ ISI and ICI Effects on Sensing Parameter Estimation

Finally, we study the ISI and ICI effects on sensing parameter estimation for THz ISAC systems. The subcarrier number is set to 1024. The considered scenario contains 3 targets with ranges of (10, 20, 30) m and effective SNRs of (-10, -15, -20) dB considering the beamforming gain. In Fig. <ref>, we compare the ICI-unaware and ICI-tackled estimation algorithms under two cases, i.e., sensing channels with weak and strong ICI effects, respectively. As shown in Fig. <ref>(a), the velocity of the targets is set to 5 m/s, which corresponds to a low-mobility scenario. In this case, both the ICI-unaware and ICI-tackled sensing algorithms yield similar estimation results and can accurately estimate the parameters of the 3 targets. Nevertheless, when the target velocity increases to 50 m/s in Fig. <ref>(b), the ICI effects under ICI-unaware estimation increase the side-lobe levels of the target with the strongest power, which may mask weak targets or cause large errors in the parameters of the other two targets. The distance of the target at 30 m is estimated as 29.4 m, and the target at 20 m cannot be detected successfully due to the ambiguity caused by the side lobes. In contrast, the proposed ICI-tackled sensing algorithm overcomes this problem and still accurately estimates all three targets.

Next, we consider the ISI effects on THz ISAC systems in Fig. <ref>. The scenario contains 2 targets with ranges of (10, 45) m, the same velocity v = 5 m/s, and effective SNRs of (-10, -10) dB considering the beamforming gain. As shown in Fig. <ref>(a), when the subcarrier spacing is 480 kHz, the CP-limited maximum sensing distance is 78 m, which is longer than the target ranges. In this case, there is no ISI effect, and accurate range estimates are obtained with the ISI-unaware sensing algorithm. When the delay spread of the THz communication channel decreases, we can increase the subcarrier spacing, and the CP duration becomes shorter, which reduces the CP-limited sensing distance. In Fig.
<ref>(b), the subcarrier spacing is increased to 3.84 MHz, and the CP-limited sensing distance is 9.8 m, which is shorter than the target ranges. Thus, ISI effects arise in the received sensing signal. In the normalized range profile obtained with the ISI-unaware sensing algorithm, the range of the second target is estimated as 49 m, while the ground truth is 45 m. By comparison, the ISI-tackled sensing algorithm still performs well and is robust against the ISI effect.

§ CONCLUSION

In this paper, we have proposed a THz ISAC system framework, including the time-frequency-space transmit design with the DAoSA hybrid beamforming architecture and OFDM waveform, as well as sensing algorithms for angle, range, and velocity estimation. We have proposed two ISAC hybrid precoding algorithms, i.e., the near-optimal VEC method and the low-complexity SCA approach. Meanwhile, for the ISI- and ICI-free case, we have proposed the W-DAoSA-MUSIC angle estimation algorithm and the S-DFT-GSS range and velocity estimation method. Furthermore, when ISI and ICI effects arise in target estimation in the THz band, we have developed the ISI- and ICI-tackled sensing algorithm to overcome the CP limitation and the high-mobility estimation problem. Extensive simulation results indicate that the proposed VEC ISAC hybrid precoding algorithm achieves performance close to fully digital precoding and outperforms existing methods, while the developed SCA algorithm reduces the computational complexity by removing the per-time-slot alternating minimization. Meanwhile, with the proposed estimation algorithms, centi-degree-level angle estimation, millimeter-level range estimation, and decimeter-per-second-level velocity estimation can be realized in THz ISAC systems.